Saturday, July 27, 2013

What is Geometric Distortion (and Why Should You Care)?

What is Geometric Distortion?

Geometric distortion is a common and important type of optical distortion that occurs in VR goggles as well as in other optical systems. In this post we will discuss the types of geometric distortion and ways to measure the distortion. There are additional types of optical distortions, such as chromatic aberration, and we will discuss some of them in future posts.

Geometric distortion results in straight lines not being seen as straight lines when viewed through the goggle optics.

What are common types of geometric distortion?

The two common types of distortion are barrel distortion and pincushion distortion. These are shown in the figure below. The grid on the left is the original image; next to it are pincushion distortion and barrel distortion.
Source: Wikipedia

A barrel distortion is one where the perceived location of a point in space (e.g. the intersection of two grid lines) is farther from the center than it really is. A pincushion distortion is one where the perceived location of a point in space is closer to the center than it really is. Both of these distortions are often radial, meaning that the amount of distortion is a function of how far a point is from the optical axis (e.g. the center of the lens system). The reason distortions are often radial is that many optical systems have radial symmetry.

Geometric distortion and VR Goggles

Geometric distortion is inherent to lens design. Every lens or eyepiece has some geometric distortion, though it is sometimes not large enough to be noticeable. When designing an eyepiece for a VR goggle, some maximum allowable geometric distortion is often a design goal. Because VR eyepieces need to balance many other requirements - minimal weight, image clarity, large eye relief (to allow using goggles while wearing glasses), large eye box (to accommodate left/right/up/down movements relative to the optimal eye position) - distortion is just one of many parameters that need to be simultaneously optimized.
Photo of a test grid through goggle optics. Picture taken using iPhone camera
Why should you care?

Geometric distortion is important for several reasons:
  • If left uncorrected, it changes the perception of objects in the virtual image. Straight lines appear curved. Lengths and areas are distorted.
  • In a binocular (two-eyed) system, there is an area of visual overlap between the two eyes, called binocular overlap. If an object is displayed to both eyes in this area, and the distortion in one eye is different from the other (for instance, because the object's distance from the center is different in each eye), a blurry image will often appear.
  • Objects of constant size may appear to change size as they move through the visual field.

How is distortion measured?

Distortion is reported in percentage units. If a pixel is placed at a distance of 100 pixels (or mm, degrees, inches, or whichever unit you prefer) from the center and appears as if it is at a distance of 110, the distortion at that particular point is (110-100)/100 = 10%.

During the process of optical design, distortion graphs are commonly viewed during the iterations of the design. For instance, consider the distortion graph below:

Distortion graph. Source: SPIE
In a perfect lens, the "x" marks should reside right on the intersection of the grid lines. In this particular lens, that is quite far from being the case.

Distortion can also be measured by showing a known target on the screen, capturing how this target appears through the optics and then using specialized software programs to determine the distortion graph. One instance where this is done is during the calibration of a multi-projector wall.
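As a minimal sketch of that capture-and-measure approach, consider Python with OpenCV and a checkerboard target (the file name and pattern size here are illustrative assumptions, not a specific measurement workflow):

import cv2
import numpy as np

# Photo of a known checkerboard target, taken through the goggle optics
img = cv2.imread("grid_through_optics.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Locate the inner corners of the checkerboard (a 9 x 6 pattern here)
found, corners = cv2.findChessboardCorners(gray, (9, 6))

if found:
    h, w = gray.shape
    center = np.array([w / 2.0, h / 2.0])
    # Distance of each detected grid intersection from the image center;
    # paired with the corresponding distances on the undistorted target,
    # these become the data for the curve fit described below.
    R_observed = np.linalg.norm(corners.reshape(-1, 2) - center, axis=1)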

Many distortion functions can be represented as odd-degree polynomials, where 5th or 7th degree is typically sufficiently precise. In formulaic terms:
R = a + b·r + c·r³ + d·r⁵ + e·r⁷

where "r" is the original distance from the center of the image, "a","b","c","d" and "e" are constants and "R" is the apparent distance after the distortion introduced by the optical system. "a" is usually 0.

With any of the above techniques, the constant coefficients can be determined using curve-fitting calculations.
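For instance, SciPy's off-the-shelf least-squares routine can fit the coefficients. A minimal sketch follows; the sample measurements are invented for illustration, and in practice they would come from a measurement such as the captured grid above:

import numpy as np
from scipy.optimize import curve_fit

def distortion_model(r, b, c, d, e):
    # "a" is fixed at 0, as is typical
    return b * r + c * r**3 + d * r**5 + e * r**7

# Hypothetical measurements: radii on the undistorted target and the
# corresponding radii seen through the optics, both normalized to the
# maximum radius so the higher-order terms stay numerically well-scaled
r_true = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 1.0])
R_seen = np.array([0.101, 0.204, 0.415, 0.636, 0.869, 1.110])

coeffs, _ = curve_fit(distortion_model, r_true, R_seen)
print(coeffs)  # fitted values of b, c, d and e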

The above also serves as the key to fixing distortion. If it is desired to have a pixel appear to the user at a known distance "R" from the center of the screen, one can solve for "r" above and determine where to put that pixel. For instance, if a system has a constant 10% radial distortion as in the example above, a pixel placed at distance 100 appears as if it is at distance 110, whereas a pixel placed at a distance of approximately 91 pixels from the center appears as if it is at distance 100.
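Continuing the sketch above, solving for "r" is a one-dimensional root-finding problem, assuming the distortion function increases monotonically over the radii of interest:

from scipy.optimize import brentq

def undistort(R_target, coeffs, r_max=1.0):
    # Find the radius r at which to draw a pixel so that, after the
    # optics distort it, it appears at radius R_target. brentq needs a
    # sign change over [0, r_max], which holds when the model is
    # monotonically increasing on that interval.
    return brentq(lambda r: distortion_model(r, *coeffs) - R_target,
                  0.0, r_max)

# With a constant 10% distortion (R = 1.1 * r), a pixel meant to appear
# at radius 100 should be drawn at roughly 100 / 1.1, i.e. about 90.9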

The fact that most distortion functions are radial and polynomial also allows for empirical determination. For instance, Sensics has a software program that allows the user to change the coefficients of the polynomials while looking at a simulated grid through an optical system. When the coefficients change, the grid changes, and this can be done interactively until an approximate correction function for the distortion is discovered.

What's next?

In the next post, we will cover several ways to fix or overcome geometric distortions.



For additional VR tutorials on this blog, click here
Expert interviews and tutorials can also be found on the Sensics Insight page here

Saturday, July 20, 2013

Interview on Redirected Walking with Professor Eric Hodgson

Prof. Eric Hodgson
My previous post regarding redirected walking generated a good bit of interest, so I decided to dive deeper into the subject by interviewing Prof. Eric Hodgson of Miami University in Ohio, a true expert on the subject.

Eric, thank you for speaking with me. For those that do not know you, please tell us who you are and what you do.
I'm a psychologist and professor at Miami University (the original one in Ohio, not that *other* Miami). I split my time between the Psychology department -- where I use virtual environments to study spatial perception, memory, and navigation -- and the interdisciplinary Interactive Media Studies program, where I teach courses in 3D modeling, data visualization, and virtual reality development to students from all across the university. I help oversee two sister facilities at Miami, the HIVE (Huge Immersive Virtual Environment) and the SIVC (Smale Interactive Visualization Center). The HIVE is an HMD-based facility with a 1,100-square-meter tracking area. The SIVC houses a 4-walled CAVE along with several other 3D projection systems, immersive desktops, development labs, and several motion-capture systems. The HIVE has been funded mostly by the National Science Foundation, the Army Research Office, and the Ohio Board of Regents. The Smale Center was started with a $1.75m gift from the late John Smale, a former CEO of Procter & Gamble, a company that uses CAVEs and other visualization systems in its R&D cycle.
You are the director of the Smale Interactive Visualization Center. What kind of work is being performed at the Center?
It's a multi-disciplinary center, and a large part of my job is to enable students and faculty from across the university to leverage VR for their work, especially if they don't have the skillset to do it themselves. We also work with regional, national, and international industry partners on specific projects. The work can vary widely, which I find interesting and encouraging -- VR is becoming a very general-purpose tool rather than a niche for a few narrow fields. One of our first projects was building an immersive, 3D mandala for the Dalai Lama for his visit to campus. We've also done motion capture of proper violin-playing arm motion for the music department, developed medical training simulations for the nursing program, developed experiments to study postural sway with psychology, done interactive virtual walk-throughs of student-designed architectural projects, supported immersive game development, and done work on developing next-generation motion sensing devices and navigation interfaces. Not to mention a 3D visualization of 18th century poetry, which was a collaboration between the Center, the English department, Computer Science, and Graphic Design. I love my job. I also do a lot of tours, field trips, and workshops. When you have a CAVE, a zSpace, a pile of HMDs, and lots of other fun toys (um... I mean tools), you end up being a must-see stop on the campus tour.

A good portion of your research seems to be in the area of redirected walking. Can you explain, in layperson's terms, what redirected walking is?
In layman's terms, Redirected Walking is a way of getting people to walk in circles without realizing it, while it looks to them as if they are walking in a straight line. Virtual environments and game levels can be very big; tracking spaces in a lab are typically pretty small. Redirected walking lets immersed users double back into the same physical space while traveling through a much larger virtual space. There are other techniques that can come into play, such as magnifying or compressing turns, or stretching distances somewhat, but the basic techniques are all aimed at getting people to double back into open physical space so they can keep walking in the virtual environment. It's a bit like the original holodeck on Star Trek... users go into a small room, it turns into an alternate reality, and suddenly they can walk for miles without hitting the walls.
What made you become interested in this area?

Necessity, mostly. I'm a psychologist, studying human spatial cognition and navigation. My colleagues and I use a lot of virtual environments and motion tracking to do our research. VEs allow us to have complete control over the spaces people are navigating, and we can do cool things like moving landmarks, de-coupling visual and physical sensory information, and creating geometrically impossible spaces for people to navigate through. Our old lab was a 10m x 10m room, with a slightly smaller tracking area. As a result, we were stuck studying, essentially, room-sized navigation. There are a lot of interesting questions we could address, though, if we could let people navigate through larger spaces. So, we outfitted a gymnasium (and later a bigger gymnasium) with motion tracking that we called the HIVE, for Huge Immersive Virtual Environment. We built a completely wearable rendering system with battery-powered HMDs, and voilà... we could study, um, big-room-sized navigation. Since that still wasn't enough, we started exploring Redirected Walking as a way to study truly large-scale navigation in VEs with natural walking.
It seems that one of the keys to successful redirection is providing visual cues that are imperceptible. Can you give us an example of the type of cues and their magnitude?
Some of the recent work we've done uses a virtual grocery store, so I'll use that as an example. Let's say you're immersed, and trying to walk down an aisle to get to the milk cooler. I can rotate the entire store around an axis that's centered on your head, slowly, so that you'll end up veering in the direction I rotate the store (really we manipulate the virtual camera, but the end result is the same as rotating the store). The magnitude of the rotation scales up with movement speed in our algorithm, so if you walk faster -- and thus create more optic flow -- I can inject a little bit more course correction. The rotations tend to be on the order of 8 - 10 degrees per second. By comparison, when you turn and look around in an HMD, you can easily move a few hundred degrees per second. You could easily detect this kind of rotation if you were standing still, but while walking or turning there's enough optic flow, head bob, and jarring from foot impact that the adjustments get lost in all the movement. Our non-visual spatial senses (e.g., inertial sensing by the inner ear, kinesthetic senses from your legs, etc.) have just enough noise in them that the visuals still appear to match.
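To make these numbers concrete, here is a rough illustrative sketch of such speed-scaled rotation injection; the function names, gain and clamping values are assumptions for illustration, not Prof. Hodgson's actual algorithm:

def redirection_rate(walk_speed, base_rate=4.0, gain=4.0, max_rate=10.0):
    # Yaw rotation (degrees/second) to inject into the virtual camera.
    # The rate scales with walking speed so the extra rotation hides in
    # the user's own optic flow; the interview cites ~8-10 deg/s.
    return min(base_rate + gain * walk_speed, max_rate)

def update_world_yaw(yaw_deg, walk_speed, dt):
    # Each frame, rotate the virtual world slightly about a vertical axis
    # through the user's head (equivalently, counter-rotate the virtual
    # camera), steering the user's physical path while the perceived
    # virtual path stays straight
    return yaw_deg + redirection_rate(walk_speed) * dt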
Are all the cues visual, or are there auditory or other cues that can be used?
Right now the cues we use are primarily visual, and to a lesser extent auditory, but it's easy to add in other senses if you have the ability. Since 3D audio emanates from a particular location in the virtual world, not the physical world, rotating the visuals brings all of the audio sources along with it. An active haptics display could work the same way, or a scent generator. Redirected walking essentially diverts the virtual camera, so any multisensory display that makes calculations based on your position in the virtual world will still work as expected. Adding more sensory feedback just reinforces what your eyes are seeing and should strengthen the effect.
What are practical applications of redirected walking? Is there a case study of someone using redirected walking outside an academic environment?
A gymnasium is about the smallest space you can work with and still curve people around without them noticing, so this is never going to run in your living room. We do have a portable, wearable version of our system with accurate foot-based position tracking that can be taken out of the lab and used in, say, a park or a soccer field. It's a bit tricky, though, since the user is essentially running around the great outdoors blindfolded. If you're a liability lawyer for a VR goggle manufacturer, that's the kind of use case that gives you nightmares, but redirected walking could actually work in the gaming market with the right safety protocols. For example, we have safety mechanisms built into our own systems, which usually include a sighted escort and an automated warning system when users approach a pre-defined boundary. This could work in a controlled theme park or arcade-type setting, or with home users that use some common sense. I can also see this technique being useful in industry and military applications. For example, the portable backpack system could easily be used for mission rehearsal in some remote corner of the globe. A squad of soldiers could each wear their own simulation rig and have their own ad-hoc tracking area to move around in. Likewise, some industry simulations incorporate large spaces and can benefit from physical movement. One scenario that came up a few years ago related to training repair technicians for large oil refineries, which can cover a square mile or more. Standing in a CAVE and pushing the forward button on a joystick just doesn't give you the same experience as having to actually walk a thousand meters across the facility while carrying heavy equipment, and then making a mission-critical repair under time pressure. Redirected walking would increase the realism of the training simulation without requiring a mile-square tracking area. Finally, I can see this benefiting the K-12 education system. Doing a virtual field trip in the gym would be pretty cool, and a responsible teacher or two could be present to watch out for the kids' safety.
Is redirected walking applicable to augmented reality scenarios, or just to immersive virtual reality?
It really doesn't make sense with augmented reality, in which you want the real and virtual worlds to be as closely aligned as possible. With redirected walking, the relationship between the real and virtual diverges pretty quickly. If you're doing large-scale navigation in AR, such as overlaying underground geological formations at a drill site, you'll want to actually navigate across the corresponding real-world space. It could make sense in some AR game situations, but it would be hard to make any continual, subtle adjustments to the virtual graphics without making them move perceptibly relative to the real-world surroundings.
Is this technique applicable also to multi-person scenarios?
Definitely, and that's something we're actively exploring now. As long as you're redirecting people, and effectively steering where they go in the real world, there's no reason not to take several immersed people in the same tracking space and weave them in and around each other. Or, as I mentioned above with our portable system, if you can reliably contain people to a certain physical space with redirection, you can spread people out across a field and let everyone have their own little region while traveling through extremely large VEs. Adding multiple users does add some unexpected complexities, however. Under normal conditions, for example, when two immersed users come face to face in the VE, they would also be face to face in the physical world, and they could talk to each other normally, or reach out and touch each other, or share tools, etc. With redirected walking, those same users could be tens or hundreds of meters apart in the real world, requiring some sort of VOIP solution. By the same token, someone who is a mile away virtually might actually be very close to you, and you could hear them talking but not be able to see them, leading to an Uncanny Valley scenario.
How large or how small can a physical space be to implement successful redirected walking? Can this be used in a typical living room?
The HIVE is about 25m across in its narrowest dimension, which is about as small as you'd want to go. This is definitely not living-room material, which is where devices like the Omni will thrive instead. A lot of the literature recommends a space with a minimum radius of 30m+, which I think is about right. We have to stop people occasionally who are on a collision course with one of the lab's walls. A slightly larger space would let us catch and correct those trajectories automatically instead of manually stopping and redirecting the user. One thing to note is that the required tracking space interacts a lot with how much you turn up the redirection -- higher levels of steering constrain people to a smaller space, but they also become more noticeable. The type of VE and the user's task can also play a role. It seems like close-in environments like our virtual store make redirection more perceptible than open, visually ambiguous VEs like a virtual forest.
How immersive does an experience need to be for redirected walking to be successful?

High levels of immersion definitely help, but I'm not sure there's a certain threshold for success or failure here. Redirection relies on getting people to focus on their location in the virtual world while ignoring where they are in the room, and to innately accept their virtual movement as being accurate, even though it's not. Anytime you're in a decent HMD with 6-DOF tracking, the immersion level is going to be fairly high anyways, so this turns out to be a fairly easy task. As long as redirection is kept at reasonably low levels, it has been shown to work without being noticed, without increasing simulator sickness, and without warping people's spatial perception or mental map of the space.
Can you elaborate a bit on plans for future research in this area?
Right now the focus in our lab is on implementing multi-user redirection and on improving the steering algorithms we use. We're also looking at behavioral prediction and virtual environment structure to try and predict where people might go next, or where they can't go next. For example, if I know you're in a hallway and can't turn for the next 10m, I can let you walk parallel to a physical wall without fear that you'll turn suddenly and hit it. There's a lot of other research going on right now in other labs that explores the perceptual limits of the effect and alternative methods of redirecting people. For example, it's possible to use an effect called "change blindness" to essentially restructure any part of the environment that's out of a person's view. So, if I'm looking at something on a virtual desk, the door behind me might move from one wall to another, causing me to alter my course by 90 degrees when I move to a different area. There's also a lot of work that's been done on catching potential wall collisions and gracefully resetting the user without breaking immersion too much.
For those that want to learn more about redirected walking, what other material would you recommend?
I'd really recommend reading Sharif Razzaque's early work on the topic, much of which he published with Mary Whitton out of UNC Chapel Hill. (http://scholar.google.com/scholar?q=razzaque+redirected+walking&btnG=&hl=en&as_sdt=0%2C36)
I'd also recommend reading some of Frank Steinicke's recent work on the techniques and perceptible limits of redirection (http://img.uni-wuerzburg.de/personen/prof_dr_frank_steinicke/), or some of our lab's work comparing higher-level redirection strategies such as steering people towards a central point versus steering people onto an ideal orbit around the room (http://www.users.miamioh.edu/hodgsoep/publications.php).
Finally, there's a good book that just came out on Human Walking in Virtual Environments that contains several good chapters on redirection as well as a broader look at the challenges of navigating in VEs and the perceptual factors involved. (http://www.amazon.com/Human-Walking-Virtual-Environments-Applications/dp/1441984313).

Eric, thank you very much for speaking with me. I look forward to learning more about your research in the future.

Friday, July 12, 2013

Redirected walking can save you from running into your sofa

A man is lost in the forest. No compass. No map. No phone. No GPS. He decides to walk in a straight line until he reaches a road. He walks and walks and walks until he can walk no more. When his body is found and the path he took is analyzed, it turns out that he was not actually walking in a straight line, but going round and round in a big circle. Subtle visual cues - whether from the forest, the terrain or something else - fooled him into walking in a circle even though he intended to walk in a straight line.

There is no happy ending here, but this man did not die in vain. It turns out that this same concept - of subtle visual cues - can direct a person in a virtual environment to take a certain path instead of a path that could lead to a collision. This is referred to as redirected walking.

Imagine a gamer wearing a virtual reality goggle. The true promise of goggles is in their portability and freedom of motion. Yes, most goggle users today sit deskside near a computer, but many experiences would be so much better if the user could roam around a room, walk over, lean, pick up objects and so forth. But if a room has physical constraints such as a wall or a sofa, the person immersed in the goggle can collide with them in a way that would completely disrupt the experience, not to mention hurt his leg or the sofa.

I had the opportunity to speak this week with Eric Hodgson, director at the Smale Interactive Visualization Center at Miami University of Ohio. Dr Hodgson is one of the leading researchers working on various aspects of redirected walking.

We got to this topic when discussing occlusion (see my previous blog post). One advantage of goggles that are not fully occluded is that the wearer feels safer when walking around, because they can see the floor, some obstacles, as well as other people around them. The downside of partial occlusion is that it reduces the sense of immersion. Dr. Hodgson's work shows, amongst other things, that immersion does not have to be traded off against safety. He has subjects walking around in a gym or even outside on a football field, significantly immersed in an HMD. The visual stimuli presented in the HMD cause them to walk in a physical path that is different from what they perceive it to be.

Here is an image of a subject wearing an HMD with a computer on his back, fearlessly walking outside:

Courtesy of Dr. Eric Hodgson, Miami University of Ohio
The following graph is even more interesting:
Redirected walking - Courtesy of Dr. Eric Hodgson, Miami University of Ohio

The red line shows the actual physical path that a subject took. The blue line (dash-dot-dash) shows the visual path - the path that the subject thought he was taking inside the virtual world. As you can see, the subject ends up confined to a space that is relatively small compared with the actual virtual space.

Dr. Hodgson's research covers many aspects of this: what kinds of cues are imperceptible to the person yet cause her to change her path; how spatial memory is impacted by this process of redirected walking; and more.

Why is this useful? This concept is applicable to interactive games in several ways:
  • It allows experiencing a large virtual world in spite of being confined to a smaller physical space.
  • It helps avoid physical obstacles (e.g. the sofa)
  • It allows multiple people to be immersed in the same physical space without bumping into each other.
To read more about Dr. Hodgson's work, go to his publications page, and especially check out the 2013 Hodgson and Bachmann article.

Learn to master redirected walking, or find yourself stuck in the sofa.


Wednesday, July 10, 2013

To Occlude or not to Occlude?

A question came up on the Natalia Gameplay YouTube video a couple of days ago:

I have a question? since it doesn't cover the whole eye area can you be distracted by light and stuff coming through the sides. i love the ideal of a headset and cameras and stuff in the front of it to track hand movement but i just don't like the idea of there an opening on the sides ?
This brings up a nice opportunity to speak about occlusion (the blocking of light) in goggles. At Sensics, we have done it both ways: some products block pretty much all external light from coming into the goggle, making the user entirely focused on the image displayed inside, and some products allow some peripheral vision. For instance, two products that can be configured for identical resolution and field of view are:

piSight - not occluded
xSight - fully occluded
The xSight is based on a ski-goggle design with a mask that touches the face all around the edge of the goggle. The piSight, on the other hand, hangs the optics in front of the eyes using an over-the-head rail structure which is very comfortable (in spite of looking like a torture device).

What are the advantages of an occluded design (such as the xSight)?

  • Allows the user to completely focus on the displayed image
  • Increases display contrast by blocking outside light
  • Enhances the sense of immersion by blocking outside distractions

What are the advantages of a non-occluded design (such as the piSight)?

  • Better orientation in the physical space. The goggle allows peeking sideways, or looking down to see the floor or find a keyboard underneath the goggle. If the user of the goggles is expected to move around a room substantially, a non-occluded design will feel safer.
  • If coordination with additional people is needed, it is easier to see where these people are and to view their behavior and gestures. For instance, in infantry training applications, most goggles used are not occluded.
  • Easier access to the vicinity of the eyes if there is a need to adjust devices such as a built-in eye tracker.
  • Easier to wear glasses. Most often, the difficulty in wearing glasses with goggles is not so much the eye relief (the distance from the optics to the eyes) but rather the frame of the eyeglasses interfering with the enclosure of the goggles. A non-occluded design goes a long way toward alleviating this problem.
In some instances, we tried to have the best of both worlds: a non-occluded design with detachable blinders that allow increasing the occlusion when required.

In short - there is no right answer. Goggle design is about tradeoffs and the right choice depends on the requirements and applications. 

Friday, July 5, 2013

What we learned from focusing on 'Heavily Used HMDs'

Last month, my company set out to dive deeper into 'heavily used HMDs', wanting to explore what the failure modes of HMDs are and what to do about them. Here is what we did:

1. We launched a survey, asking thousands of users to tell us what they thought about HMD reliability, about the factors contributing to occasional breakdowns of HMDs, and about what to do about them. The response was excellent and we were able to get nearly 200 questionnaires filled out. The resultant report is now published and can be freely downloaded here; a sample graph is below.

"Which parts of the HMDs are failing?" from the Sensics 2013 report on HMD reliability


2. Understanding how important HMDs are to the work our customers perform, we decided to offer them extra peace of mind by extending our warranty to three years (with some limitations). Read the announcement here.

3. For those customers that operate in harsh training conditions, we launched a Mil-Spec HMD that is designed to withstand rugged conditions including shock, vibration, pouring rain, temperature extremes and more.

4. To demonstrate that this HMD was as good as advertised, we had some fun taking it into the shower and riding wildly with it. See the videos below:

"The Shower"



"The Ride"

What should we do next?