Sunday, August 17, 2014

The Sensory Explosion

At last week's SIGGRAPH conference, I had the pleasure of contributing a "Sensory Explosion" presentation on the "Sight, Sounds and Sensors" panel.

Below are the key slides and some speaker notes. Enjoy!


The panel featured several contributors.


Sensics has been doing VR for a long time. Historically, it has focused mostly on enterprise applications (government, academia, corporate), and it is now considering how best to leverage its technologies and know-how in larger markets such as medical devices and consumer VR.



Traditionally, head-mounted displays had three key components: a display (or multiple displays), optics that adapt the display to the eye, and most often an orientation sensor. Most of the effort focused on increasing resolution, improving field of view and designing better optics. The orientation sensor was necessary, but not the critical component.


Recently, we have seen the HMD evolve into a sensory platform. On top of the traditional core, new types of sensors are emerging: position trackers, eye trackers, cameras (for augmented reality and/or depth sensing), biometric sensors, haptic feedback, sensors that determine hand and finger positions in real time, and more. Increasingly, innovation is shifting to making these sensors deliver maximum performance, lightest weight (after all, they sit on the head) and utmost power efficiency (both for heat dissipation and for battery life in portable systems).



Above and beyond these on-board sensors, VR applications can now access sensors that are external to the HMD platform. For instance, most users carry a phone with its own set of sensors, such as a GPS. Some might wear a fitness tracker, or be in a room where a Kinect or some other camera can provide additional information. These sensors give application developers an opportunity to know even more about what the user is doing.



Integrating all these sensors can become complex rather quickly. Above is a block diagram of the SmartGoggles(tm) prototype that Sensics built a few years ago. These days, there is a much greater variety of sensors, so how can this complexity be managed?



I feel that getting a handle on the explosion of sensors requires a few things:
1. A way to abstract sensors, just like VRPN abstracted motion trackers.
2. A standardized way to discover which sensors are connected to the system.
3. An easy way to configure all these sensors, as well as to store the configuration for quick retrieval.
4. A way to map the various sensor events into high-level application events. Just as you might change the mapping of the buttons on a gamepad, you should be able to decide what impact a particular gesture, for instance, has on the application. A sketch of such a layer follows this list.
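To make this concrete, below is a minimal C++ sketch of what such an abstraction layer could look like. All names and interfaces here are hypothetical, invented for illustration; this is not an actual Sensics or VRPN API.

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

// A generic sensor report: a sensor path and a value vector, e.g.
// {x, y, z} for a position tracker or {gaze_x, gaze_y} for an eye tracker.
struct SensorEvent {
    std::string sensor;          // e.g. "hand/left/position" (illustrative path)
    std::vector<double> values;  // sensor-specific payload
};

// Every concrete sensor (position tracker, eye tracker, biometric sensor...)
// implements this interface, hiding its transport and wire format (point 1).
class Sensor {
public:
    virtual ~Sensor() = default;
    virtual std::string name() const = 0;
    virtual void poll(std::function<void(const SensorEvent&)> deliver) = 0;
};

// Discovery, configuration and event mapping, matching points 2-4 above.
class SensorHub {
public:
    // Point 2: sensors announce themselves to the hub.
    void add(Sensor* s) { sensors_.push_back(s); }

    // Point 4: map a low-level sensor path to a high-level application event.
    void mapEvent(const std::string& sensorName, const std::string& appEvent) {
        mapping_[sensorName] = appEvent;
    }

    // Poll every sensor once per frame and forward mapped events to the app.
    void update(std::function<void(const std::string&, const SensorEvent&)> onAppEvent) {
        for (Sensor* s : sensors_) {
            s->poll([&](const SensorEvent& e) {
                auto it = mapping_.find(e.sensor);
                if (it != mapping_.end()) onAppEvent(it->second, e);
            });
        }
    }

private:
    std::vector<Sensor*> sensors_;
    std::map<std::string, std::string> mapping_;  // point 3 would also persist this
};
```

An application would register each connected sensor with the hub, declare its mappings (say, "hand/left/position" to an aiming event) and call update() once per frame; swapping one tracker for another then requires no application changes.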

But beyond this "plumbing", what is really needed is a way to figure out the user's context: to turn data from various sensors into higher-level information. For instance, turn the motion data from two hands into the realization that the user is clapping, or determine that the user is sitting down, or is excited, happy or exhausted.
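As a toy example of such context inference, here is a sketch that turns the raw positions of two tracked hands into a "clap" event. The distance and speed thresholds are illustrative assumptions, not values from a real system.

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

double distance(const Vec3& a, const Vec3& b) {
    return std::sqrt((a.x - b.x) * (a.x - b.x) +
                     (a.y - b.y) * (a.y - b.y) +
                     (a.z - b.z) * (a.z - b.z));
}

// Report a clap when the hands, having been apart, close rapidly to
// near-contact. 'dt' is the time between samples, in seconds.
class ClapDetector {
public:
    bool update(const Vec3& left, const Vec3& right, double dt) {
        double d = distance(left, right);
        double closingSpeed = (lastDistance_ - d) / dt;  // m/s, positive when closing
        // Thresholds (5 cm, 1 m/s) are made up for illustration; the check on
        // lastDistance_ avoids re-triggering while the hands stay together.
        bool clap = (d < 0.05) && (closingSpeed > 1.0) && (lastDistance_ >= 0.05);
        lastDistance_ = d;
        return clap;
    }
private:
    double lastDistance_ = 1.0;  // start with the hands assumed apart
};
```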

We live in exciting times, with significant developments in display technologies, goggles and sensors. I look forward to seeing what the future holds, and to making my own contribution to shaping it.

Monday, August 4, 2014

Positional tracking: "Outside-in" vs. "Inside-out"

Optical positional tracking for goggles uses a camera (or cameras) and a known set of markers to determine the position of the camera relative to the markers. Positional tracking can be done using the visible spectrum but is more commonly done using infra-red markers and a camera that is sensitive to IR light.
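At its core, this is a perspective-n-point (PnP) problem: given the known 3D layout of the markers and the 2D positions where the camera detected them, solve for the pose of the camera relative to the markers. Below is a minimal sketch using OpenCV's solvePnP; the marker layout, detected pixel coordinates and camera intrinsics are made-up values for illustration.

```cpp
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

int main() {
    // Known marker positions in the tracked object's own frame (meters).
    std::vector<cv::Point3f> markers = {
        {0.00f, 0.00f, 0.0f}, {0.10f, 0.00f, 0.0f},
        {0.10f, 0.05f, 0.0f}, {0.00f, 0.05f, 0.0f}};

    // Pixel coordinates where a marker-detection step found those markers.
    std::vector<cv::Point2f> detections = {
        {320.f, 240.f}, {420.f, 238.f}, {421.f, 290.f}, {319.f, 292.f}};

    // Pinhole camera intrinsics: focal length and principal point, in pixels.
    cv::Mat K = (cv::Mat_<double>(3, 3) << 800, 0, 320,
                                           0, 800, 240,
                                           0,   0,   1);
    cv::Mat distCoeffs;  // empty: assume an undistorted image

    cv::Mat rvec, tvec;  // rotation (axis-angle) and translation
    cv::solvePnP(markers, detections, K, distCoeffs, rvec, tvec);
    // tvec/rvec now give the markers' pose relative to the camera; invert
    // the transform to get the camera's pose relative to the markers.
    return 0;
}
```

The same math applies to both configurations below; what changes is whether the camera or the markers ride on the goggles.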

There are two main options:

  • Inside-out tracking: where the camera is mounted on the goggles and the IR markers are placed in stationary locations (e.g. on the computer monitor, on the wall, etc.)
  • Outside-in tracking: where the camera is placed in a stationary location and the IR markers are placed on the goggles.
In both cases, the targets sometimes flash in a way that is synchronized with the camera. This reduces the targets' power consumption and helps reject tracking noise from IR sources that are not targets.
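One simple way to exploit this synchronization, sketched below, is frame differencing: capture one frame with the LEDs lit and one with them dark, then subtract. Steady IR sources such as sunlight or lamps cancel out, leaving mostly the markers. The threshold value is an illustrative assumption, and the frames are assumed to be 8-bit grayscale.

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// frameOn: captured with the LEDs lit; frameOff: captured with the LEDs dark.
cv::Mat isolateMarkers(const cv::Mat& frameOn, const cv::Mat& frameOff) {
    cv::Mat diff;
    // Saturating subtraction: constant IR background cancels, markers remain.
    cv::subtract(frameOn, frameOff, diff);
    cv::Mat markers;
    // Keep only strong differences (threshold of 40 is illustrative).
    cv::threshold(diff, markers, 40, 255, cv::THRESH_BINARY);
    return markers;
}
```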

Sensics dSight panoramic HMD with IR targets for "outside-in" tracking
How do these approaches compare?

  • Tracking volume: in both cases, at least some of the targets need to be visible to the camera. When the user rotates the head, an "inside-out" system needs targets that are physically far apart. If the targets are, for instance, placed on the bezel of a notebook PC, it is easy to see how head rotation could take them out of the camera's field of view. A wider camera lens could be used, but this would reduce tracking precision, as each camera pixel would now cover a greater physical space in the world. In an "outside-in" system, targets can be placed on most sides of the goggles, allowing reasonably large rotations while still keeping some targets visible to the camera. Advantage: outside-in
  • Tracking inside an entire room: if we want to allow mobility within a room, an "inside-out" system would require additional markers on the walls, whereas an "outside-in" system would require additional cameras. Both would require room calibration to make sure the targets and/or cameras are placed in known positions. Additional cameras also require additional processing power. Slight advantage: inside-out
  • Where is the data being processed? In "inside-out" tracking, the camera data is either processed on the goggles, or the camera is connected to a computer that is either carried by the user or stationary and tethered by a wire. In "outside-in" tracking, the data is processed on a computer that can remain stationary. Advantage: outside-in
  • Can this be used with a wireless goggle? If the goggle is not tethered to a computer, "inside-out" tracking requires that the data be either processed locally or that the camera signal be sent wirelessly to a base station. In contrast, an "outside-in" approach does not require sending camera data wirelessly; at most, a synchronization signal is sent to the IR LEDs to make sure they flash in sync with the camera. Advantage: outside-in
  • Ability to combine with an augmented reality system: sometimes the goggle will already have an on-board camera (or cameras) for augmented reality and/or 3D reconstruction. In that case, using the same camera for positional tracking may have cost advantages, if positional tracking can work with visible targets or if the camera already has IR sensitivity. Advantage: inside-out
Note: tracking accuracy is also an important comparison parameter, but this is more difficult to generically compare across both approaches. Very often, accurate tracking is achieved not just through the camera data but also by integrating ("sensor fusion") rotational data and linear acceleration data from on-board sensors. The sensors used and the quality of the sensor fusion algorithm would determine which approach is better.
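For illustration, here is a sketch of position-level sensor fusion along a single axis: dead-reckon from the high-rate accelerometer, and correct toward each low-rate optical fix as it arrives. A production tracker would typically use a Kalman filter; the blend gain below is an arbitrary illustrative value.

```cpp
// Single-axis complementary blend of inertial and optical position data.
struct FusedPosition {
    double pos = 0.0;  // meters
    double vel = 0.0;  // meters/second

    // Called at IMU rate (hundreds of Hz): integrate linear acceleration.
    // Drifts over time, so it must be corrected by the optical fixes below.
    void predict(double accel, double dt) {
        vel += accel * dt;
        pos += vel * dt;
    }

    // Called whenever the camera produces a position fix (tens of Hz):
    // pull the drifting inertial estimate toward the optical measurement.
    // The gain of 0.2 is an illustrative assumption.
    void correct(double opticalPos, double gain = 0.2) {
        pos += gain * (opticalPos - pos);
    }
};
```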

Bottom line: for most applications, an 'outside-in' approach would be better, and thus we expect to see a greater number of 'outside-in' solutions on the market.


What has been your experience?


For additional VR tutorials on this blog, click here
Expert interviews and tutorials can also be found on the Sensics Insight page here