Below are the key slides and some speaker notes. Enjoy!
Above and beyond these on-board sensors, VR applications can now access sensors that are external to the HMD platform. For instance, most users carry a phone that has its own set of sensors, such as a GPS receiver. Some might wear a fitness tracker, or be in a room where a Kinect or some other camera can provide additional information. These sensors give application developers an opportunity to learn even more about what the user is doing.
I feel that getting a handle on the explosion of sensors requires a few things:
1. A way to abstract sensors, just like VRPN abstracted motion trackers.
2. A standardized way to discover which sensors are connected to the system.
3. An easy way to configure all these sensors, as well as store the configuration for quick retrieval.
4. A way to map the various sensor events into high-level application events. Just as you might remap the buttons on a gamepad, you should be able to decide what impact a particular gesture, for instance, has on the application.
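The four pieces above could be sketched roughly as follows. This is a hypothetical illustration, not a real API: the `Sensor`, `SensorRegistry`, and `EventMapper` names, and all of their methods, are my own illustrative inventions for how abstraction, discovery, configuration storage, and event mapping might fit together.

```python
# Illustrative sketch: a VRPN-style sensor abstraction with discovery,
# stored configuration, and high-level event mapping. All names here
# are hypothetical; no real library is implied.
import json
from typing import Callable, Dict, List, Optional

class Sensor:
    """Abstract sensor: a named device of some kind (gps, camera, imu, ...)."""
    def __init__(self, name: str, kind: str):
        self.name = name
        self.kind = kind

class SensorRegistry:
    """Discovers connected sensors and stores per-sensor configuration."""
    def __init__(self):
        self._sensors: List[Sensor] = []
        self._config: Dict[str, dict] = {}

    def register(self, sensor: Sensor):
        self._sensors.append(sensor)

    def discover(self, kind: Optional[str] = None) -> List[Sensor]:
        # Standardized discovery: list everything, or filter by kind.
        return [s for s in self._sensors if kind is None or s.kind == kind]

    def configure(self, name: str, settings: dict):
        self._config[name] = settings

    def save_config(self) -> str:
        # Serialize configuration for quick retrieval later (json.loads).
        return json.dumps(self._config)

class EventMapper:
    """Maps low-level sensor events to high-level application actions,
    the way a game lets you remap gamepad buttons."""
    def __init__(self):
        self._bindings: Dict[str, Callable] = {}

    def bind(self, event: str, action: Callable):
        self._bindings[event] = action

    def dispatch(self, event: str, payload):
        handler = self._bindings.get(event)
        if handler:
            return handler(payload)
```

An application would then register whatever the discovery layer finds (a phone GPS, a Kinect, a tracker), persist its settings once, and rebind gestures to actions without touching the sensor drivers themselves.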
But beyond this "plumbing", what is really needed is a way to figure out the context of the user: to turn data from various sensors into higher-level information. For instance, turning the motion data from two hands into the realization that the user is clapping, or determining that a user is sitting down, or is excited, happy, or exhausted.
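To make the clapping example concrete, here is a minimal heuristic sketch: declare a clap when the two tracked hand positions come close together while closing at high speed. The thresholds, sampling interval, and the idea of feeding in raw position tracks are all my own assumptions for illustration.

```python
# Hypothetical context-inference sketch: infer a "clap" from two streams
# of tracked hand positions. Thresholds are illustrative guesses.
import math

def distance(p, q):
    """Euclidean distance between two 3-D points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def detect_clap(left_track, right_track,
                dist_thresh=0.05,   # hands within 5 cm...
                speed_thresh=1.0,   # ...while closing faster than 1 m/s
                dt=0.01):           # assumed 100 Hz sampling
    """left_track / right_track: lists of (x, y, z) positions in meters,
    sampled every dt seconds. Returns True if a clap-like event occurs."""
    for i in range(1, len(left_track)):
        d = distance(left_track[i], right_track[i])
        prev_d = distance(left_track[i - 1], right_track[i - 1])
        closing_speed = (prev_d - d) / dt
        if d < dist_thresh and closing_speed > speed_thresh:
            return True
    return False
```

The same pattern, i.e. fusing several low-level streams and testing them against a model of a behavior, generalizes to posture ("sitting down") or, with physiological sensors, to affective states like excitement or exhaustion.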
We live in exciting times, with significant developments in display technologies, goggles, and sensors. I look forward to seeing what the future holds, and to making my own contribution to shaping it.