December 27, 2018
As part of our video series on virtual reality (VR), we recently spoke with the leaders of the MxR lab at the USC Institute for Creative Technologies and the Tech and Narrative lab at Pardee RAND Graduate School, both of which develop serious immersive education and training simulations.
Among those sharing their insights was the Tech and Narrative lab’s director, IEEE Member Todd Richmond, who highlighted some key distinctions between consumer VR devices and those used to prepare troops and others for very specific real-world scenarios.
“Fundamentally, for so-called serious applications and serious games, there’s a ground truth that you’re held accountable to. And on the entertainment side, you don’t have that; in fact, there’s a term for it – game physics – that you’re not held to that standard of Newtonian physics,” says Richmond.
Being removed from physics is often what we consumers crave – a sense of departure from the real world, a sense that anything is possible. And in that pursuit, we’re also increasingly adamant that the experiences look as fluid and polished as possible.
For Richmond and his team, that flair isn’t required to generate a meaningful program. In fact, it may have the opposite effect: “We’re big fans of abstraction, because really, at the core, we’re trying to understand what are the things you really need to train someone on. And all of the shiny objects and the beautiful graphics could actually get in the way.”
While researchers on the serious-applications side refine for minimalism and effectiveness, consumer-facing companies are pushing the limits of VR graphics to create desirable new devices. Take one impressive example covered by IEEE Future Directions: a headset with a double display that creates a sense of distance and focus.
By pairing two screens, one showing a wide-angle view at low resolution and the other a smaller, high-resolution image, the headset lets the brain fuse the two pictures much as it normally fuses peripheral and foveal vision, resulting in a far more realistic experience. Advances like this could feasibly find their way into serious applications and games in due time.
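To make the dual-display idea concrete, here is a toy software sketch of the same principle, often called foveated compositing: a low-resolution wide view is upsampled and a sharp high-resolution inset is placed where the viewer is looking. This is purely illustrative; the function names, the 4:1 resolution ratio, and the nearest-neighbor upsampling are all assumptions for the sketch, not details of the actual headset's optics.

```python
import numpy as np

def composite_foveated(wide_low_res, fovea_high_res, gaze_center, scale=4):
    """Toy foveated composite (illustrative only, not a real headset pipeline).

    Upsamples the low-res wide view to full resolution, then pastes the
    high-res foveal inset centered on the gaze point.
    """
    # Nearest-neighbor upsample of the wide, blurry peripheral view.
    full = np.repeat(np.repeat(wide_low_res, scale, axis=0), scale, axis=1)
    # Paste the sharp inset where the viewer is looking.
    h, w = fovea_high_res.shape[:2]
    y, x = gaze_center
    y0, x0 = y - h // 2, x - w // 2
    full[y0:y0 + h, x0:x0 + w] = fovea_high_res
    return full

# Toy frame: a dim 16x16 "wide" view upsampled to 64x64, with a bright,
# sharp 16x16 inset at the center of gaze.
wide = np.full((16, 16), 0.2)
fovea = np.full((16, 16), 0.9)
frame = composite_foveated(wide, fovea, gaze_center=(32, 32))
```

The point of the sketch is the division of labor: most of the frame is cheap and coarse, and full detail is spent only in the small region the eye can actually resolve, which is what lets the brain process the merged image as it normally would.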
Further research is being done on focal distance and the optimal brightness of virtual objects when it comes to making near-field depths feel accurate and believable in VR. There’s also new work on controlling VR environments exclusively using the brain, although the potential impact of this on real-world training is a bit less clear.
Richmond sums it up well: “We have hundreds, if not thousands, of years of figuring out how to represent image and represent story within a bounded screen environment. VR and AR essentially destroy those boundaries.” By removing the reference point that screens provided, we’re in a realm that’s quite different from TV and film, and it’ll take more time to figure out which technologies will allow us to get the most out of VR for serious and entertainment applications alike.