JVRC Tutorial: Walking Experiences in Virtual Worlds
G. Bruder, B. Mohler, and G. Cirio. Course Notes of the Joint Virtual Reality Conference of EuroVR - EGVE - VEC, ACM, 2010 (accepted).
Active exploration enables us humans to construct a rich and coherent percept of our environment. By far the most natural way to move through the real world is locomotion such as walking or running. The same should also hold for computer-generated three-dimensional environments: preserving such an active and dynamic ability to navigate through large-scale virtual scenes is of great interest for many 3D applications that demand locomotion, such as tourism, architecture, or interactive entertainment. However, it is still mostly impossible today to freely walk through computer-generated environments in order to actively explore them; the primary reason is the scientific and technological underdevelopment in this sector.

While moving in the real world, sensory information such as vestibular, proprioceptive, and efference copy signals, as well as visual information, creates consistent multi-sensory cues that indicate one's own motion, i.e., acceleration, speed, and direction of travel. Computer graphics environments were initially restricted to visual displays combined with interaction devices, e.g., joystick or mouse, that provide (often unnatural) inputs to generate self-motion. Nowadays, more and more interaction devices, e.g., Nintendo's Wii or Sony's EyeToy, enable intuitive and natural interaction. In this context, many research groups are investigating natural, multimodal methods of generating self-motion in virtual worlds based on this consumer hardware.

An obvious approach is to transfer the user's tracked head movements to changes of the camera in the virtual world by means of a one-to-one mapping: a one-meter movement in the real world is mapped to a one-meter movement of the virtual camera in the corresponding direction in the virtual environment (VE). This technique has the drawback that the user's movements are restricted by the limited range of the tracking sensors, e.g., optical cameras, and a rather small workspace in the real world.
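The one-to-one mapping described above can be sketched in a few lines. This is an illustrative sketch, not code from the course notes; the function name and the convention that positions are 3D (x, y, z) tuples in meters, expressed in a shared tracking/world frame, are assumptions for the example.

```python
def one_to_one_update(camera_pos, prev_head_pos, curr_head_pos):
    """Apply the user's real-world head displacement directly to the
    virtual camera: a 1 m real movement yields a 1 m virtual movement
    in the corresponding direction (1:1 mapping).

    All arguments are (x, y, z) tuples in meters; positions are assumed
    to share one coordinate frame (an assumption for this sketch)."""
    # Per-axis displacement of the tracked head since the last frame,
    # added unscaled to the current virtual camera position.
    return tuple(c + (curr - prev)
                 for c, prev, curr in zip(camera_pos, prev_head_pos, curr_head_pos))

# The user steps 1 m along +x in the tracked space, so the virtual
# camera also moves 1 m along +x.
cam = one_to_one_update((0.0, 1.7, 0.0), (2.0, 1.7, 3.0), (3.0, 1.7, 3.0))
# cam is now (1.0, 1.7, 0.0)
```

Because the mapping is the identity, the reachable virtual area is exactly the tracked real-world workspace, which is the limitation the abstract points out.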
The size of the virtual world often differs from the size of the tracked space, so a straightforward implementation of omni-directional and unlimited walking is not possible. Concepts for virtual locomotion methods are therefore needed that enable walking over large distances in the virtual world while the user remains within a relatively small space in the real world. In this tutorial we present an overview of the development of locomotion interfaces for computer-generated virtual environments, ranging from desktop-based camera manipulations that simulate walking, through different walking metaphors for virtual reality (VR)-based environments, to state-of-the-art hardware-based solutions that enable omni-directional and unlimited real walking through virtual worlds.
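One simple family of such locomotion methods scales the user's real displacement by a translation gain before applying it to the camera, so that a small tracked space covers a larger virtual distance. The sketch below illustrates that idea only; the function name and the `gain` parameter are assumptions for this example, and the course notes cover many further techniques beyond plain scaling.

```python
def scaled_update(camera_pos, prev_head_pos, curr_head_pos, gain=2.0):
    """Map a real head displacement to a virtual camera displacement
    scaled by a translation gain (hypothetical parameter for this
    sketch). gain = 1.0 reduces to the one-to-one mapping; gain > 1.0
    lets the user cover more virtual ground than real ground.

    All positions are (x, y, z) tuples in meters in a shared frame."""
    return tuple(c + gain * (curr - prev)
                 for c, prev, curr in zip(camera_pos, prev_head_pos, curr_head_pos))

# A 0.5 m real step along +x becomes a 1.0 m virtual step with gain 2.
cam = scaled_update((0.0, 0.0, 0.0), (0.0, 0.0, 0.0), (0.5, 0.0, 0.0), gain=2.0)
# cam is now (1.0, 0.0, 0.0)
```

In practice such gains are kept close to 1.0 so the mismatch between visual and body-based motion cues stays tolerable; the hardware-based solutions discussed in the tutorial remove the need for scaling altogether.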