Active exploration enables us humans to construct a rich and coherent percept of our environment. By far the most natural way to move through the real world is via locomotion such as walking or running. The same should hold for environments created with virtual reality (VR) technology. Preserving this active and dynamic ability to navigate through large-scale immersive virtual environments (VEs) is of great interest for many 3D applications that demand locomotion, such as tourism, architecture, or interactive entertainment. However, freely walking through VEs in order to actively explore them is still largely impossible today, mainly because the necessary tracking and display technology is still underdeveloped. While moving in the real world, sensory information such as vestibular, proprioceptive, and efference copy signals, together with visual information, creates consistent multi-sensory cues that indicate one's own motion, i.e., acceleration, speed, and direction of travel. VEs were initially restricted to visual displays combined with interaction devices, e.g., joystick or mouse, that provide (often unnatural) inputs to generate self-motion. However, more and more research groups are investigating natural, multimodal methods of generating such self-motion in virtual reality. An obvious approach is to transfer the user's tracked head movements to changes of the camera in the virtual world by means of a one-to-one mapping: a one meter movement in the real world is mapped to a one meter movement of the virtual camera in the corresponding direction in the VE. This technique has the drawback that the user's movements are restricted by the limited range of the tracking sensors and the rather small workspace in the real world. The size of the virtual world often differs from the size of the tracked laboratory space, so that a straightforward implementation of omni-directional and unlimited walking is not possible.
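The one-to-one mapping described above can be sketched in a few lines. The following is a minimal illustration, not taken from any specific VR SDK: the function name, the coordinate convention (meters, 3D tuples), and the optional translation gain are all assumptions. A gain of 1.0 yields the one-to-one mapping; a gain above 1.0 scales real movement up, which is one simple way to cover a virtual space larger than the tracked workspace.

```python
def map_head_to_camera(real_pos, virtual_origin, gain=1.0):
    """Map a tracked head position (meters, real-world coordinates)
    to a virtual camera position.

    real_pos       -- head position relative to the tracking origin
    virtual_origin -- where the tracking origin lies in the VE
    gain           -- translation gain; 1.0 is the one-to-one mapping
                      discussed in the text (names are illustrative)
    """
    return tuple(o + gain * p for o, p in zip(virtual_origin, real_pos))


# Walking 1 m along x in the lab moves the camera 1 m along x in the VE:
cam = map_head_to_camera((1.0, 0.0, 0.0), (10.0, 0.0, 5.0))
print(cam)  # (11.0, 0.0, 5.0)
```

The drawback noted above is visible here: with gain=1.0 the reachable virtual area is exactly the tracked area, so a larger virtual world requires either a gain above one (which distorts self-motion cues) or a dedicated locomotion technique.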
Thus, concepts for virtual locomotion methods are needed that enable walking over large distances in the virtual world while remaining within a relatively small space in the real world. In this tutorial we will present an overview of the development of locomotion interfaces for VEs, ranging from desktop-based camera manipulations that simulate walking, through different walking metaphors for VR-based environments, to state-of-the-art hardware-based solutions that enable omni-directional and unlimited real walking through virtual worlds.