Inproceedings

Visual Identity from Egocentric Camera Images for Head-Mounted Display Environments

, , , and .
Proceedings of the Virtual Reality International Conference, pages 289--290. IEEE Press, (2009)

Abstract

A number of researchers have reported that a fully-articulated visual representation of oneself in an immersive virtual environment (IVE) has considerable impact on social interaction and the subjective sense of presence in the virtual world. Therefore, many approaches address this challenge and incorporate a virtual model of the user’s body in the VE. Usually, a fully-articulated visual identity, or so-called “virtual body”, is manipulated according to user motions, which are defined by feature points detected by a tracking system. Therefore, markers have to be attached to certain feature points, as done, for instance, with full-body motion capture suits which have to be worn by the user. Such instrumentation is unsuitable in scenarios which involve multiple persons simultaneously or in which participants change frequently. Furthermore, individual characteristics such as skin pigmentation, hairiness, or clothing are not considered by this procedure, where the tracked data is always mapped to the same invariant 3D model. In this paper we present a software-based approach that allows a realistic visual identity of oneself to be incorporated in the VE and that can be integrated easily into existing hardware setups. In our setup we focus on the visual representation of a user’s arms and hands. The idea is to make use of images captured by cameras that are attached to video-see-through head-mounted displays. These egocentric frames can be segmented into foreground, showing parts of the human body, i.e., the user’s hands, and background. The extremities can then be overlaid onto the user’s current view of the virtual world, and thus a high-fidelity virtual body can be visualized.
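The paper itself contains no code, but the pipeline described in the abstract (segment each egocentric camera frame into body foreground and background, then composite the foreground over the rendered virtual view) can be sketched roughly as follows. This is a minimal illustration only: it assumes an OpenCV-based skin-color segmentation in the YCrCb color space, and the function names (`segment_hands`, `composite_virtual_body`), thresholds, and file names are hypothetical, not taken from the authors' implementation.

```python
# Hypothetical sketch of the described approach: segment the user's hands/arms
# in an egocentric camera frame and overlay them on the rendered virtual view.
# Skin-color thresholding in YCrCb is an assumed segmentation method; the paper
# does not specify how the foreground/background segmentation is performed.
import cv2
import numpy as np


def segment_hands(camera_frame_bgr):
    """Return a binary foreground mask for skin-colored regions (hands/arms)."""
    ycrcb = cv2.cvtColor(camera_frame_bgr, cv2.COLOR_BGR2YCrCb)
    # Commonly used skin-tone bounds in YCrCb; real use would need per-user calibration.
    lower = np.array([0, 133, 77], dtype=np.uint8)
    upper = np.array([255, 173, 127], dtype=np.uint8)
    mask = cv2.inRange(ycrcb, lower, upper)
    # Clean up the mask with morphological opening and closing.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask


def composite_virtual_body(virtual_view_bgr, camera_frame_bgr, mask):
    """Overlay the segmented body parts onto the rendered virtual view."""
    mask_3c = cv2.merge([mask, mask, mask])
    foreground = cv2.bitwise_and(camera_frame_bgr, mask_3c)
    background = cv2.bitwise_and(virtual_view_bgr, cv2.bitwise_not(mask_3c))
    return cv2.add(foreground, background)


if __name__ == "__main__":
    # Placeholder file names; in an HMD setup these would be a live frame from the
    # head-mounted camera and the per-eye rendering of the virtual environment.
    camera_frame = cv2.imread("egocentric_frame.png")
    virtual_view = cv2.imread("virtual_view.png")
    hand_mask = segment_hands(camera_frame)
    cv2.imwrite("composited_view.png",
                composite_virtual_body(virtual_view, camera_frame, hand_mask))
```

In a real video-see-through setup this compositing would run per frame and per eye, with the virtual view rendered from the tracked head pose, so the overlaid hands stay registered with the user's viewpoint.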
