Towards Object Prediction based on Hand Postures for Reach to Grasp Interaction
ACM CHI Workshop: Touching the 3rd Dimension of CHI: Touching and Designing 3D User Interfaces, pages 99-106. (2012)

Recently, traditional multi-touch surfaces have been extended with stereoscopic displays and 3D tracking technology. While reaching and pointing tasks have a long tradition in human-computer interaction (HCI), the hand pre-shaping that usually accompanies them has rarely been considered. The reach-to-grasp task has been widely investigated by neuropsychological and robotic research groups over the last few decades. We believe that subtle grasping hand postures, in combination with stereoscopic multi-touch displays, have the potential to improve multi-touch 3D user interfaces. We present a study that aims to identify whether the intended object can be predicted in advance, relying only on detection of the hand posture.
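The core idea of the study — predicting the intended target object from a mid-reach hand posture — could be prototyped as a simple nearest-prototype classifier. The sketch below is purely illustrative and is not the method from the paper: the feature choice (fingertip aperture and wrist pitch), the candidate objects, and all numeric values are assumptions.

```python
import math

# Hypothetical per-object posture prototypes: mean fingertip aperture (cm)
# and wrist pitch (degrees) sampled mid-reach. Values are illustrative,
# not taken from the paper.
PROTOTYPES = {
    "pen":  (2.0, 10.0),
    "mug":  (8.0, 25.0),
    "ball": (6.0, 40.0),
}

def predict_object(aperture_cm, wrist_pitch_deg):
    """Return the candidate object whose posture prototype is nearest
    (Euclidean distance) to the observed mid-reach hand posture."""
    def dist(proto):
        a, p = proto
        return math.hypot(aperture_cm - a, wrist_pitch_deg - p)
    return min(PROTOTYPES, key=lambda obj: dist(PROTOTYPES[obj]))
```

A real system would, of course, use richer posture features (e.g. full finger-joint angles from a 3D hand tracker) and a trained classifier, but the prediction step has this same shape: map an observed pre-shaping posture to the closest known grasp pattern.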