Abstract
Recently, traditional multi-touch surfaces have been extended with
stereoscopic displays and 3D tracking technology. While
reaching and pointing tasks have a long tradition in
human-computer interaction (HCI), the hand pre-shaping
that usually accompanies them has rarely been
considered. The reach-to-grasp task has been widely
investigated by neuropsychological and robotics
research groups over the last few decades. We believe
that subtle grasping hand postures, in combination with
stereoscopic multi-touch displays, have the potential to
improve multi-touch 3D user interfaces. We present a
study that aims to determine whether the intended object can be
predicted in advance, relying only on detection of the
hand posture.