Abstract
Any human-computer interface requires both a means of transducing information flowing from the person and a way of classifying this information into a form usable by an application program. Since several interface devices exploit the head movements of disabled people to control computers, this paper includes a discussion of existing technologies based on head movements. As an alternative to classifying this information with simple pointing-based techniques, this paper studies the possibility of using a combination of pointing and movement gestures to control an application program. By using hidden Markov models to classify movements into 'yes', 'no' and spurious gestures, it was possible to control a simple graphics application program. Subsequent analysis showed that the hidden Markov models achieved a 74% success rate.
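The classification scheme described above can be sketched in a minimal form: one discrete hidden Markov model per gesture class, with a sequence assigned to the class whose model gives the highest likelihood, and rejected as spurious when all likelihoods are low. All model parameters, the observation encoding, and the rejection threshold below are illustrative assumptions, not the paper's actual values.

```python
# Hypothetical sketch of HMM-based 'yes'/'no'/spurious gesture classification.
# Observation symbols: 0=up, 1=down, 2=left, 3=right head movement.

def forward_likelihood(obs, start, trans, emit):
    """Forward algorithm: P(obs | model) for a discrete-observation HMM."""
    alpha = [start[s] * emit[s][obs[0]] for s in range(len(start))]
    for o in obs[1:]:
        alpha = [
            sum(alpha[s] * trans[s][t] for s in range(len(start))) * emit[t][o]
            for t in range(len(start))
        ]
    return sum(alpha)

# 'Yes' (nod) model: two states that alternate and mostly emit up/down.
yes_model = (
    [0.5, 0.5],                      # initial state probabilities
    [[0.2, 0.8], [0.8, 0.2]],        # transitions favour alternation
    [[0.7, 0.1, 0.1, 0.1],           # state 0 mostly emits 'up'
     [0.1, 0.7, 0.1, 0.1]],          # state 1 mostly emits 'down'
)
# 'No' (shake) model: same structure, but emitting left/right.
no_model = (
    [0.5, 0.5],
    [[0.2, 0.8], [0.8, 0.2]],
    [[0.1, 0.1, 0.7, 0.1],           # state 0 mostly emits 'left'
     [0.1, 0.1, 0.1, 0.7]],          # state 1 mostly emits 'right'
)

def classify(obs, threshold=0.01):
    """Label a movement sequence, rejecting low-likelihood ones as spurious."""
    scores = {
        "yes": forward_likelihood(obs, *yes_model),
        "no": forward_likelihood(obs, *no_model),
    }
    label, best = max(scores.items(), key=lambda kv: kv[1])
    return label if best >= threshold else "spurious"

print(classify([0, 1, 0, 1]))  # up-down-up-down nod -> yes
print(classify([2, 3, 2, 3]))  # left-right-left-right shake -> no
print(classify([0, 2, 1, 3]))  # mixed movements -> spurious
```

Note that a fixed likelihood threshold only works here because the demo sequences all have the same length; a real system would typically normalise by sequence length or train an explicit garbage model for spurious movements.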