
Multimodal Sentence Similarity in Human-Computer Interaction Systems

Knowledge-Based Intelligent Information and Engineering Systems: KES 2007 -- WIRN 2007, 11th International Conference, KES 2007, XVII Italian Workshop on Neural Networks, Vietri sul Mare, Italy. Volume 4693 of Lecture Notes in Artificial Intelligence, Springer, Berlin (2007).
DOI: 10.1007/978-3-540-74827-4_51

Abstract

Human-to-human conversation remains a significant part of our working activities because of its naturalness. Multimodal interaction systems combine visual information with voice, gestures, and other modalities to provide flexible and powerful dialogue approaches. The use of integrated multiple input modes enables users to benefit from the natural approach used in human communication. However, natural interaction approaches may introduce interpretation problems. This paper proposes a new approach that matches a multimodal sentence against templates stored in a knowledge base in order to interpret the sentence and to define a measure of multimodal template similarity. We assume that each multimodal sentence can be mapped to a corresponding natural language sentence. The system then provides an exact or approximate interpretation according to the level of template similarity.
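The abstract's matching scheme can be illustrated with a minimal sketch. This is not the paper's actual algorithm or similarity measure; it assumes a simple Jaccard token-overlap score as a stand-in for template similarity, and a hypothetical threshold separating exact from approximate interpretations:

```python
# Illustrative sketch only (not the authors' method): match an input sentence
# against stored templates using Jaccard token overlap as a stand-in for the
# template-similarity measure described in the abstract.

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity between the token sets of two sentences."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def interpret(sentence: str, templates: list[str], exact_threshold: float = 0.8):
    """Return the best-matching template, its score, and whether the
    interpretation counts as exact (score >= threshold) or approximate."""
    best = max(templates, key=lambda t: jaccard(sentence, t))
    score = jaccard(sentence, best)
    kind = "exact" if score >= exact_threshold else "approximate"
    return best, score, kind

# Hypothetical knowledge-base templates for demonstration.
templates = ["open the file", "close the window", "show me the map"]
print(interpret("please show me the map", templates))
```

In the paper's setting, the input would first be mapped from its multimodal form to a natural language sentence before matching; the threshold governing exact versus approximate interpretation is an assumption here.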
