Incollection

Integrating Semantics into Multimodal Interaction Patterns

Machine Learning for Multimodal Interaction: 4th International Workshop, MLMI 2007, Brno, Czech Republic, June 28–30, 2007, Revised Selected Papers, volume 4892 of Lecture Notes in Computer Science, Springer, Berlin (2008)
DOI: 10.1007/978-3-540-78155-4_9

Abstract

We report a user experiment on multimodal interaction (speech, hand position, and hand shape) that studies two major relationships: between the level of cognitive load experienced by users and the resulting multimodal interaction patterns, and between the semantics of the information being conveyed and those patterns. We found that as cognitive load increases, users' multimodal productions tend to become semantically more complementary and less redundant across modalities. This validates cognitive load theory as a theoretical background for understanding the occurrence of particular kinds of multimodal productions. Moreover, the results indicate a significant relationship between the temporal multimodal integration pattern (7 patterns in this experiment) and the semantics of the command being issued by the user (4 types of commands), shedding new light on previous research findings that assign a unique temporal integration pattern to any given subject regardless of the communication taking place.

Users

  • @flint63
