Gaze and Peripheral Vision Analysis for Human-Environment Interaction: Applications in Automotive and Mixed-Reality Scenarios
M. Moniri. Dissertation, Fachrichtung Informatik, Universität des Saarlandes, Saarbrücken, February 2018
DOI: 10.22028/D291-27053
Abstract
This thesis studies eye-based user interfaces which integrate information about the user's perceptual focus of attention into multimodal systems to enrich the interaction with the surrounding environment. We examine two new modalities: gaze input and output in the peripheral field of view. All modalities are considered in the whole spectrum of the mixed-reality continuum. We show the added value of these new forms of multimodal interaction in two important application domains: Automotive User Interfaces and Human-Robot Collaboration. We present experiments that analyze gaze under various conditions and help to design a 3D model for peripheral vision. Furthermore, this work presents several new algorithms for eye-based interaction, such as deictic reference in mobile scenarios, non-intrusive user identification, and exploiting the peripheral field of view for advanced multimodal presentations. These algorithms have been integrated into a number of software tools for eye-based interaction, which are used to implement 15 use cases for intelligent environment applications. These use cases cover a wide spectrum of applications, from spatial interaction with a rapidly changing environment, performed from within a moving vehicle, to mixed-reality interaction between teams of humans and robots.
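The abstract mentions gaze-based deictic reference, i.e. resolving which object in a 3D scene a user is looking at. As a purely illustrative aid, the sketch below shows one common way such reference resolution can be done: casting a ray from the eye position along the gaze direction and intersecting it with object bounding spheres. This is a minimal sketch under assumed data structures (SceneObject and resolve_deictic_reference are hypothetical names), not the algorithms developed in the thesis.

# Illustrative sketch only: gaze-ray object selection for deictic reference.
# All names and the bounding-sphere scene model are hypothetical; this does
# not reproduce the thesis's actual algorithms.

from dataclasses import dataclass
import math

@dataclass
class SceneObject:
    name: str
    center: tuple      # (x, y, z) in world coordinates
    radius: float      # bounding-sphere radius

def _dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def _sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def resolve_deictic_reference(eye_pos, gaze_dir, objects):
    """Return the closest object hit by the gaze ray, or None.

    eye_pos  -- 3D eye position in world coordinates
    gaze_dir -- unit gaze direction vector
    objects  -- iterable of SceneObject with bounding spheres
    """
    best, best_t = None, math.inf
    for obj in objects:
        oc = _sub(obj.center, eye_pos)
        t = _dot(oc, gaze_dir)              # projection of the center onto the ray
        if t < 0:
            continue                        # object lies behind the viewer
        d2 = _dot(oc, oc) - t * t           # squared distance from center to ray
        if d2 <= obj.radius ** 2 and t < best_t:
            best, best_t = obj, t
    return best

if __name__ == "__main__":
    scene = [
        SceneObject("traffic_light", (2.0, 1.5, 20.0), 0.5),
        SceneObject("pedestrian",    (0.0, 1.2, 12.0), 0.6),
    ]
    hit = resolve_deictic_reference((0.0, 1.2, 0.0), (0.0, 0.0, 1.0), scene)
    print(hit.name if hit else "no referent")   # prints "pedestrian"

A deployed system would additionally need fixation detection and noise handling on the raw gaze samples, as well as a continuously updated scene model for moving vehicles; the sketch omits all of that.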
@phdthesis{Moniri18Phd,
abstract = {This thesis studies eye-based user interfaces which integrate information about the user's perceptual focus of attention into multimodal systems to enrich the interaction with the surrounding environment. We examine two new modalities: gaze input and output in the peripheral field of view. All modalities are considered in the whole spectrum of the mixed-reality continuum. We show the added value of these new forms of multimodal interaction in two important application domains: Automotive User Interfaces and Human-Robot Collaboration. We present experiments that analyze gaze under various conditions and help to design a 3D model for peripheral vision. Furthermore, this work presents several new algorithms for eye-based interaction, such as deictic reference in mobile scenarios, non-intrusive user identification, and exploiting the peripheral field of view for advanced multimodal presentations. These algorithms have been integrated into a number of software tools for eye-based interaction, which are used to implement 15 use cases for intelligent environment applications. These use cases cover a wide spectrum of applications, from spatial interaction with a rapidly changing environment, performed from within a moving vehicle, to mixed-reality interaction between teams of humans and robots.},
added-at = {2018-03-21T10:37:36.000+0100},
address = {Saarbr\"{u}cken},
author = {Moniri, Mohammad Mehdi},
biburl = {https://www.bibsonomy.org/bibtex/21b552f1d76c2847dd1ac62adfe0505d2/flint63},
doi = {10.22028/D291-27053},
file = {SciDok SULB:2018/Moniri18Phd.pdf:PDF},
groups = {public},
interhash = {547cfacb7d850b9f1e3f337e70edc55c},
intrahash = {1b552f1d76c2847dd1ac62adfe0505d2},
keywords = {01801 103 dfki book ai multimodal user interaction interface image analysis algorithm zzz.mmi},
month = feb,
school = {Fachrichtung Informatik, Universit\"{a}t des Saarlandes},
timestamp = {2018-04-16T12:07:02.000+0200},
title = {Gaze and Peripheral Vision Analysis for Human-Environment Interaction: Applications in Automotive and Mixed-Reality Scenarios},
type = {Dissertation},
username = {flint63},
year = 2018
}