Abstract
Natural language, a primary communication medium for humans, facilitates better human-machine interaction and could be an efficient means of using intelligent robots more flexibly. In this paper, we report on our joint efforts to provide natural language access to the autonomous mobile two-arm robot Kamro. The robot is able to perform complex assembly tasks. To achieve autonomous behaviour, several camera systems are used to perceive the environment during task execution. Since natural language utterances must be interpreted with respect to the robot's current environment, the processing must be based on a referential semantics that is perceptually anchored. Considering localization expressions, we demonstrate how verbal descriptions on the one hand, and knowledge about the physical environment, i.e., visual and geometric information, on the other, can be connected to each other.
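To illustrate the kind of perceptually anchored interpretation the abstract describes, the sketch below scores a localization expression ("left of") against object centroids as a vision system might deliver them, and resolves the expression to the best-fitting object. All names, coordinates, and the scoring function are hypothetical illustrations, not the paper's actual method.

```python
import math

def left_of_score(target, reference):
    """Toy applicability score for the localization expression 'left of':
    1.0 when the target lies exactly along the negative x-axis of the
    reference object, falling off linearly with angular deviation.
    Inputs are hypothetical (x, y) centroids from a camera system."""
    dx = target[0] - reference[0]
    dy = target[1] - reference[1]
    angle = math.atan2(dy, dx)             # direction from reference to target
    deviation = abs(abs(angle) - math.pi)  # 0 when exactly to the left
    return max(0.0, 1.0 - deviation / (math.pi / 2))

def resolve(score_fn, candidates, reference):
    """Pick the candidate object that best satisfies the expression."""
    return max(candidates, key=lambda obj: score_fn(obj[1], reference))

# Hypothetical scene: named objects with centroids extracted from images
scene = [("bolt", (2.0, 5.0)), ("nut", (9.0, 5.1))]
workpiece = (6.0, 5.0)
best = resolve(left_of_score, scene, workpiece)
print(best[0])  # → bolt
```

Graded scores rather than hard yes/no tests are one common way to handle the vagueness of spatial expressions; the actual anchoring in the paper may differ.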