Publication Date:
2007
Abstract:
Human-to-human conversation remains such a significant part
of our working activities because of its naturalness. Multimodal
interaction systems combine visual information with voice,
gestures and other modalities to provide flexible and powerful
dialogue approaches. The use of integrated multiple input modes
enables users to benefit from the natural approach used in human
communication. However natural interaction approaches
introduce interpretation problems. In this paper is presented an
approach to interpret user's multimodal input. Starting from the
analysis of the different types of modalities' cooperation we take
into account the user's input behavior in order to better
approximate the resultant multimodal input sentence with the
user's intention. This multimodal sentence is transformed in a
natural language one and we provides an algorithm to calculate
the exact/approximate interpretation according to the sentence
similarity level with sentence templates stored in a predefined
knowledge base.
Iris type:
04.01 Contribution in conference proceedings
List of contributors:
Paolozzi, Stefano; Grifoni, Patrizia; Ferri, Fernando
Book title:
Proceedings - SEKE 2007 - The 19th International Conference on Software Engineering & Knowledge Engineering