Publication Date:
2007
Abstract:
Human-to-human conversation remains such a significant part of our
working activities because of its naturalness. Multimodal interaction systems
combine visual information with voice, gestures, and other modalities to provide
flexible and powerful dialogue approaches. The use of integrated multiple input
modes enables users to benefit from the natural approach used in human
communication. However, natural interaction approaches may introduce interpretation
problems. This paper proposes a new approach that matches a multimodal
sentence against a template stored in a knowledge base in order to interpret the
multimodal sentence and define the similarity between multimodal templates. We
assume that each multimodal sentence can be mapped to a natural language sentence.
The system then provides an exact or approximate interpretation according to the
template similarity level.
CRIS Type:
01.01 Journal article
Keywords:
Human-computer interaction; multimodality; sentence similarity.
List of Authors:
Paolozzi, Stefano; Grifoni, Patrizia; Ferri, Fernando
Link to the Full Record: