CNR UNI-FIND (cnr.it)

Publications
Emotion Classification from Speech and Text in Videos Using a Multimodal Approach

Article
Publication date:
2022
Abstract:
Emotion classification is a research area with a very intensive literature spanning natural language processing, multimedia data, semantic knowledge discovery, social network mining, and text and multimedia data mining. This paper addresses the problem of emotion classification and proposes a method for classifying the emotions expressed in multimodal data extracted from videos. The proposed method models multimodal data as a sequence of features extracted from facial expressions, speech, gestures, and text, using a linguistic approach. Each sequence of multimodal data is associated with an emotion by modelling every emotion with a hidden Markov model. The trained models are evaluated on samples of multimodal sentences associated with seven basic emotions. The experimental results demonstrate a good classification rate.
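The classification scheme the abstract describes (one hidden Markov model per emotion, with a sequence assigned to the emotion whose model gives it the highest likelihood) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the two-state models, the 3-symbol feature vocabulary, and the emotion names below are made-up assumptions.

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM
    (forward algorithm, computed in log space for numerical stability)."""
    alpha = np.log(pi) + np.log(B[:, obs[0]])  # initialisation
    for o in obs[1:]:
        # alpha_t(j) = logsumexp_i( alpha_{t-1}(i) + log A[i,j] ) + log B[j,o]
        alpha = np.logaddexp.reduce(alpha[:, None] + np.log(A), axis=0) + np.log(B[:, o])
    return np.logaddexp.reduce(alpha)

def classify(obs, models):
    """Return the emotion whose HMM assigns the sequence the highest likelihood."""
    return max(models, key=lambda emo: forward_log_likelihood(obs, *models[emo]))

# Illustrative two-state models over a 3-symbol multimodal-feature vocabulary
# (all parameters are invented for this sketch, not taken from the paper).
pi = np.array([0.5, 0.5])
A = np.array([[0.7, 0.3],
              [0.3, 0.7]])
MODELS = {
    "joy":   (pi, A, np.array([[0.8, 0.1, 0.1],
                               [0.1, 0.8, 0.1]])),
    "anger": (pi, A, np.array([[0.1, 0.1, 0.8],
                               [0.1, 0.8, 0.1]])),
}

print(classify([0, 0, 1, 0], MODELS))  # sequences dominated by symbol 0 favour "joy"
```

In practice the per-modality features (facial expression, speech, gesture, text) would be quantised into such a symbol vocabulary, and each emotion's HMM would be trained on labelled sequences before the maximum-likelihood decision above is applied.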
CRIS type:
01.01 Journal article
Keywords:
emotion classification; multimodal intera
Authors:
Grifoni, Patrizia; Ferri, Fernando; Caschera, Maria Chiara
Institutional authors:
Caschera, Maria Chiara
Ferri, Fernando
Grifoni, Patrizia
Link to full record:
https://iris.cnr.it/handle/20.500.14243/414841
Published in:
Multimodal Technologies and Interaction (journal)
General information

URL

https://www.mdpi.com/2414-4088/6/4/28