Publication Date:
2018
Abstract:
In this paper, a computational approach is proposed and put into practice to assess the capability of children diagnosed with Autism Spectrum Disorder (ASD) to produce facial expressions. The proposed approach is based on computer vision components working on sequences of images acquired by an off-the-shelf camera in unconstrained conditions. Action unit intensities are estimated by analyzing local appearance, and then both temporal and geometrical relationships, learned by Convolutional Neural Networks, are exploited to regularize the gathered estimates. To cope with stereotyped movements and to highlight even subtle voluntary movements of facial muscles, a personalized and contextual statistical model of the non-emotional face is formulated and used as a reference. Experimental results demonstrate how the proposed pipeline can improve the analysis of facial expressions produced by ASD children. A comparison of the system's outputs with the evaluations performed by psychologists on the same group of ASD children makes evident how the quantitative analysis of children's abilities helps to go beyond traditional qualitative ASD assessment/diagnosis protocols, whose outcomes are affected by human limitations in observing and understanding multi-cue behaviors such as facial expressions.
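The abstract describes comparing estimated action unit (AU) intensities against a personalized statistical model of the child's non-emotional face. A minimal sketch of that idea, assuming (hypothetically) that the baseline is modeled as a per-AU mean and standard deviation over neutral frames and that deviations are scored as z-scores, could look like this; the paper's actual model may differ:

```python
import numpy as np

def neutral_baseline(neutral_au_frames):
    """Fit a per-AU statistical model (mean, std) of the subject's
    non-emotional face from frames labeled as neutral.
    neutral_au_frames: array-like of shape (n_frames, n_aus)."""
    frames = np.asarray(neutral_au_frames, dtype=float)
    # Small epsilon avoids division by zero for AUs that never move.
    return frames.mean(axis=0), frames.std(axis=0) + 1e-6

def au_deviation(au_intensities, mean, std):
    """Score each estimated AU intensity against the personalized
    neutral baseline, so even subtle voluntary activations stand out."""
    return (np.asarray(au_intensities, dtype=float) - mean) / std
```

Using a per-subject reference rather than a population norm helps separate voluntary expression attempts from stereotyped movements, since the latter are folded into the subject's own baseline.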
CRIS Type:
01.01 Journal article
Keywords:
quantitative facial expression analysis; facial action units; ASD diagnosis and assessment; computer vision
Author list:
Distante, Cosimo; Leo, Marco; Carcagni', Pierluigi; Mazzeo, Pier Luigi; Spagnolo, Paolo
Link to full record:
Published in: