Publication Date:
2022
abstract:
In Assicurazioni Generali, an automatic decision-making model monitors real-time multivariate time series and raises an alert when a car crash occurs, so that a Generali operator can call the customer to provide first assistance. The high sensitivity of the model, combined with the fact that it is not interpretable, may cause the operator to call customers even though no car crash happened and the alert was triggered only by a harsh maneuver or a bumpy road. Our goal is to tackle the problem of interpretability for car crash prediction and propose an eXplainable Artificial Intelligence (XAI) workflow that provides insights into the logic behind the deep learning predictive model adopted by Generali. We reach our goal by building an interpretable alternative to the current opaque model that also reduces training data usage and prediction time.
Iris type:
04.01 Contributo in Atti di convegno
Keywords:
Multivariate time series; Crash prediction; Explainability; Interpretable machine learning; Car insurance; Case study
List of contributors:
Nanni, Mirco
Book title:
Discovery Science