From Explainable to Reliable Artificial Intelligence

Conference Paper
Publication Date:
2021
Abstract:
Artificial Intelligence systems today interact with humans less and less, leading to autonomous decision-making processes. In this context, erroneous predictions can have severe consequences. As a solution, we design and develop a set of methods derived from eXplainable AI models. The aim is to define "safety regions" in the feature space where false negatives (e.g., in a mobility scenario, a prediction of no collision when a collision actually occurs) tend to zero. We test and compare the proposed algorithms on two different datasets (physical fatigue and vehicle platooning) and reach quite different conclusions: the results depend strongly on the level of noise in the dataset rather than on the algorithm at hand.
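The notion of a "safety region" with zero false negatives can be illustrated with a minimal sketch. This is not the authors' code (the paper uses Logic Learning Machine and skope-rules); it is an assumed, simplified analogue that approximates safety regions with decision-tree leaves, keeping only leaves whose validation samples contain no positives ("collision") at all. The dataset here is synthetic, standing in for a scenario such as vehicle platooning.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in data: label 1 = collision, label 0 = no collision.
X, y = make_classification(n_samples=2000, n_features=4, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.5, random_state=0)

# Fit a shallow, interpretable tree; each leaf is an axis-aligned region
# of the feature space described by a readable conjunction of rules.
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)

# A leaf qualifies as a "safety region" if it predicts no-collision AND
# receives zero true-collision samples on held-out data, i.e. zero
# observed false negatives inside that region.
val_leaves = tree.apply(X_val)
safe_leaves = set()
for leaf in np.unique(val_leaves):
    mask = val_leaves == leaf
    predicts_safe = tree.predict(X_val[mask])[0] == 0
    no_positives = y_val[mask].sum() == 0
    if predicts_safe and no_positives:
        safe_leaves.add(leaf)

def in_safety_region(x):
    """True if sample x falls in a leaf with zero observed false negatives."""
    return tree.apply(x.reshape(1, -1))[0] in safe_leaves
```

Samples outside the safety regions are not necessarily misclassified; they are simply not covered by the zero-false-negative guarantee and would be deferred to a human or a more conservative controller.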
Iris type:
04.01 Contributo in Atti di convegno (contribution in conference proceedings)
Keywords:
reliable AI; logic learning machine; skope rules; explainable AI
List of contributors:
Ferretti, Melissa; Vaccari, Ivan; Orani, Vanessa; Narteni, Sara; Mongelli, Maurizio; Cambiaso, Enrico
Authors of the University:
Cambiaso, Enrico
Mongelli, Maurizio
Handle:
https://iris.cnr.it/handle/20.500.14243/398825