Publication Date:
2019
abstract:
The astonishing yet cryptic effectiveness of Deep Neural Networks comes with a critical vulnerability to adversarial inputs - samples maliciously crafted to confuse and hinder machine learning models. Insights into the internal representations learned by deep models can help explain their decisions and estimate their confidence, enabling us to trace, characterise, and filter out adversarial attacks.
Iris type:
01.01 Journal article (Articolo in rivista)
Keywords:
Adversarial example; Deep neural networks; Image classification; Adversarial image detection; Representation learning
List of contributors:
Carrara, Fabio; Amato, Giuseppe; Falchi, Fabrizio
Full Text:
Published in: