Exploiting CNN layer activations to improve adversarial image classification
Conference proceedings contribution
Publication date:
2019
Abstract:
Neural networks are now used in many sectors of our daily life thanks to the efficient solutions such instruments provide for diverse tasks. Delegating choices to artificial intelligence on behalf of humans inevitably exposes these tools to fraudulent attacks. In fact, adversarial examples, intentionally crafted to fool a neural network, can dangerously induce a misclassification while appearing innocuous to a human observer. On this basis, this paper focuses on the problem of image classification and proposes an analysis to gain better insight into what happens inside a convolutional neural network (CNN) when it evaluates an adversarial example. In particular, the activations of the internal network layers have been analyzed and exploited to design possible countermeasures to reduce CNN vulnerability. Experimental results confirm that layer activations can be adopted to detect adversarial inputs.
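The sketch below is a minimal illustration (not the paper's exact procedure) of the idea summarized in the abstract: intermediate layer activations of a pretrained CNN are captured with forward hooks and condensed into a per-image feature vector on which a simple binary detector could be trained to separate clean from adversarial inputs. The model, the choice of layers, and the pooling summary are assumptions made purely for illustration.

# Illustrative sketch: capture internal CNN layer activations and build a
# compact "activation signature" per image, usable as input to an
# adversarial-input detector. Model and layer choices are assumptions.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        # Global-average-pool each feature map to a compact per-layer summary.
        activations[name] = output.detach().mean(dim=(2, 3))
    return hook

# Hook a few internal layers (chosen arbitrarily for illustration).
layer_names = ["layer1", "layer2", "layer3", "layer4"]
for name in layer_names:
    getattr(model, name).register_forward_hook(save_activation(name))

def activation_signature(images: torch.Tensor) -> torch.Tensor:
    """Return concatenated per-layer activation summaries for a batch."""
    activations.clear()
    with torch.no_grad():
        model(images)
    return torch.cat([activations[n] for n in layer_names], dim=1)

# Usage: signatures computed on clean and adversarial batches can feed a
# simple binary classifier (e.g., logistic regression) acting as a detector.
x = torch.rand(4, 3, 224, 224)  # placeholder batch of images
features = activation_signature(x)
print(features.shape)  # torch.Size([4, 960]) for resnet18 with these layers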
CRIS type:
04.01 Conference proceedings contribution
Keywords:
Adversarial images; neural networks; layer activations; adversarial detection
Authors:
Carrara, Fabio; Amato, Giuseppe; Falchi, Fabrizio