Exploiting CNN layer activations to improve adversarial image classification

Conference Paper
Publication Date:
2019
Abstract:
Neural networks are now used in many sectors of our daily life thanks to the efficient solutions they provide for diverse tasks. Delegating choices to artificial intelligence, however, inevitably exposes these tools to fraudulent attacks. In fact, adversarial examples, intentionally crafted to fool a neural network, can induce dangerous misclassifications while appearing innocuous to a human observer. On this basis, this paper focuses on the problem of image classification and proposes an analysis offering better insight into what happens inside a convolutional neural network (CNN) when it evaluates an adversarial example. In particular, the activations of the internal network layers are analyzed and exploited to design countermeasures that reduce CNN vulnerability. Experimental results confirm that layer activations can be used to detect adversarial inputs.
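The detection idea summarized in the abstract — treating internal-layer activations as features that separate clean from adversarial inputs — can be sketched with a simple distance-based detector. This is a hypothetical illustration, not the paper's method: the synthetic vectors below stand in for real CNN activations, and the centroid-plus-threshold rule is one of the simplest ways activations could be exploited for detection.

```python
import numpy as np

# Sketch: flag an input as adversarial when its internal-layer activation
# vector lies unusually far from the centroid of clean activations.
# Synthetic data stands in for activations extracted from a real CNN.

rng = np.random.default_rng(0)
D = 64  # assumed activation-vector dimensionality

# Clean activations cluster around a prototype; adversarial ones drift away.
clean = rng.normal(loc=0.0, scale=1.0, size=(200, D))
adversarial = rng.normal(loc=1.5, scale=1.0, size=(200, D))

centroid = clean.mean(axis=0)

def anomaly_score(acts):
    """Euclidean distance of each activation vector from the clean centroid."""
    return np.linalg.norm(acts - centroid, axis=1)

# Calibrate the threshold on clean scores only (95th percentile).
threshold = np.percentile(anomaly_score(clean), 95)

def is_adversarial(acts):
    return anomaly_score(acts) > threshold

detection_rate = is_adversarial(adversarial).mean()
false_alarm_rate = is_adversarial(clean).mean()
```

In practice the activation vectors would be read from the CNN's hidden layers at inference time (e.g., via forward hooks in a deep-learning framework), and a learned detector would typically replace the fixed centroid-distance rule.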
IRIS type:
04.01 Contribution in conference proceedings (Contributo in Atti di convegno)
Keywords:
Adversarial images; neural networks; layer activations; adversarial detection
List of contributors:
Carrara, Fabio; Amato, Giuseppe; Falchi, Fabrizio
Authors of the University:
AMATO GIUSEPPE
CARRARA FABIO
FALCHI FABRIZIO
Handle:
https://iris.cnr.it/handle/20.500.14243/379901
Full Text:
https://iris.cnr.it//retrieve/handle/20.500.14243/379901/56601/prod_422758-doc_160005.pdf
Published in:
PROCEEDINGS - INTERNATIONAL CONFERENCE ON IMAGE PROCESSING
URL:

https://ieeexplore.ieee.org/document/8803776