Detecting adversarial inputs by looking in the black box

Article
Publication date:
2019
Abstract:
The astonishing and cryptic effectiveness of Deep Neural Networks comes with a critical vulnerability to adversarial inputs: samples maliciously crafted to confuse and hinder machine learning models. Insights into the internal representations learned by deep models can help to explain their decisions and estimate their confidence, which can enable us to trace, characterise, and filter out adversarial attacks.
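To illustrate the general idea behind the abstract (a sketch of the family of techniques, not the paper's specific method), one simple way to filter inputs by "looking in the black box" is to score an internal representation by its distance to the nearest class centroid computed over clean training features; all names and the toy data below are hypothetical:

```python
import math

def centroid(vectors):
    """Mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def adversarial_score(feature, centroids):
    """Distance from an internal representation to the nearest class
    centroid; inputs far from every class cluster look suspicious."""
    return min(distance(feature, mu) for mu in centroids)

# Toy internal representations: two well-separated class clusters.
class_a = [[0.0, 0.1], [0.1, 0.0], [-0.1, 0.1]]
class_b = [[5.0, 5.1], [5.1, 4.9], [4.9, 5.0]]
cents = [centroid(class_a), centroid(class_b)]

clean = [0.05, 0.05]   # lands inside the class-A cluster
odd = [2.5, 2.5]       # far from both clusters: flagged as suspicious
print(adversarial_score(clean, cents) < adversarial_score(odd, cents))  # True
```

In practice the feature vectors would come from an intermediate layer of the trained network, and the threshold separating clean from adversarial scores would be calibrated on held-out data.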
CRIS type:
01.01 Journal article
Keywords:
Adversarial example; Deep neural networks; Image classification; Adversarial image detection; Representation learning
Authors:
Carrara, Fabio; Amato, Giuseppe; Falchi, Fabrizio
Institutional authors:
AMATO GIUSEPPE
CARRARA FABIO
FALCHI FABRIZIO
Link to the full record:
https://iris.cnr.it/handle/20.500.14243/365256
Link to the full text:
https://iris.cnr.it//retrieve/handle/20.500.14243/365256/31745/prod_404617-doc_150368.pdf
Published in:
ERCIM NEWS
Journal

General information

URL:
https://ercim-news.ercim.eu/en116/special/detecting-adversarial-inputs-by-looking-in-the-black-box