Emerging Challenges and Perspectives in Deep Learning Model Security: A Brief Survey

Academic Article
Publication Date:
2023
abstract:
The widespread adoption of Artificial Intelligence and Machine Learning tools raises security issues that can occur when the underlying ML models are integrated into advanced services. The models, in fact, can be compromised at both the learning and the deployment stages. In this work, we provide an overview of severe security risks and concerns that can affect such models. Our focus is on the research challenges and defense opportunities of the underlying ML framework when it is deployed in specific contexts that can compromise its effectiveness. Specifically, the survey provides an overview of the following emerging topics: Model Watermarking, Information Hiding issues and defense opportunities, Adversarial Learning and model robustness, and Fairness-aware models.
Iris type:
01.01 Journal article (Articolo in rivista)
Keywords:
Neural Network fingerprinting; Neural Network watermarking; Data poisoning; Adversarial examples; Fairness; information hiding
List of contributors:
Manco, Giuseppe; Caviglione, Luca; Comito, Carmela; Guarascio, Massimo
Authors of the University:
CAVIGLIONE LUCA
COMITO CARMELA
GUARASCIO MASSIMO
MANCO GIUSEPPE
Handle:
https://iris.cnr.it/handle/20.500.14243/431479
Published in:
APPLIED SOFT COMPUTING (ONLINE)
URL

https://www.sciencedirect.com/science/article/pii/S2772941923000030
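One of the topics the abstract lists, adversarial examples, can be illustrated with a minimal gradient-sign perturbation (in the spirit of FGSM) against a toy logistic classifier. This is a hedged sketch, not taken from the paper: the weights, input, and epsilon below are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of an adversarial example (FGSM-style sign
# perturbation) on a toy logistic model. All parameter values here
# are hypothetical, chosen only to show the effect.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """Shift x along the sign of the loss gradient w.r.t. the input.

    For a logistic model p = sigmoid(w.x + b) with binary
    cross-entropy loss, the input gradient is (p - y) * w.
    """
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Hypothetical linear classifier and a correctly classified input.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.5, 0.2])
y = 1.0

p_before = sigmoid(np.dot(w, x) + b)      # ~0.69: predicted class 1
x_adv = fgsm_perturb(x, w, b, y, eps=0.5)
p_after = sigmoid(np.dot(w, x_adv) + b)   # ~0.33: prediction flipped
print(p_before > 0.5, p_after < 0.5)
```

A small, bounded perturbation of the input is enough to flip the model's decision, which is the core robustness concern the survey discusses under "Adversarial Learning and model robustness".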