Emerging Challenges and Perspectives in Deep Learning Model Security: A Brief Survey
Academic Article
Publication Date:
2023
abstract:
The widespread adoption of Artificial Intelligence and Machine Learning tools raises security issues that can occur when the underlying ML models are integrated into advanced services. The models, in fact, can be compromised in both the learning and the deployment stages. In this work, we provide an overview of some pressing security risks and concerns that can affect such models. Our focus is on the research challenges and defense opportunities of the underlying ML framework when it is deployed in specific contexts that can compromise its effectiveness. Specifically, the survey provides an overview of the following emerging topics: Model Watermarking, Information Hiding issues and defense opportunities, Adversarial Learning and model robustness, and Fairness-aware models.
Iris type:
01.01 Articolo in rivista (Journal article)
Keywords:
Neural Network fingerprinting; Neural Network watermarking; Data poisoning; Adversarial examples; Fairness; Information hiding
List of contributors:
Manco, Giuseppe; Caviglione, Luca; Comito, Carmela; Guarascio, Massimo
Published in: