Publication date:
2021
Abstract:
Supervised classification models, such as SVM, aim to predict the class membership
of incoming samples. Malicious inputs are designed to deceive a vulnerable classifier,
leading to incorrect predictions. We focus our analysis on finding the smallest
perturbations of samples that cause the classification process to fail. The novelty
of our approach lies in the use of the zero-pseudo-norm, which amounts to minimizing the
number of attributes to be modified. We obtain an optimization problem whose
objective function is a Difference of Convex functions (DC). We present the results of
some preliminary experiments.
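The abstract's idea of a fewest-attributes (zero-pseudo-norm) perturbation can be illustrated with a minimal sketch on a linear classifier. This is a hypothetical greedy heuristic for illustration only, not the paper's DC formulation: it modifies features in order of decreasing influence, each by at most a budget `eps`, until the predicted label of sign(w·x + b) flips.

```python
import numpy as np

def sparse_attack_linear(w, b, x, eps=1.0):
    """Greedy sketch of a sparse (fewest-attribute) attack on a linear
    classifier sign(w @ x + b): perturb as few features as possible,
    each by at most eps, until the predicted label changes.
    Returns the perturbed sample and the number of modified attributes.
    Illustrative heuristic only, not the paper's DC optimization."""
    x_adv = x.astype(float).copy()
    y = np.sign(w @ x + b)             # current predicted label
    margin = y * (w @ x + b)           # positive distance-like margin
    order = np.argsort(-np.abs(w))     # most influential features first
    changed = 0
    for i in order:
        if margin <= 0:                # label already flipped
            break
        x_adv[i] += -y * np.sign(w[i]) * eps  # push against the margin
        margin -= np.abs(w[i]) * eps
        changed += 1
    return x_adv, changed
```

With a weight vector dominated by one coordinate, a single attribute change already suffices to cross the decision boundary, which is exactly the sparsity the zero-pseudo-norm objective rewards.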
CRIS type:
04.02 Abstract in conference proceedings
Keywords:
Sparse Optimization; SVM; Adversarial machine learning
Authors:
Astorino, Annabella
Link to the full record: