Publication Date:
2019
Abstract:
Classification techniques are widely used in security settings in which data can be deliberately manipulated by an adversary trying to evade detection and obtain some benefit. However, traditional classification systems are not robust to such data modifications. Most attempts to enhance classification algorithms in adversarial environments have focused on game-theoretic ideas under strong common knowledge assumptions, which are unrealistic in security domains. We provide an alternative framework for such problems based on adversarial risk analysis, which we illustrate with examples. Computational, implementation, and robustness issues are discussed.
CRIS Type:
01.01 Journal article
Keywords:
Classification; Bayesian methods; Adversarial machine learning; Influence diagrams; Robustness
Author list:
Ruggeri, Fabrizio
Link to the full record:
Published in: