UNI-FIND | cnr.it
Adversarial Machine Learning for Protecting Against Online Manipulation

Academic Article
Publication Date:
2022
Abstract:
Adversarial examples are inputs to a machine learning system crafted to produce an incorrect output from that system. Attacks launched through such inputs can have severe consequences: in image recognition, for example, a stop sign can be misclassified as a speed limit sign. However, adversarial examples also fuel a flurry of research directions across different domains and applications. Here, we give an overview of how they can be profitably exploited as powerful tools to build stronger learning models, capable of better withstanding attacks, for two crucial tasks: fake news detection and social bot detection.
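The abstract's definition of an adversarial example can be illustrated with a minimal sketch: a toy linear classifier attacked with an FGSM-style perturbation (a standard technique from the adversarial ML literature, not necessarily the method used in this article). All weights, inputs, and function names below are illustrative assumptions.

```python
import numpy as np

# Toy linear classifier: score = w.x + b, predict class 1 if score > 0.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

# FGSM-style perturbation: for a linear model with logistic loss and
# label y in {-1, +1}, the gradient of the loss w.r.t. x is proportional
# to -y * w, so stepping along sign(grad) pushes the score toward
# misclassification while keeping the perturbation small per feature.
def fgsm_perturb(x, y, eps):
    grad_sign = np.sign(-y * w)   # sign of d(loss)/dx for logistic loss
    return x + eps * grad_sign

x = np.array([0.4, -0.2, 0.3])    # correctly classified as 1
y = +1                            # true label
x_adv = fgsm_perturb(x, y, eps=0.5)

print(predict(x))      # 1
print(predict(x_adv))  # 0 -- small structured noise flips the prediction
```

The same idea, run in reverse, is what the abstract alludes to: generating such perturbed inputs during training (adversarial training) yields models that withstand them better.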
IRIS type:
01.01 Journal article
Keywords:
social media; adversarial machine learning; disinformation
List of contributors:
Petrocchi, Marinella; Cresci, Stefano
Authors of the University:
Cresci, Stefano
Petrocchi, Marinella
Handle:
https://iris.cnr.it/handle/20.500.14243/446871
Overview

URL

http://www.scopus.com/inward/record.url?eid=2-s2.0-85120572012&partnerID=q2rCbXpz
Powered by VIVO | Designed by Cineca