Adversarial attacks on graph-level embedding methods: a case study

Academic Article
Publication Date:
2022
Abstract:
As the number of graph-level embedding techniques grows at an unprecedented pace, questions arise about their behavior and performance when the training data undergo perturbations, as happens when an external entity maliciously alters the training data to invalidate the embedding. This paper explores the effects of such attacks by applying different graph-level embedding techniques to several graph datasets. The main attack strategy is to manipulate the training data so as to produce an altered model. In this context, our goal is to examine in depth the methods, resources, experimental settings, and performance results, in order to observe and study all the aspects that derive from the attack stage.
IRIS type:
01.01 Journal article
Keywords:
Adversarial attacks; Adversarial machine learning; Graph embedding; Graph Neural Networks; Graph Classification
List of contributors:
Maddalena, Lucia; Giordano, Maurizio
Authors of the University:
Giordano, Maurizio; Maddalena, Lucia
Handle:
https://iris.cnr.it/handle/20.500.14243/419725
Published in:
Annals of Mathematics and Artificial Intelligence (journal)
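The abstract describes a poisoning-style attack in which training data are manipulated to degrade the learned embedding. As a minimal illustrative sketch only (not the method used in the paper; the toy dataset and the `flip_labels` helper are hypothetical), one of the simplest such manipulations is label flipping on a graph-classification training set:

```python
import random

def flip_labels(dataset, fraction, seed=0):
    """Label-flipping poisoning sketch: invert the class label of a random
    fraction of training graphs (binary labels assumed). The graphs
    themselves are left untouched; only supervision is corrupted."""
    rng = random.Random(seed)
    poisoned = list(dataset)
    k = int(len(poisoned) * fraction)
    for i in rng.sample(range(len(poisoned)), k):
        graph, label = poisoned[i]
        poisoned[i] = (graph, 1 - label)
    return poisoned

# Toy "dataset": (adjacency-list, binary label) pairs standing in for graphs.
clean = [
    ({0: [1], 1: [0]}, 0),
    ({0: [1, 2], 1: [0], 2: [0]}, 1),
    ({0: []}, 0),
    ({0: [1], 1: [0, 2], 2: [1]}, 1),
]
poisoned = flip_labels(clean, fraction=0.5)
changed = sum(a[1] != b[1] for a, b in zip(clean, poisoned))
print(changed)  # 2 of 4 labels flipped
```

A model trained on `poisoned` instead of `clean` would learn from corrupted supervision, which is the kind of perturbation whose downstream effect on graph-level embeddings the paper studies.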