Document Type

Conference Paper

Publication Date

2022

DOI

10.1145/3512527.3531373

Publication Title

ICMR '22: Proceedings of the 2022 International Conference on Multimedia Retrieval

Pages

451-461

Conference Name

ICMR '22: International Conference on Multimedia Retrieval, June 27-30, 2022, Newark, New Jersey

Abstract

Graph neural networks (GNNs) have enabled the automation of many web applications that entail node classification on graphs, such as scam detection in social media and event prediction in service networks. Nevertheless, recent studies have revealed that GNNs are vulnerable to adversarial attacks: feeding GNNs poisoned data at training time can cause catastrophic drops in test accuracy. This finding has intensified research on attacks and defenses for GNNs. However, prior studies largely assume that adversaries enjoy free access to manipulate the original graph, while obtaining such access can be prohibitively costly in practice. To fill this gap, we propose a novel attack paradigm, named Generative Adversarial Fake Node Camouflaging (GAFNC), whose crux lies in crafting a set of fake nodes in a generative-adversarial regime. These nodes carry camouflaged malicious features and poison the victim GNN by passing malicious messages into the original graph via learned topological structures, such that they 1) maximally degrade classification accuracy (i.e., global attack) or 2) force the victim GNN to misclassify a targeted node set into prescribed classes (i.e., targeted attack). We benchmark our experiments on four real-world graph datasets, and the results substantiate the viability, effectiveness, and stealthiness of the proposed poisoning attack. Code is released at github.com/chao92/GAFNC.
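For intuition, the sketch below illustrates the fake-node poisoning idea described in the abstract on a toy dense-adjacency GCN. It is not the authors' method: GAFNC's generative-adversarial training is replaced here with a plain alternating optimization, and all names (SurrogateGCN, poisoned_graph, global_attack) are hypothetical; see github.com/chao92/GAFNC for the released implementation.

```python
# Minimal sketch of a fake-node poisoning attack in the spirit of GAFNC.
# NOT the authors' implementation: the generative-adversarial training is
# replaced by alternating optimization over a dense-adjacency GCN surrogate.
import torch
import torch.nn.functional as F


def gcn_propagate(A_hat, X, W):
    # Symmetric normalization D^{-1/2} A_hat D^{-1/2}, then feature transform.
    deg = A_hat.sum(dim=1).clamp(min=1e-6)
    D_inv_sqrt = torch.diag(deg.pow(-0.5))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W


class SurrogateGCN(torch.nn.Module):
    """Two-layer GCN on a dense adjacency matrix (stand-in for the victim)."""

    def __init__(self, d_in, d_hidden, n_classes):
        super().__init__()
        self.W1 = torch.nn.Parameter(torch.randn(d_in, d_hidden) * 0.1)
        self.W2 = torch.nn.Parameter(torch.randn(d_hidden, n_classes) * 0.1)

    def forward(self, A, X):
        A_hat = A + torch.eye(A.size(0))  # add self-loops
        H = F.relu(gcn_propagate(A_hat, X, self.W1))
        return gcn_propagate(A_hat, H, self.W2)


def poisoned_graph(A, X, X_fake, edge_logits):
    # Sigmoid-relaxed fake-to-real edges keep the topology differentiable,
    # so the injected connections can be learned by gradient descent.
    n, n_fake = A.size(0), X_fake.size(0)
    B = torch.sigmoid(edge_logits)                            # (n_fake, n)
    top = torch.cat([A, B.t()], dim=1)                        # real-node rows
    bot = torch.cat([B, torch.zeros(n_fake, n_fake)], dim=1)  # fake-node rows
    return torch.cat([top, bot], dim=0), torch.cat([X, X_fake], dim=0)


def global_attack(A, X, y, train_mask, n_fake=8, steps=200):
    """Learn fake-node features and connections that degrade accuracy."""
    n, d = X.shape
    X_fake = torch.randn(n_fake, d, requires_grad=True)       # malicious features
    edge_logits = torch.randn(n_fake, n, requires_grad=True)  # learned topology
    model = SurrogateGCN(d, 16, int(y.max()) + 1)
    mdl_opt = torch.optim.Adam(model.parameters(), lr=0.01)
    atk_opt = torch.optim.Adam([X_fake, edge_logits], lr=0.01)
    for _ in range(steps):
        # Surrogate step: fit the poisoned graph (attacker parameters frozen).
        A_big, X_big = poisoned_graph(A, X, X_fake.detach(), edge_logits.detach())
        loss = F.cross_entropy(model(A_big, X_big)[:n][train_mask], y[train_mask])
        mdl_opt.zero_grad(); loss.backward(); mdl_opt.step()
        # Attacker step: maximize the same loss through the soft edges/features.
        A_big, X_big = poisoned_graph(A, X, X_fake, edge_logits)
        atk_loss = -F.cross_entropy(model(A_big, X_big)[:n][train_mask], y[train_mask])
        atk_opt.zero_grad(); atk_loss.backward(); atk_opt.step()
    # Threshold the relaxed edges to obtain a discrete injected subgraph.
    return X_fake.detach(), (torch.sigmoid(edge_logits) > 0.5).float()
```

A targeted variant would replace the negated loss in the attacker step with a cross-entropy toward attacker-chosen labels on the target node set, steering those nodes into prescribed classes rather than degrading accuracy globally.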

Comments

© 2022 Copyright held by the owner/author(s)

This work is licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0).

Original Publication Citation

Jiang, C., He, Y., Chapman, R., & Wu, H. (2022). Camouflaged poisoning attack on graph neural networks. In Proceedings of the 2022 International Conference on Multimedia Retrieval (pp. 451-461). Association for Computing Machinery. https://doi.org/10.1145/3512527.3531373

ORCID

0000-0002-5357-6623 (He)
