Document Type

Article

Publication Date

2023

DOI

10.1109/ACCESS.2023.3296805

Publication Title

IEEE Access

Volume

11

Pages

76629-76637

Abstract

Automatic Modulation Recognition (AMR) is one of the critical steps in the signal processing chain of wireless networks and can significantly improve communication performance. AMR detects the modulation scheme of the received signal without any prior information. Recently, many Artificial Intelligence (AI) based AMR methods have been proposed, inspired by the considerable progress of AI methods in various fields. On the one hand, AI-based AMR methods can outperform traditional methods in terms of accuracy and efficiency. On the other hand, they are susceptible to new types of cyberattacks, such as model poisoning or adversarial attacks. This paper explores the vulnerabilities of an AI-based AMR model to adversarial attacks in both single-input single-output (SISO) and multiple-input multiple-output (MIMO) scenarios. We show that these attacks can significantly reduce the classification performance of the AI-based AMR model, which raises security and robustness concerns. We therefore apply a widely used mitigation method (i.e., defensive distillation) to reduce the model's vulnerability to adversarial attacks. The simulation results indicate that the AI-based AMR model can be highly vulnerable to adversarial attacks, but that this vulnerability can be significantly reduced by using mitigation methods.
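For illustration, below is a minimal sketch of a gradient-based adversarial attack of the kind the abstract refers to. The Fast Gradient Sign Method (FGSM) is a standard example of such an attack; the toy classifier, the 2x128 I/Q input shape, and the 11 modulation classes are assumptions for the sketch (loosely following common AMR benchmarks), not the paper's actual model or settings.

import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.01) -> torch.Tensor:
    # Craft an adversarial example by stepping along the sign of the
    # gradient of the classification loss with respect to the input.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Toy AMR-style classifier: 2-channel (I/Q) input, 11 modulation classes.
model = nn.Sequential(nn.Flatten(), nn.Linear(2 * 128, 64), nn.ReLU(),
                      nn.Linear(64, 11))
x = torch.randn(4, 2, 128)        # batch of 4 signals, 128 I/Q samples each
y = torch.randint(0, 11, (4,))    # ground-truth modulation labels
x_adv = fgsm_attack(model, x, y)  # perturbation magnitude bounded by epsilon

The mitigation the paper applies, defensive distillation, can be sketched in the same setting: a teacher network is trained with temperature-scaled logits, and a student of the same architecture is then trained on the teacher's softened labels at the same temperature, which flattens the student's input gradients and blunts gradient-based attacks such as FGSM. The temperature, optimizer, and training loop below are illustrative assumptions, not the paper's configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F

T = 20.0  # distillation temperature (assumed); higher T gives softer labels

def make_net() -> nn.Module:
    return nn.Sequential(nn.Flatten(), nn.Linear(2 * 128, 64), nn.ReLU(),
                         nn.Linear(64, 11))

teacher, student = make_net(), make_net()
xs = torch.randn(256, 2, 128)      # toy I/Q training batch
ys = torch.randint(0, 11, (256,))  # toy modulation labels

# Step 1: train the teacher with temperature-scaled logits.
opt = torch.optim.Adam(teacher.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    F.cross_entropy(teacher(xs) / T, ys).backward()
    opt.step()

# Step 2: train the student to match the teacher's soft labels at the same T.
with torch.no_grad():
    soft_labels = F.softmax(teacher(xs) / T, dim=1)
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    F.kl_div(F.log_softmax(student(xs) / T, dim=1), soft_labels,
             reduction="batchmean").backward()
    opt.step()
# At inference the student runs at T = 1; training at a high temperature
# leaves small input gradients, making gradient-based attacks less effective.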

Original Publication Citation

Tang, H., Catak, F. O., Kuzlu, M., Catak, E., & Zhao, Y. (2023). Defending AI-based Automatic Modulation Recognition models against adversarial attacks. IEEE Access, 11, 76629-76637. https://doi.org/10.1109/ACCESS.2023.3296805

ORCID

0000-0002-8719-2353 (Kuzlu)
