6G – the sixth generation – is the latest cellular technology currently under development for wireless communication systems. In recent years, machine learning (ML) algorithms have been applied widely in various fields, such as healthcare, transportation, energy, and autonomous vehicles. These algorithms have also been used in communication technologies to improve system performance in terms of frequency spectrum usage, latency, and security. With the rapid development of ML techniques, especially deep learning (DL), it is critical to consider security when applying these algorithms. While ML algorithms offer significant advantages for 6G networks, security concerns about artificial intelligence (AI) models have so far been largely ignored by the scientific community. However, security is a vital part of AI algorithms because attackers can poison the AI model itself. This paper proposes a mitigation method, based on adversarial training, against adversarial attacks on 6G ML models for millimeter-wave (mmWave) beam prediction. The main idea behind generating adversarial attacks against ML models is to produce faulty results by manipulating trained DL models for mmWave beam prediction in 6G applications. We also present the performance of the proposed adversarial learning mitigation method for 6G security in a mmWave beam prediction application under a fast gradient sign method (FGSM) attack. The results show that the mean squared error (i.e., the prediction accuracy) of the defended model under attack is very close to that of the undefended model without attack.
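The fast gradient sign method mentioned above perturbs an input in the direction of the sign of the loss gradient with respect to that input, x_adv = x + ε·sign(∇ₓL(x, y)). The sketch below illustrates this idea on a toy linear regressor with a mean-squared-error loss, where the gradient can be written in closed form; the weights, input, and ε here are illustrative assumptions, not the paper's actual DL beam-prediction model.

```python
import numpy as np

# Illustrative FGSM sketch on a hypothetical linear model y_hat = x @ w
# with MSE loss; this is NOT the paper's beam-prediction network.
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 1))          # assumed "trained" weights
x = rng.normal(size=(1, 8))          # one clean input sample
y = x @ w + 1.0                      # target (offset so the loss is nonzero)

def mse_grad_x(x, y, w):
    """Closed-form gradient of the MSE loss w.r.t. the input x."""
    err = x @ w - y                  # prediction error
    return 2.0 * err @ w.T / x.shape[0]

eps = 0.1                            # L-infinity attack budget (assumed)
# FGSM: step in the sign of the input gradient to increase the loss.
x_adv = x + eps * np.sign(mse_grad_x(x, y, w))

clean_loss = float(np.mean((x @ w - y) ** 2))
adv_loss = float(np.mean((x_adv @ w - y) ** 2))
```

Adversarial training, the mitigation the paper proposes, would then mix such perturbed samples (with their original labels) into the training set so the model learns to predict correctly under the same bounded perturbation.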
Original Publication Citation
Catak, F. O., Kuzlu, M., Catak, E., Cali, U., & Ünal, D. (2022). Security concerns on machine learning solutions for 6G networks in mmWave beam prediction. Physical Communication, 52, 1-12, Article 101626. https://doi.org/10.1016/j.phycom.2022.101626
Catak, Ferhat Ozgur; Kuzlu, Murat; Catak, Evren; Cali, Umit; and Unal, Devrim, "Security Concerns on Machine Learning Solutions for 6G Networks in mmWave Beam Prediction" (2022). Engineering Technology Faculty Publications. 150.