Inter American University of Barranquitas, Puerto Rico
Adversarial training has proven to be one of the most successful ways to defend models against adversarial examples. The process consists of training a model on adversarial examples to improve its robustness. In this experiment, Torchattacks, a PyTorch library that makes adversarial attacks easier to apply, was used to determine which attack was the strongest. The strongest attack was then used to train the model and make it more robust against adversarial examples. The experiments were performed on the MNIST and CIFAR-10 datasets, each tested against PGD, FGSM, and R+FGSM attacks. The results discussed here show that adversarial training with the PGD attack makes both the MNIST and CIFAR-10 models more robust. They also show that this defense technique needs further research and improvement.
Torchattacks, Adversarial Examples, Adversarial Training, PyTorch, Machine Learning, CIFAR-10, MNIST, Robustness
Artificial Intelligence and Robotics | Information Security | Programming Languages and Compilers | Theory and Algorithms
Matos Díaz, William S., "Using Torchattacks to Improve the Robustness of Models with Adversarial Training" (2021). Cybersecurity: Deep Learning Driven Cybersecurity Research in a Multidisciplinary Environment. 3.