Home Institution, City, State
Inter American University of Barranquitas, Puerto Rico
Major
Computer Science
Publication Date
Summer 2021
Abstract
Adversarial training has proven to be one of the most successful defenses against adversarial examples. The process consists of training a model on adversarial examples to improve its robustness. In this experiment, Torchattacks, a PyTorch library that simplifies the use of adversarial attacks, was used to determine which attack was the strongest. The strongest attack was then used to train the model and make it more robust against adversarial examples. The experiments were performed on the MNIST and CIFAR-10 datasets, each tested with the PGD, FGSM, and R+FGSM attacks. The results discussed show that training with the PGD attack makes both the MNIST and CIFAR-10 models more robust. The results also indicate that this defense technique needs further research to improve.
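The adversarial-training loop described above can be sketched as follows. This is a minimal, hedged illustration: it uses a toy logistic-regression model in plain NumPy rather than the Torchattacks/PyTorch setup from the poster, and the `eps`, `alpha`, and step-count values are illustrative, not the ones used in the experiment. The attack implemented is a PGD-style iterated gradient-sign attack with a random start and projection back into the epsilon-ball.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_wrt_input(w, b, x, y):
    # Gradient of the binary cross-entropy loss with respect to the INPUT x
    # (not the weights) -- this is what adversarial attacks ascend.
    p = sigmoid(x @ w + b)
    return (p - y)[:, None] * w[None, :]

def pgd_attack(w, b, x, y, eps=0.3, alpha=0.05, steps=10):
    # PGD: random start inside the eps-ball, then iterated FGSM steps,
    # each projected (clipped) back into the eps-ball around x.
    x_adv = x + rng.uniform(-eps, eps, x.shape)
    for _ in range(steps):
        g = grad_wrt_input(w, b, x_adv, y)
        x_adv = x_adv + alpha * np.sign(g)
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

# Toy data: two Gaussian blobs standing in for a real dataset.
n = 200
x = np.vstack([rng.normal(-1, 1, (n, 2)), rng.normal(1, 1, (n, 2))])
y = np.hstack([np.zeros(n), np.ones(n)])

# Adversarial training: each epoch, regenerate adversarial examples
# against the current model and take a gradient step on THOSE inputs.
w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(100):
    x_adv = pgd_attack(w, b, x, y)
    p = sigmoid(x_adv @ w + b)
    w -= lr * (x_adv.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

# Robust accuracy: evaluate on fresh adversarial examples.
acc = np.mean((sigmoid(pgd_attack(w, b, x, y) @ w + b) > 0.5) == y)
```

With the actual Torchattacks library, the equivalent attack object is constructed as `atk = torchattacks.PGD(model, eps=..., alpha=..., steps=...)` and applied as `adv_images = atk(images, labels)`; the adversarial images are then fed to the normal training step in place of the clean batch.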
Keywords
Torchattacks, Adversarial Examples, Adversarial Training, Pytorch, Machine Learning, CIFAR-10, MNIST, Robustness
Disciplines
Artificial Intelligence and Robotics | Information Security | Programming Languages and Compilers | Theory and Algorithms
Recommended Citation
Matos Díaz, William S., "Using Torchattacks to Improve the Robustness of Models with Adversarial Training" (2021). Cybersecurity: Deep Learning Driven Cybersecurity Research in a Multidisciplinary Environment. 3.
https://digitalcommons.odu.edu/reu2021_cybersecurity/3