Home Institution, City, State

Inter American University of Barranquitas, Puerto Rico

Major

Computer Science

Publication Date

Summer 2021

Abstract

Adversarial training has proven to be one of the most successful ways to defend models against adversarial examples. The process consists of training a model on adversarial examples to improve its robustness. In this experiment, Torchattacks, a PyTorch library that makes adversarial attacks easier to import and apply, was used to determine which attack was the strongest. The strongest attack was then used to train the model and make it more robust against adversarial examples. The experiments were performed on the MNIST and CIFAR-10 datasets, and the models for both were tested against PGD, FGSM, and R+FGSM attacks. The results discussed here show that training with the PGD attack makes both the MNIST and CIFAR-10 models more robust. They also show that this technique for defending models needs to be improved through further research.
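A minimal sketch of the adversarial training loop described above, assuming a standard PyTorch classifier and DataLoader. The attack hyperparameters (eps, alpha, steps) and optimizer settings are illustrative defaults, not the values used in the experiment:

```python
import torch
import torch.nn as nn
import torchattacks

def adversarial_train(model, train_loader, device, epochs=10):
    # PGD attack from Torchattacks; eps/alpha/steps here are common
    # MNIST-scale placeholder values, not the ones from the poster.
    attack = torchattacks.PGD(model, eps=0.3, alpha=0.01, steps=40)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

    model.train()
    for _ in range(epochs):
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            # Generate an adversarial batch from the current model state,
            # then train on it instead of the clean batch.
            adv_images = attack(images, labels)
            optimizer.zero_grad()
            loss = criterion(model(adv_images), labels)
            loss.backward()
            optimizer.step()
    return model
```

The same loop can be reused to compare attacks by swapping the attack constructor (for example, torchattacks.FGSM or torchattacks.RFGSM in place of torchattacks.PGD).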

Keywords

Torchattacks, Adversarial Examples, Adversarial Training, PyTorch, Machine Learning, CIFAR-10, MNIST, Robustness

Disciplines

Artificial Intelligence and Robotics | Information Security | Programming Languages and Compilers | Theory and Algorithms

Files

Download Poster (408 KB)

Using Torchattacks to Improve the Robustness of Models with Adversarial Training


