Abstract
The rapid integration of artificial intelligence (AI) into various commercial products has raised concerns about the security risks posed by adversarial attacks. These attacks manipulate input data to disrupt the functioning of AI models, potentially leading to severe consequences such as self-driving car crashes, financial losses, or data breaches. We will explore neural networks, their weaknesses, and potential defenses. We will discuss adversarial attacks, including data poisoning, backdoor attacks, evasion attacks, and prompt injection. Then, we will examine defense strategies such as data protection, input sanitization, and adversarial training. By understanding how adversarial attacks work and the defenses against them, we can improve the security and reliability of AI systems to come.
Faculty Advisor/Mentor
Yan Lu
Document Type
Paper
Disciplines
Artificial Intelligence and Robotics | Computer Sciences | Data Science
DOI
10.25776/exm9-0a28
The Vulnerabilities of Artificial Intelligence Models and Potential Defenses