31 - Shaped Adversarial Patches

Author Information

Huong Quach

Description/Abstract/Artist Statement

In recent years, the development and deployment of computer vision models have become widespread, with applications ranging from autonomous vehicles to security systems. Among these, object detection algorithms such as YOLO (You Only Look Once) are particularly significant because of their real-time performance and accuracy in identifying and localizing objects within an image. However, the robustness of these models is increasingly challenged by adversarial attacks: deliberate input manipulations designed to deceive a model into making incorrect predictions.

In this paper, I present an approach to advancing the deception capabilities of adversarial patches, specifically targeting YOLO-based person detectors. The objective is to design and implement shaped adversarial patches that can be applied directly over a person, effectively lowering the confidence scores of the detector and ultimately fooling the system into failing to recognize the person altogether.
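To make this objective concrete, the sketch below illustrates the general way such patches are typically optimized in the adversarial-patch literature: treat the patch pixels as trainable parameters, paste them into the image through a binary mask that defines the patch shape and placement, and run gradient descent to drive down the detector's person-class confidence. This is a minimal, hypothetical sketch rather than the exact training setup used in this work; the person_confidence wrapper stands in for a differentiable YOLO forward pass, and the step count and learning rate are placeholder values.

```python
import torch


def optimize_patch(person_confidence, images, mask, steps=500, lr=0.03):
    """Optimize a shaped patch so the detector's person confidence drops.

    person_confidence: callable mapping a (N, 3, H, W) batch in [0, 1] to a
                       per-image person-class confidence (hypothetical wrapper
                       around a differentiable YOLO forward pass).
    images:            (N, 3, H, W) scenes containing a person, values in [0, 1].
    mask:              (1, 1, H, W) binary mask defining patch shape and placement.
    """
    # The patch itself is the only trainable tensor.
    patch = torch.rand(1, 3, images.shape[-2], images.shape[-1], requires_grad=True)
    optimizer = torch.optim.Adam([patch], lr=lr)

    for _ in range(steps):
        # Paste the clamped patch into the masked region over the person.
        patched = images * (1 - mask) + patch.clamp(0, 1) * mask
        # Objective: minimize the detector's confidence that a person is present.
        loss = person_confidence(patched).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return patch.detach().clamp(0, 1)
```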

This work builds on existing research in adversarial machine learning, where the creation of adversarial patches has been shown to significantly undermine the reliability of computer vision models. By exploring new patch shapes and configurations, I aim to enhance the effectiveness of these patches, making them more adaptable and capable of bypassing detection algorithms. The results of this research could have profound implications for the security and reliability of AI systems in real-world environments, where adversarial attacks pose a growing threat.

Presenting Author Name/s

Huong Quach

Faculty Advisor/Mentor

Dr. Gladden

Faculty Advisor/Mentor Department

Cybersecurity

College Affiliation

College of Sciences

Presentation Type

Poster

Disciplines

Electrical and Computer Engineering


