Author Information

Muhammad Rabiu

Abstract/Description/Artist Statement

Identifying software vulnerabilities in source code is essential for building secure and reliable systems. Existing automated analysis tools can detect potential weaknesses, but they often produce false positives and require significant expert effort to validate findings. Recent advances in large language models (LLMs) have created new opportunities for assisting code analysis due to their ability to reason about program behavior and generate natural-language explanations of potential issues. In this work, we explore how reasoning produced by LLMs during vulnerability analysis may provide useful signals for improving automated assessment of software security. Our ongoing study investigates approaches for representing relationships between source code structures and model-generated reasoning in a structured form that can be analyzed using machine learning techniques. Preliminary observations suggest that patterns in generated explanations may contain informative cues for distinguishing meaningful vulnerability indicators from spurious detections. This project aims to better understand how reasoning-aware representations can support more reliable vulnerability analysis in software systems.
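The abstract's idea of linking model-generated reasoning back to source code structures can be illustrated with a minimal sketch. All names, cue-word lists, and sample data below are illustrative assumptions for exposition, not the study's actual representation or method: the sketch simply pairs a flagged code fragment with its LLM explanation, extracts a few features (concrete vulnerability cues, hedging words, and whether the explanation mentions identifiers from the code), and applies a toy rule for spotting spurious detections.

```python
# Hypothetical sketch (not the authors' method): pair a flagged code
# fragment with its LLM-generated explanation and extract simple
# features linking the reasoning text back to the code.
import re
from dataclasses import dataclass

@dataclass
class ReasoningRecord:
    code: str          # flagged source fragment
    explanation: str   # model-generated rationale for the flag

# Illustrative cue lists: words treated here as signs of a concrete,
# grounded explanation versus a vague, hedged one.
CONCRETE_CUES = {"overflow", "unsanitized", "bounds", "null", "free", "format"}
HEDGE_CUES = {"might", "could", "possibly", "unclear"}

def cue_features(rec: ReasoningRecord) -> dict:
    """Map one record to a small feature vector: counts of concrete
    vulnerability cues, hedging words, and code identifiers that the
    explanation actually mentions (a crude code-reasoning link)."""
    words = set(re.findall(r"[a-z_]+", rec.explanation.lower()))
    idents = {i.lower() for i in re.findall(r"[A-Za-z_]\w*", rec.code)}
    return {
        "concrete": len(words & CONCRETE_CUES),
        "hedges": len(words & HEDGE_CUES),
        "code_refs": len(words & idents),
    }

def looks_spurious(rec: ReasoningRecord) -> bool:
    """Toy rule: an explanation with no concrete cues and no reference
    to any identifier in the flagged code is treated as a likely
    false positive."""
    f = cue_features(rec)
    return f["concrete"] == 0 and f["code_refs"] == 0

grounded = ReasoningRecord(
    code="strcpy(buf, user_input);",
    explanation="user_input is copied into buf without bounds checking, "
                "so an overflow is possible.",
)
vague = ReasoningRecord(
    code="int n = atoi(s);",
    explanation="This line might possibly be a problem.",
)
print(looks_spurious(grounded), looks_spurious(vague))  # False True
```

In the full study these hand-picked cues would presumably be replaced by learned features over a structured (e.g., graph-based) representation, but the sketch shows the kind of signal the abstract refers to: explanations that ground themselves in the flagged code look different from vague ones.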

Presenting Author Name/s

Muhammad Rabiu

Faculty Advisor/Mentor

Mahmoud Nazzal

Faculty Advisor/Mentor Email

mnazzal@odu.edu

Faculty Advisor/Mentor Department

Computer Science

College/School Affiliation

College of Sciences

Student Level Group

Graduate/Professional

Presentation Type

Poster


Exploring Large Language Model Reasoning for Software Vulnerability Detection