Date of Award
Fall 12-2025
Document Type
Thesis
Degree Name
Master of Science (MS)
Department
Psychology
Program/Concentration
Psychology
Committee Director
Jeremiah Still
Committee Member
Mark Scerbo
Committee Member
Matt Henson
Abstract
Explainable Artificial Intelligence (XAI) is a key component of effective human-AI collaboration, particularly in high-stakes domains such as cybersecurity. While AI tools hold promise for mitigating threats such as SMS-based phishing (SMiShing), their real-world effectiveness may hinge not just on detection accuracy but on whether users can make sense of the system’s outputs. As SMiShing attacks grow in both frequency and sophistication, so does the urgency of designing human-centered AI systems that support user decision-making under uncertainty. This study examined how four distinct AI explanation types (Normative, or rule-based; Attributive, or feature-based; Exemplar, or case-based; and Recommendation-Only) influence user performance, confidence, and mental workload in a simulated SMiShing detection task, compared against a No AI baseline. Results showed that all AI-supported conditions improved classification accuracy, with minimal differences across explanation types. Confidence was slightly higher for Exemplar explanations, while subjective mental workload, perceived usability, and willingness to adopt the system did not vary across conditions. These results indicate that AI feedback can enhance decision-making without increasing workload or degrading user experience, and that the presence of an explanation may matter more than its style. The findings inform the design of more effective XAI systems that optimize user decision-making and minimize mental effort when users encounter cybersecurity threats.
Rights
In Copyright. URI: http://rightsstatements.org/vocab/InC/1.0/ This Item is protected by copyright and/or related rights. You are free to use this Item in any way that is permitted by the copyright and related rights legislation that applies to your use. For other uses you need to obtain permission from the rights-holder(s).
DOI
10.25777/hq9j-5k20
ISBN
9798276039787
Recommended Citation
Katsarakes, Eleni A.
"Making Explanations Make Sense: XAI for SMiShing Detection"
(2025). Master of Science (MS), Thesis, Psychology, Old Dominion University, DOI: 10.25777/hq9j-5k20
https://digitalcommons.odu.edu/psychology_etds/846
ORCID
0009-0006-7556-2501