Document Type
Article
Publication Date
2025
DOI
10.3390/electronics14163265
Publication Title
Electronics
Volume
14
Issue
16
Pages
3265
Abstract
Understanding human perceptual strategies in high-stakes environments, such as crime scene investigations, is essential for developing cognitive models that reflect expert decision-making. This study presents an immersive experimental framework that utilizes virtual reality (VR) and eye-tracking technologies to capture and analyze visual attention during simulated forensic tasks. A 360° panoramic crime scene, constructed using the Nikon KeyMission 360 camera, was integrated into a VR system with HTC Vive and Tobii Pro eye-tracking components. A total of 46 undergraduate students aged 19 to 24 (23 from the National University of Singapore and 23 from the Central Police University in Taiwan) participated in the study, generating over 2.6 million gaze samples (IRB No. 23-095-B). The collected eye-tracking data were analyzed using statistical summarization, temporal alignment techniques (Earth Mover's Distance and the Needleman-Wunsch algorithm), and machine learning models, including K-means clustering, random forest regression, and support vector machines (SVMs). Clustering achieved a classification accuracy of 78.26%, revealing distinct visual behavior patterns across participant groups. Proficiency prediction models reached optimal performance with random forest regression (R² = 0.7034), highlighting scan-path variability and fixation regularity as key predictive features. These findings demonstrate that eye-tracking metrics, particularly sequence-alignment-based features, can effectively capture differences linked to both experiential training and cultural context. Beyond its immediate forensic relevance, the study contributes a structured methodology for encoding visual attention strategies into analyzable formats, offering valuable insights for cognitive modeling, training systems, and human-centered design in future perceptual intelligence applications. Furthermore, our work advances the development of autonomous vehicles by modeling how humans visually interpret complex and potentially hazardous environments. By examining expert and novice gaze patterns during simulated forensic investigations, we provide insights that can inform the design of autonomous systems required to make rapid, safety-critical decisions in similarly unstructured settings. The extraction of human-like visual attention strategies not only enhances scene understanding, anomaly detection, and risk assessment in autonomous driving scenarios, but also supports accelerated learning of response patterns for rare, dangerous, or otherwise exceptional conditions, enabling autonomous driving systems to better anticipate and manage unexpected real-world challenges.
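As context for the sequence-alignment features named above, the following minimal Python sketch (not the authors' code) illustrates how a Needleman-Wunsch global alignment can score the similarity of two fixation sequences encoded as area-of-interest (AOI) labels. The scoring values, gap penalty, and AOI names are illustrative assumptions, not details taken from the article.

    # Minimal sketch: Needleman-Wunsch global alignment of two
    # AOI-coded gaze sequences. Match/mismatch/gap scores are
    # illustrative assumptions.
    def needleman_wunsch(seq_a, seq_b, match=1, mismatch=-1, gap=-1):
        """Return the global alignment score of two gaze sequences."""
        n, m = len(seq_a), len(seq_b)
        # score[i][j] = best score aligning seq_a[:i] with seq_b[:j]
        score = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            score[i][0] = score[i - 1][0] + gap
        for j in range(1, m + 1):
            score[0][j] = score[0][j - 1] + gap
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                diag = score[i - 1][j - 1] + (
                    match if seq_a[i - 1] == seq_b[j - 1] else mismatch)
                score[i][j] = max(diag,
                                  score[i - 1][j] + gap,
                                  score[i][j - 1] + gap)
        return score[n][m]

    # Hypothetical fixation sequences encoded as AOI labels.
    expert = ["door", "body", "knife", "blood", "window"]
    novice = ["door", "window", "body", "blood"]
    print(needleman_wunsch(expert, novice))

In such a setup, a higher alignment score indicates more similar scan-paths; per the abstract, features of this kind were among the strongest predictors of proficiency.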
Rights
© 2025 by the authors.
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution 4.0 International (CC BY 4.0) License.
Data Availability
Article states: "The raw data supporting the conclusions of this article will be made available by the authors on request."
Original Publication Citation
Yang, W.-C., Shih, C.-H., Jiang, J., Pallas Enguita, S., & Chen, C.-H. (2025). Analyzing visual attention in virtual crime scene investigations using eye-tracking and VR: Insights for cognitive modeling. Electronics, 14(16), 1-23, Article 3265. https://doi.org/10.3390/electronics14163265
Repository Citation
Yang, Wen-Chao; Shih, Chih-Hung; Jiang, Jiajun; Enguita, Sergio Pallas; and Chen, Chung-Hao, "Analyzing Visual Attention in Virtual Crime Scene Investigations Using Eye-Tracking and VR: Insights for Cognitive Modeling" (2025). Electrical & Computer Engineering Faculty Publications. 550.
https://digitalcommons.odu.edu/ece_fac_pubs/550
ORCID
0000-0003-2958-5666 (Jiang), 0009-0009-5048-3964 (Enguita), 0000-0002-4860-9187 (Chen)
Included in
Artificial Intelligence and Robotics Commons, Cognition and Perception Commons, Emergency and Disaster Management Commons, Forensic Science and Technology Commons