Document Type
Article
Publication Date
2025
DOI
10.7759/s44389-025-07224-y
Publication Title
Cureus Journal of Computer Science
Volume
2
Pages
es44389-025-07224-y
Abstract
Background
Learning conversations, or dialogues aimed at deepening understanding and reflection, are deeply influenced by emotions. Effective communication depends on emotional intelligence: the ability to recognize, understand, and manage both one's own and others' emotions. While advances in artificial intelligence (AI) offer new tools for emotion recognition, these technologies still struggle to interpret subtle and culturally diverse emotional expressions accurately, sparking debate about their reliability and effectiveness. This article provides a comparative analysis of human versus AI recognition of emotions during an end-of-course reflective learning conversation.
Methods
Emotions during a structured post-conference debriefing were analyzed and coded by MorphCast ("AI data") in real time and by human researchers ("peer data") from a video recording of the conversation with the audio removed. Each participant in the learning conversation also self-assessed and coded their own emotions ("self data") from the same recording. The AI and peer data were compared with the self data.
Findings
The AI dataset captured a wider range of emotions, including anger, sadness, and disgust, that were not detected by the self or peer assessments, which predominantly identified happy and neutral emotions. Happiness was lowest in the AI (MorphCast) assessment (20.86%), higher in the human peer assessment (59%), and highest in the self-assessment (65%).
Conclusions
Human identification of emotions during the learning conversation was more similar to self-identification than AI identification was. AI identified a broader range of emotions, especially negative ones, which were largely absent from the self- and peer-reports that leaned toward happy and neutral states. These differences highlight how AI, lacking contextual understanding and relying solely on facial cues, may misinterpret emotions compared with humans, who integrate verbal, situational, and social information into their assessments.
Rights
© 2025 Grove et al.
This is an open access article distributed under the terms of the Creative Commons Attribution 4.0 International License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Original Publication Citation
Grove, T. R., Lucas, A. T., Martin, M., Deckers, C. M., Mahmood, L. S., Danaher-Garcia, N., Scerbo, M. W., Kardong-Edgren, S., & Palaganas, J. C. (2025). Am I as effective at identifying emotions as artificial intelligence? A comparative study of emotion recognition. Cureus Journal of Computer Science, 2, Article es44389-025-07224-y. https://doi.org/10.7759/s44389-025-07224-y
ORCID
0000-0002-0498-3222 (Scerbo)
Repository Citation
Grove, Traci R.; Lucas, Alexandra T.; Martin, MaryAnn; Deckers, Cathleen M.; Mahmood, Lulu Sherif; Danaher-Garcia, Nicole; Scerbo, Mark W.; Kardong-Edgren, Suzan; and Palaganas, Janice C., "Am I as Effective at Identifying Emotions as Artificial Intelligence? A Comparative Study of Emotional Recognition" (2025). Psychology Faculty Publications. 234.
https://digitalcommons.odu.edu/psychology_fac_pubs/234
Included in
Artificial Intelligence and Robotics Commons, Cognition and Perception Commons, Educational Assessment, Evaluation, and Research Commons, Educational Technology Commons