Document Type

Article

Publication Date

2024

DOI

10.18608/jla.2024.8323

Publication Title

Journal of Learning Analytics

Volume

11

Issue

3

Pages

142-159

Abstract

Despite a tremendous increase in the use of video for conducting research in classrooms as well as for preparing and evaluating teachers, notable challenges to using classroom videos at scale remain, including time and financial costs. Recent advances in artificial intelligence could make the process of analyzing, scoring, and cataloguing videos more efficient. These advances include natural language processing, automated speech recognition, and deep neural networks. To train artificial intelligence to accurately classify activities in classroom videos, humans must first annotate a set of videos in a consistent way. This paper describes our investigation of inter-annotator reliability in identifying activities and their durations among annotators with and without experience analyzing classroom videos. The validity of human annotations is crucial for temporal analysis in classroom video research. The study reported here represents an important step towards applying methods developed in other fields to validate temporal analytics within learning analytics research for classifying time- and event-based activities in classroom videos.
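
For readers unfamiliar with inter-annotator reliability, the sketch below shows one common agreement statistic, Cohen's kappa, applied to hypothetical per-window activity labels from two annotators. The abstract does not specify which statistic or segmentation the study uses, so the metric, the fixed-window labeling, and the activity names here are illustrative assumptions only, not the authors' method.

```python
from collections import Counter

def cohen_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators' equal-length label sequences."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of windows where the annotators match.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement, from each annotator's marginal label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a.keys() | freq_b.keys()) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical activity labels for five fixed time windows of one video.
ann_1 = ["lecture", "lecture", "group_work", "group_work", "transition"]
ann_2 = ["lecture", "lecture", "group_work", "transition", "transition"]
print(f"kappa = {cohen_kappa(ann_1, ann_2):.2f}")  # kappa = 0.71
```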

Rights

© 2024 Journal of Learning Analytics.

This work is licensed under a Creative Commons Attribution 4.0 International (CC BY 4.0) License.

Original Publication Citation

Foster, J. K., Youngs, P., Aswegen, R.V., Singh, S., Watson, G. S., & Acton, S. T. (2024). Automated classification of elementary instructional activities: Analyzing the consistency of human annotations. Journal of Learning Analytics, 11(3), 142-159. https://doi.org/10.18608/jla.2024.8323

ORCID

0000-0001-7197-1654 (Watson)
