Doctor of Philosophy (PhD)
STEM and Professional Studies
Instructional Design & Technology
The purpose of this dissertation was to investigate the effects of prompting students to monitor their use of learning strategies and their comprehension while completing self-paced, work-related training in a computer-based learning environment. Study participants were 94 enlisted military volunteers, randomly assigned to one of three groups in the spring of 2012. Changes in strategy use and comprehension were evaluated within and between groups receiving immediate, delayed, or no prompts, using multiple methods of measurement both during and after training. Prompts asked participants to rate their level of agreement with statements about their strategy use and comprehension of lesson content.
Dependent variables included declarative knowledge and self-regulation. Declarative knowledge was measured using multiple end-of-lesson tests and a comprehensive end-of-course test. Self-regulation of strategy use was measured using a post-treatment self-report instrument and strategy use scores derived from an evaluation of learner notes. The independent variable was the prompt to self-monitor performance; prior knowledge served as a covariate in all analyses. Multivariate analysis of covariance was used to investigate the effects of the prompts on the combination of self-regulation and comprehension scores at the end of training, and mixed-model repeated-measures analysis of covariance was used to investigate changes in self-regulation and strategy use during training. The analyses revealed no statistically significant between-group effects of the prompting treatments on the combined self-regulation and comprehension scores at the end of training, nor any significant between-group effects of the prompts on strategy use or comprehension over time.
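The analysis design described above, a multivariate analysis of covariance with treatment group as the factor and prior knowledge as the covariate, can be sketched as follows. This is a minimal illustration using simulated data, not the study's data; all variable names and the effect sizes are assumptions for demonstration.

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Simulated data standing in for the study's variables (illustrative only).
rng = np.random.default_rng(0)
n = 94  # number of participants, as reported in the abstract
df = pd.DataFrame({
    "group": rng.choice(["immediate", "delayed", "none"], size=n),  # prompt condition
    "prior_knowledge": rng.normal(50, 10, size=n),                  # covariate
})
# Two dependent variables, each loosely related to the covariate.
df["self_regulation"] = 0.4 * df["prior_knowledge"] + rng.normal(0, 5, n)
df["comprehension"] = 0.5 * df["prior_knowledge"] + rng.normal(0, 5, n)

# MANCOVA: both dependent variables on the left; group as a categorical
# factor and prior knowledge as a continuous covariate on the right.
model = MANOVA.from_formula(
    "self_regulation + comprehension ~ C(group) + prior_knowledge", data=df)
print(model.mv_test())
```

The multivariate test output reports Wilks' lambda, Pillai's trace, and related statistics for each term, so the group effect and the covariate can each be evaluated against the combined dependent variables.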
Findings from this study suggest that adding prompts to computer-based learning events may not be effective for all learners or learning tasks. In contrast to similar experiments with college students, the prompts failed to influence participants' strategy use and learning. Although the groups receiving prompts invested more time in training, the additional time did not lead to improved overall strategy use or comprehension scores compared with the group that received no prompts. By the end of training, average comprehension scores across groups were equivalent and, on average, below passing (80%). The lack of effect on strategy use may have resulted from participants' low prior knowledge, limited proficiency with learning strategies, task complexity, and the value participants assigned to the learning task.
Findings from this study expand the existing body of knowledge regarding the self-regulation of learning in computer-based learning environments, particularly with regard to the population of working adults, whose self-regulation of learning in the workplace has not been extensively investigated. Additionally, this study provides an example of how to employ multiple measures of self-regulation to more fully describe self-regulatory processes in computer-based learning environments, an approach researchers investigating self-regulation have called for.
Coburn, Christopher J.
"Prompting Self-Monitoring of Learning in Self-Paced Computer Based Training: The Effect on Self-Regulation and Learning"
(2012). Doctor of Philosophy (PhD), Dissertation, STEM and Professional Studies, Old Dominion University, DOI: 10.25777/kvj7-6s77