Date of Award

Summer 1987

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Psychology

Program/Concentration

Industrial/Organizational Psychology

Committee Director

Terry L. Dickinson

Committee Member

Glynn D. Coates

Committee Member

Robert M. McIntyre

Committee Member

William H. Silverman

Abstract

In order to enhance the quality of performance ratings, researchers have directed their efforts toward training raters to evaluate performance more accurately. The purpose of the present study was to examine two factors that may affect the efficacy of rater training for improving the accuracy of performance ratings. One factor was the type of information presented during training (target score information, behavioral rationale for target scores, or a combination of target score information and behavioral rationale). The second factor was the mode in which information was presented during training (feedback or feedforward). In addition to assessing the unique contributions that various types of information make to the success of rater training programs, the present study tested two hypotheses based on generalizing the multiple cue probability learning (MCPL) literature to the task of rating performance. The first hypothesis was that rater training incorporating target score information, either alone or combined with a behavioral rationale for the expert ratings, would result in less accurate performance ratings than rater training incorporating only the behavioral rationale. The second hypothesis was that performance ratings would be more accurate when raters received training information by means of feedforward than when control training was provided in which the training information was not presented.

One hundred one undergraduate and graduate students served as participants in the study. These participants were randomly assigned either to one of six experimental conditions formed by crossing three levels of information type with two levels of the mode in which training information was presented, or to one of two training control conditions. Ratings were made of the videotaped performance of seven individuals conducting simulated performance evaluation interviews. The performance ratings were analyzed with correlational measures of accuracy, Cronbach's (1955) accuracy statistics, and Dickinson's (1987) extended accuracy design. The results of these analyses generally did not show the training to be effective. Neither hypothesis was supported, although some findings indicated that feedback was more effective than feedforward. The results are discussed in terms of differences between the MCPL paradigm and the task of performance rating. In addition, a number of possible explanations for the findings of the study are presented.

DOI

10.25777/xggd-d792