Date of Award

Spring 1987

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Psychology

Program/Concentration

Industrial/Organizational Psychology

Committee Director

Terry L. Dickinson

Committee Member

Glynn D. Coates

Committee Member

Robert M. McIntyre

Committee Member

Michael Secunda

Abstract

The primary focus of the present study was to examine systematically the influence of rater training, scale format, and rating justification on the quality (i.e., convergent and discriminant validity, halo, leniency) of ratings exhibited by three rater sources (i.e., self, peer, observer). Ninety-one undergraduate students participated in a videotaped role-play exercise and returned at a later time to take part in a three-hour rating session. These individuals provided self- and peer ratings. Forty-five advanced undergraduate students participated in a similar rating session and provided observer ratings. Convergent validity, discriminant validity, and halo were tested with the multitrait-multimethod analysis of variance (MTMM ANOVA) approach. To assess the influence of training, scale format, and rating justification on the quality of performance ratings, each experimental condition was treated as an MTMM design and separate ANOVAs were calculated. A 2 (Training) x 2 (Format) x 2 (Justification) x 3 (Rater Sources) x 4 (Dimensions) ANOVA was computed to test the effects of the experimental conditions on the leniency of performance ratings across rater sources.
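
The factorial leniency analysis described above can be illustrated with a minimal sketch. The sketch below is not the dissertation's analysis (which treated rater source and dimension as within-subject factors and applied the MTMM ANOVA approach separately by condition); it fits a simple between-subjects 2 x 2 x 2 x 3 x 4 model for illustration only, and the file name and column names (training, scale_format, justification, source, dimension, rating) are hypothetical.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical long-format data: one row per individual rating.
# training: trained / untrained; scale_format: checklist / graphic;
# justification: yes / no; source: self / peer / observer;
# dimension: one of four performance dimensions; rating: numeric score.
df = pd.read_csv("ratings_long.csv")  # placeholder file name

# Full-factorial model with all main effects and interactions.
model = ols(
    "rating ~ C(training) * C(scale_format) * C(justification)"
    " * C(source) * C(dimension)",
    data=df,
).fit()

# Type II sums of squares for the ANOVA table.
anova_table = sm.stats.anova_lm(model, typ=2)
print(anova_table)
```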

Mixed support was found for the ability of these variables to influence the quality of performance ratings given by the three rater sources. Specifically, training and the use of the behavioral checklist increased discriminant validity and reduced halo, while raters who had to justify their performance ratings exhibited lower discriminant validity than raters who did not. With respect to leniency, the level of ratings across the three rater sources was affected by the variables of interest: training and the use of the behavioral checklist helped to reduce leniency in self-ratings when raters had to justify their performance ratings.

These results support the use of training and the behavioral checklist to improve the overall quality of performance ratings given by different rater sources. However, future research should assess what specific training program content is needed to improve convergent validity when the behavioral checklist is used. In addition, research must be conducted to identify which rater sources provide high-quality ratings on which performance dimensions if a multiple-method approach to the assessment of job performance is desired.

Rights

In Copyright. URI: http://rightsstatements.org/vocab/InC/1.0/ This Item is protected by copyright and/or related rights. You are free to use this Item in any way that is permitted by the copyright and related rights legislation that applies to your use. For other uses you need to obtain permission from the rights-holder(s).

DOI

10.25777/qn3n-qm73
