Abstract
Despite the rising popularity of the practice of competency modeling,
research on competency modeling has lagged behind. This study begins
to close this practice–science gap through 3 studies (1 lab study and
2 field studies), which employ generalizability analysis to shed light on
(a) the quality of inferences made in competency modeling and (b) the
effects of incorporating elements of traditional job analysis into competency
modeling to raise the quality of competency inferences. Study 1
showed that competency modeling resulted in poor interrater reliability
and poor between-job discriminant validity among inexperienced
raters. In contrast, Study 2 suggested that the quality of competency
inferences was higher among a variety of job experts in a real organization.
Finally, Study 3 showed that blending competency modeling with
task-related information increased both interrater reliability among
SMEs and their ability to discriminate among jobs. In general, this set
of results highlights that the inferences made in competency modeling
should not be taken for granted, and that practitioners can improve competency
modeling efforts by incorporating some of the methodological
rigor inherent in job analysis.