Article

Reliability and validity of rubrics for assessment through writing

Assessing Writing, 15 (1): 18--39 (Apr 7, 2010)
DOI: 10.1016/j.asw.2010.01.003

Abstract

This experimental project investigated the reliability and validity of rubrics in the assessment of students' written responses to a social science "writing prompt". Participants were asked to grade one of two writing samples under the assumption that it had been written by a graduate student; in fact, both samples were prepared by the authors. The first sample was well written in terms of sentence structure, spelling, grammar, and punctuation, but it did not fully answer the question. The second sample fully answered each part of the question but included multiple errors in structure, spelling, grammar, and punctuation. In the first experiment, participants assessed the first sample once without a rubric and once with a rubric. In the second experiment, participants assessed the second sample once without a rubric and once with a rubric. The results showed that raters were significantly influenced by the mechanical characteristics of students' writing rather than by its content, even when they used a rubric. The results also indicated that using rubrics may not improve the reliability or validity of assessment if raters are not well trained in how to design and employ them effectively.
