Article

Testing the Newcastle Ottawa Scale showed low reliability between individual reviewers

Journal of Clinical Epidemiology, 66 (9): 982-993 (2013)
DOI: 10.1016/j.jclinepi.2013.03.003

Abstract

Objectives

To assess inter-rater reliability and validity of the Newcastle Ottawa Scale (NOS) used for methodological quality assessment of cohort studies included in systematic reviews.

Study Design and Setting

Two reviewers independently applied the NOS to 131 cohort studies included in eight meta-analyses. Inter-rater reliability was calculated using kappa (κ) statistics. To assess validity, within each meta-analysis, we generated a ratio of pooled estimates for each quality domain. Using a random-effects model, the ratios of odds ratios for each meta-analysis were combined to give an overall estimate of differences in effect estimates.

Results

Inter-rater reliability varied from substantial for length of follow-up (κ = 0.68, 95% confidence interval [CI] = 0.47, 0.89) to poor for selection of the nonexposed cohort and demonstration that the outcome was not present at the outset of the study (κ = −0.03, 95% CI = −0.06, 0.00; κ = −0.06, 95% CI = −0.20, 0.07). Reliability for the overall score was fair (κ = 0.29, 95% CI = 0.10, 0.47). In general, reviewers found the tool difficult to use and the decision rules vague, even with additional information provided as part of this study. We found no association between individual items or overall score and effect estimates.

Conclusion

Variable agreement and lack of evidence that the NOS can identify studies with biased results underscore the need for revisions and more detailed guidance for systematic reviewers using the NOS.
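For readers unfamiliar with the reliability statistic reported above, the sketch below shows how an unweighted Cohen's kappa is computed for two reviewers' item-level judgments. This is an illustrative Python sketch, not the authors' analysis code; the reviewer names and yes/no ratings are invented for the example.

```python
# Minimal sketch: unweighted Cohen's kappa for two reviewers' item-level
# judgments on one NOS item. The ratings below are invented for illustration.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa for two equal-length lists of categorical ratings."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: proportion of items on which the two reviewers agree.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance-expected agreement, from each reviewer's marginal rating frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(freq_a) | set(freq_b)
    p_expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    if p_expected == 1.0:  # degenerate case: both reviewers use a single category
        return 1.0
    return (p_observed - p_expected) / (1 - p_expected)

if __name__ == "__main__":
    # Hypothetical "star awarded? yes/no" judgments on one NOS item for 10 studies.
    reviewer_1 = ["yes", "yes", "no", "yes", "no", "no", "yes", "yes", "no", "yes"]
    reviewer_2 = ["yes", "no", "no", "yes", "no", "yes", "yes", "yes", "no", "no"]
    print(f"kappa = {cohens_kappa(reviewer_1, reviewer_2):.2f}")
```

With these invented ratings, observed agreement is 0.70 and chance-expected agreement is 0.50, giving κ = 0.40, which falls in the fair-to-moderate range and is comparable in magnitude to the overall-score κ = 0.29 reported in the abstract.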
