Article

Research quality evaluation: comparing citation counts considering bibliometric database errors

Quality & Quantity, 49 (1): 155-165 (2015)
DOI: 10.1007/s11135-013-9979-1

Abstract

When evaluating the research output of scientists, institutions or journals, different portfolios of publications are usually compared with each other. For example, a typical problem is to select, between two scientists of interest, the one with the more cited portfolio. The total number of received citations is a very popular indicator, generally obtained from bibliometric databases. However, databases are not free from errors, which may affect the result of evaluations and comparisons; among these errors, one of the most significant is that of omitted citations. This paper presents a methodology for the pair-wise comparison of publication portfolios which takes into account the database quality with respect to omitted citations. In particular, a test is defined for establishing whether a citation count is significantly higher than another, and a statistical model for estimating the type-I error of this test is developed.
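The abstract only outlines the methodology. As a rough illustration of the kind of analysis it refers to, the sketch below estimates, by Monte Carlo simulation, the type-I error of a naive "higher observed count wins" comparison rule when each citation is independently omitted by the database with some probability. The omission model, the comparison rule, the `margin` parameter, and all function names are assumptions made for illustration only; they are not the authors' actual test or error model.

```python
import random

def simulate_type_i_error(true_count_a, true_count_b, p_omit,
                          margin=0, n_trials=10_000, seed=42):
    """Monte Carlo estimate of the type-I error of a simple comparison rule.

    Hypothetical setup (not the paper's exact model): each true citation is
    independently omitted by the database with probability `p_omit`, and
    portfolio A is declared "more cited" when its observed count exceeds
    B's observed count by more than `margin`.
    """
    rng = random.Random(seed)
    false_positives = 0
    for _ in range(n_trials):
        # Binomial thinning: each citation is recorded with prob. 1 - p_omit.
        obs_a = sum(rng.random() > p_omit for _ in range(true_count_a))
        obs_b = sum(rng.random() > p_omit for _ in range(true_count_b))
        if obs_a - obs_b > margin:
            false_positives += 1
    return false_positives / n_trials

# Example: two portfolios with identical true citation counts, so any
# "A is more cited" conclusion is a type-I error; 5% omission rate assumed.
if __name__ == "__main__":
    alpha = simulate_type_i_error(true_count_a=200, true_count_b=200,
                                  p_omit=0.05, margin=10)
    print(f"Estimated type-I error: {alpha:.3f}")
```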
