@article{citeulike:1219351,
abstract = {When reporting classifier accuracy, it's common to use hit ratio as a primary metric. However, hit ratio has a serious flaw. We examine the issues surrounding this flaw and explore its magnitude through an empirical experiment on three multivalued classification data sets, using two well-known machine learning models. The results demonstrate a real problem that we can't simply overlook, and we propose an alternative: Cohen's kappa. Like any other metric, it has its own shortcomings, but we believe it should be mandatory in any scientific report about classifier accuracy.},
author = {Ben-David, A.},
journal = {IEEE Intelligent Systems},
keywords = {classification, cohen kappa, performance},
number = 6,
pages = {68--70},
title = {What's Wrong with Hit Ratio?},
url = {http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4042538},
volume = 21,
year = 2006
}
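The abstract's argument is that hit ratio (plain accuracy) can look impressive while the classifier performs no better than chance, and that Cohen's kappa, $\kappa = (p_o - p_e)/(1 - p_e)$, corrects for chance agreement. A minimal Python sketch of that computation (not taken from the paper; the function name and the toy data are illustrative):

```python
from collections import Counter

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: chance-corrected agreement between two labelings."""
    n = len(y_true)
    # Observed agreement p_o -- this is exactly the hit ratio / accuracy.
    p_o = sum(t == p for t, p in zip(y_true, y_pred)) / n
    # Expected chance agreement p_e, from the marginal label frequencies.
    true_counts = Counter(y_true)
    pred_counts = Counter(y_pred)
    p_e = sum(true_counts[c] * pred_counts[c] for c in true_counts) / n**2
    return (p_o - p_e) / (1 - p_e)

# A majority-class "classifier" on a 90/10 label split: hit ratio is 0.9,
# but kappa is 0, exposing that the predictions carry no information.
y_true = ["a"] * 9 + ["b"]
y_pred = ["a"] * 10
print(cohens_kappa(y_true, y_pred))
```

On the skewed toy data, observed agreement and chance agreement are both 0.9, so kappa collapses to 0 despite the 90% hit ratio, which is the flaw the paper highlights.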