More Accurate Tests for the Statistical Significance of Result Differences
A. Yeh. Proceedings of the 18th Conference on Computational Linguistics - Volume 2, pages 947--953. Stroudsburg, PA, USA, Association for Computational Linguistics, (2000)
DOI: 10.3115/992730.992783
Abstract
Statistical significance testing of differences in values of metrics like recall, precision and balanced F-score is a necessary part of empirical natural language processing. Unfortunately, we find in a set of experiments that many commonly used tests often underestimate the significance and so are less likely to detect differences that exist between different techniques. This underestimation comes from an independence assumption that is often violated. We point out some useful tests that do not make this assumption, including computationally-intensive randomization tests.
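The abstract's closing point is the randomization test: rather than assuming each system answer is independent, the test swaps whole per-sentence responses between the two systems and asks how often a random swap yields a metric difference at least as large as the observed one. The sketch below is a minimal illustration of that idea for the F-score, not a reproduction of the paper's exact experiments; the per-sentence `(tp, fp, fn)` representation and all function names are assumptions made for this example.

```python
import random

def f1(tp, fp, fn):
    """Balanced F-score from pooled true-positive, false-positive,
    and false-negative counts."""
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def approximate_randomization(responses_a, responses_b, trials=10000, seed=0):
    """Two-sided approximate randomization test on an F-score difference.

    responses_a / responses_b: per-sentence (tp, fp, fn) counts for two
    systems run on the same test sentences. Whole responses are swapped
    between systems, so no independence between the individual answers
    inside a sentence is assumed. Returns an estimated p-value.
    """
    rng = random.Random(seed)

    def score(responses):
        # Pool counts over sentences, then compute F-score once:
        # F-score is not an average of per-sentence scores.
        tp = sum(r[0] for r in responses)
        fp = sum(r[1] for r in responses)
        fn = sum(r[2] for r in responses)
        return f1(tp, fp, fn)

    observed = abs(score(responses_a) - score(responses_b))
    at_least_as_large = 0
    for _ in range(trials):
        shuffled_a, shuffled_b = [], []
        for ra, rb in zip(responses_a, responses_b):
            # Swap the two systems' responses on this sentence
            # with probability 1/2.
            if rng.random() < 0.5:
                shuffled_a.append(rb)
                shuffled_b.append(ra)
            else:
                shuffled_a.append(ra)
                shuffled_b.append(rb)
        if abs(score(shuffled_a) - score(shuffled_b)) >= observed:
            at_least_as_large += 1
    # Add-one smoothing keeps the estimate strictly positive.
    return (at_least_as_large + 1) / (trials + 1)
```

A small p-value means random relabelings rarely produce a gap as large as the observed one, so the difference between the systems is unlikely to be chance.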
Description
More accurate tests for the statistical significance of result differences
%0 Conference Paper
%1 Yeh:2000:MAT:992730.992783
%A Yeh, Alexander
%B Proceedings of the 18th Conference on Computational Linguistics - Volume 2
%C Stroudsburg, PA, USA
%D 2000
%I Association for Computational Linguistics
%K ecl nlp randomization significance statistical statistics test
%P 947--953
%R 10.3115/992730.992783
%T More Accurate Tests for the Statistical Significance of Result Differences
%U https://doi.org/10.3115/992730.992783
%X Statistical significance testing of differences in values of metrics like recall, precision and balanced F-score is a necessary part of empirical natural language processing. Unfortunately, we find in a set of experiments that many commonly used tests often underestimate the significance and so are less likely to detect differences that exist between different techniques. This underestimation comes from an independence assumption that is often violated. We point out some useful tests that do not make this assumption, including computationally-intensive randomization tests.
@inproceedings{Yeh:2000:MAT:992730.992783,
abstract = {Statistical significance testing of differences in values of metrics like recall, precision and balanced F-score is a necessary part of empirical natural language processing. Unfortunately, we find in a set of experiments that many commonly used tests often underestimate the significance and so are less likely to detect differences that exist between different techniques. This underestimation comes from an independence assumption that is often violated. We point out some useful tests that do not make this assumption, including computationally-intensive randomization tests.},
acmid = {992783},
added-at = {2018-09-03T13:31:02.000+0200},
address = {Stroudsburg, PA, USA},
author = {Yeh, Alexander},
biburl = {https://www.bibsonomy.org/bibtex/219d31b001f71cacb83ce5dcf6fdf8487/schwemmlein},
booktitle = {Proceedings of the 18th Conference on Computational Linguistics - Volume 2},
description = {More accurate tests for the statistical significance of result differences},
doi = {10.3115/992730.992783},
interhash = {37c7f5c98c0d90b42696101e0d19a622},
intrahash = {19d31b001f71cacb83ce5dcf6fdf8487},
keywords = {ecl nlp randomization significance statistical statistics test},
  location = {Saarbr{\"u}cken, Germany},
numpages = {7},
pages = {947--953},
publisher = {Association for Computational Linguistics},
series = {COLING '00},
timestamp = {2018-09-05T21:08:14.000+0200},
title = {More Accurate Tests for the Statistical Significance of Result Differences},
url = {https://doi.org/10.3115/992730.992783},
year = 2000
}