Scaling to very very large corpora for natural language disambiguation

Michele Banko and Eric Brill. ACL '01: Proceedings of the 39th Annual Meeting on Association for Computational Linguistics, pp. 26--33. Morristown, NJ, USA, Association for Computational Linguistics, (2001)
DOI: http://dx.doi.org/10.3115/1073012.1073017

Abstract

The amount of readily available on-line text has reached hundreds of billions of words and continues to grow. Yet for most core natural language tasks, algorithms continue to be optimized, tested and compared after training on corpora consisting of only one million words or less. In this paper, we evaluate the performance of different learning methods on a prototypical natural language disambiguation task, confusion set disambiguation, when trained on orders of magnitude more labeled data than has previously been used. We are fortunate that for this particular application, correctly labeled training data is free. Since this will often not be the case, we examine methods for effectively exploiting very large corpora when labeled data comes at a cost.
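The task named in the abstract, confusion set disambiguation, means choosing the intended member of a fixed set of commonly confused words (e.g. {then, than}) from its surrounding context; labels are "free" because any raw text that already contains one of the words is a correctly labeled training example. The sketch below illustrates the task with a simple naive Bayes scorer over context words; the classifier, toy corpus, and all identifiers are assumptions for illustration, not the learners the paper actually compares.

```python
# Illustrative sketch of confusion set disambiguation (the classifier,
# corpus, and names here are assumptions, not the paper's setup).
import math
from collections import Counter, defaultdict

CONFUSION_SET = {"then", "than"}

def context_features(tokens, i, window=2):
    """Lower-cased words within `window` positions of the target token."""
    lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
    return [tokens[j].lower() for j in range(lo, hi) if j != i]

def train(sentences):
    """Count context features per confusion-set word. Labels are free:
    every occurrence of a confusion-set word in raw text is an example."""
    feature_counts = defaultdict(Counter)
    priors = Counter()
    for sent in sentences:
        tokens = sent.split()
        for i, tok in enumerate(tokens):
            word = tok.lower()
            if word in CONFUSION_SET:
                priors[word] += 1
                feature_counts[word].update(context_features(tokens, i))
    return priors, feature_counts

def disambiguate(tokens, i, priors, feature_counts):
    """Return the confusion-set member with the highest crudely smoothed
    naive Bayes log score for position i."""
    feats = context_features(tokens, i)
    total = sum(priors.values())
    best, best_score = None, float("-inf")
    for word in CONFUSION_SET:
        n = sum(feature_counts[word].values())
        score = math.log((priors[word] + 1) / (total + len(CONFUSION_SET)))
        for f in feats:
            score += math.log((feature_counts[word][f] + 1) / (n + 1))
        if score > best_score:
            best, best_score = word, score
    return best

# Toy corpus standing in for the billions of words of web text the paper uses.
corpus = [
    "she is taller than her brother",
    "faster than a speeding train",
    "we ate dinner and then went home",
    "first read the paper then run the code",
]
priors, feature_counts = train(corpus)
print(disambiguate("he runs faster ___ me".split(), 3, priors, feature_counts))
# -> "than"
```

The paper's central result is that even simple learners of this kind keep improving roughly log-linearly as the training corpus grows by orders of magnitude, which is why it argues for evaluating on very large corpora rather than on the million-word datasets that were standard.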
