DIRT maintains accuracy at scale because every contributor must deposit tokens to write data. Correct data is freely shared; incorrect data can be challenged by anyone, and the challenger earns tokens for identifying the inaccuracy. Our protocol and platform make it economically irrational for misinformation to persist in a dataset.
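The deposit-and-challenge mechanics described above can be sketched as follows. This is a minimal illustration, not DIRT's actual implementation: the class, the stake amount, and the boolean `entry_is_correct` (standing in for whatever dispute-resolution process the protocol uses) are all hypothetical.

```python
class Registry:
    """Toy deposit-and-challenge registry (illustrative only)."""

    def __init__(self, stake=10):
        self.stake = stake     # tokens a writer must lock per entry (assumed amount)
        self.entries = {}      # key -> (value, writer, locked deposit)
        self.balances = {}     # account -> token balance

    def write(self, account, key, value):
        # Writing requires locking a deposit alongside the entry.
        if self.balances.get(account, 0) < self.stake:
            raise ValueError("insufficient tokens to deposit")
        self.balances[account] -= self.stake
        self.entries[key] = (value, account, self.stake)

    def challenge(self, challenger, key, entry_is_correct):
        # Resolution is abstracted to a boolean here. If the entry is
        # shown to be incorrect, the challenger wins the writer's
        # deposit and the entry is removed; otherwise the deposit is
        # returned to the writer and the entry stands.
        value, writer, deposit = self.entries[key]
        if entry_is_correct:
            self.balances[writer] = self.balances.get(writer, 0) + deposit
        else:
            del self.entries[key]
            self.balances[challenger] = self.balances.get(challenger, 0) + deposit
```

The incentive structure is the point: a writer who posts bad data forfeits the deposit to whoever catches it, so sustained misinformation costs more than it gains.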