The Anatomy of a Large-Scale Human Computation Engine

In this paper we describe Rabj, an engine designed to simplify the collection of human input. Over the course of a year, we have used Rabj to collect over 2.3 million human judgments to augment data mining, data entry, and curation tasks at Freebase. We illustrate several successful applications that have used Rabj to collect human judgment. We describe how the architecture and design decisions of Rabj are shaped by the constraints of <i>content agnosticity, data freshness, latency</i> and <i>visibility</i>. We present work aimed at increasing the yield and reliability of human computation efforts. Finally, we discuss empirical observations and lessons learned from a year of operating the service.