MACE (Multi-Annotator Competence Estimation) is an implementation of an item-response model that lets you evaluate redundant annotations of categorical data. It provides competence estimates for the individual annotators and the most likely answer to each item.
If we have 10 annotators answer a question, and five answer with 'yes' and five with 'no' (a surprisingly frequent event), we would normally have to flip a coin to decide what the right answer is. If we knew, however, that one of the people who answered 'yes' is an expert on the question, while one of the others always selects 'no', we would take this information into account and weight their answers accordingly. MACE does exactly that: it tries to find out which annotators are more trustworthy and upweights their answers. All you need to provide is a CSV file with one item per line.
In tests, MACE's trust estimates correlated highly with the annotators' true competence, and it achieved accuracies of over 0.9 on several test sets. MACE can also take annotated items into account if they are available; this helps to guide the training and improves accuracy.
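The coin-flip scenario above can be sketched as competence-weighted voting. The snippet below is a toy illustration of that intuition only, not MACE's actual EM-based item-response model, and the competence values are invented for the example:

```python
# Toy sketch of competence-weighted voting -- the intuition behind MACE,
# NOT its actual EM-based model. Competence values are made up.
from collections import defaultdict

def weighted_vote(labels, competence):
    """Pick the label with the highest total annotator competence.

    labels: dict annotator -> label (None for a missing annotation)
    competence: dict annotator -> estimated trustworthiness in [0, 1]
    """
    scores = defaultdict(float)
    for annotator, label in labels.items():
        if label is not None:
            scores[label] += competence[annotator]
    return max(scores, key=scores.get)

# Five 'yes' vs. five 'no' votes: a plain majority vote is a tie, but the
# expert (a1) outweighs the annotator who always says 'no' (a6).
labels = {f"a{i}": ("yes" if i <= 5 else "no") for i in range(1, 11)}
competence = {f"a{i}": 0.5 for i in range(1, 11)}
competence["a1"] = 0.95   # expert who answered 'yes'
competence["a6"] = 0.05   # spammer who always answers 'no'

print(weighted_vote(labels, competence))  # -> yes
```

MACE estimates the competence values itself from the redundancy in the data; here they are supplied by hand purely to show how the weighting breaks the tie.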
Editing image files so that artificial intelligence can evaluate them - that is one of the tasks of so-called click workers. According to a study, most of them come from crisis-ridden Venezuela.
As part of a study for the Hans Böckler Foundation, researchers examined how six crowdsourcing platforms headquartered in Germany enable crowdworkers to participate in work- and company-related matters. Their conclusion: participation does take place on these platforms, although there is room for improvement.
Find out more at: https://18.re-publica.com/node/24751 When humans imagined robots and computers in the workplace, they envisioned them as servants and ...
No answer, no money, no support. It is not only after accidents that the bicycle couriers of Deliveroo and Foodora feel the harsh risks of the freelancer model. Now resistance is stirring - in Berlin and Vienna as well.
Regulation is a key word when the Nordic countries discuss the platform economy. The challenge is to secure good working conditions for the individual, a level playing field for businesses and tax revenues for the state. New technology is good, but the platforms must be developed in line with the labour market as a whole.