New Paper – Better Crowdcoding: Strategies for Promoting Accuracy in Crowdsourced Content Analysis

I have a new paper out in Communication Methods and Measures, produced in collaboration with Ceren Budak and Daniel Sude. The work is the first in a series of studies that I've conducted with faculty at the University of Michigan's School of Information examining strategies for improving content analysis conducted with crowdsourced workers (e.g., MTurk). The publisher has provided a limited number of free eprints. If you are interested, you can download a copy here.

Abstract:

In this work, we evaluate different instruction strategies to improve the quality of crowdcoding for the concept of civility. We test the effectiveness of training, codebooks, and their combination through 2 × 2 experiments conducted on two different populations – students and Amazon Mechanical Turk workers. In addition, we perform simulations to evaluate the trade-off between cost and performance associated with different instructional strategies and the number of human coders. We find that training improves crowdcoding quality, while codebooks do not. We further show that relying on several human coders and applying majority rule to their assessments significantly improves performance.
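For readers unfamiliar with majority-rule aggregation, the idea is simply to assign each item the label chosen by the largest number of coders. Here is a minimal sketch in Python; the function name, variable names, and the civil/uncivil labels are illustrative and not taken from the paper:

```python
from collections import Counter

def majority_label(labels):
    """Return the label chosen by most coders for a single item.

    Ties are broken by taking the first most-common label, so using an
    odd number of coders avoids ambiguous outcomes for a binary code
    such as civil/uncivil.
    """
    return Counter(labels).most_common(1)[0][0]

# Hypothetical example: three coders rate the same comment for civility.
coder_judgments = ["civil", "uncivil", "civil"]
print(majority_label(coder_judgments))  # -> "civil"
```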
