Communicating Context to the Crowd


Crowdsourcing has traditionally consisted of short, independent microtasks that require no background knowledge. The advantage of this strategy is that work can be decoupled and assigned to independent workers. But it struggles to support increasingly complex tasks, such as writing or programming, that are not independent of their context.

For instance, imagine that you ask a crowd worker to write a biography for a speaker you’ve invited to your workshop. After the work is completed, you realize that the biography is written in an informal, personal tone. This is not technically wrong; it’s just not what you had in mind. You realize that you could have added a line to your task description asking for a formal, academic tone. However, a writing task has countless nuances that can’t all be predicted beforehand. This is what we are interested in: the context of a task, meaning the collection of conditions and tacit information surrounding it (e.g., the fact that the biography is needed for an academic workshop).

IF THIS INFORMATION CAN’T BE PRE-PACKAGED AND SENT ALONG WITH THE TASK, WHAT CAN WE DO ABOUT IT? 


OUR APPROACH IS TO ITERATE: do some work, communicate with the requester, and edit to fix errors. How can we support communication between the requester and crowd workers to maximize benefits while minimizing costs? If achieved, this goal would create the conditions for crowd work that is more complex and integrated than currently possible.
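To make the loop concrete, here is a minimal sketch of that iterate cycle, assuming the crowd and requester steps are supplied as callables. The function names and the Feedback fields are our own illustration, not an API from the paper:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Feedback:
    """Structured feedback from the requester about a draft."""
    satisfied: bool   # True if the requester accepts the draft as-is
    notes: str = ""   # structured guidance, e.g. the "main problem"

def iterate_on_writing(
    write: Callable[[str], str],               # crowd: task -> first draft
    give_feedback: Callable[[str], Feedback],  # requester: draft -> feedback
    revise: Callable[[str, Feedback], str],    # crowd: draft + feedback -> revision
    task: str,
    max_rounds: int = 3,
) -> str:
    """Alternate crowd work with structured requester feedback."""
    draft = write(task)
    for _ in range(max_rounds):
        feedback = give_feedback(draft)
        if feedback.satisfied:
            break
        draft = revise(draft, feedback)
    return draft
```

The design choice here is that communication happens in bounded rounds, so the requester's involvement stays a small, structured microtask rather than open-ended back-and-forth.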

The main takeaway is to support this communication through structured microtasks. We have designed five different mechanisms for structured communication, shown in the figure below:

[Figure: the five mechanisms for structured communication]

We compared these mechanisms in two studies, the first measuring the benefit of each mechanism and the second measuring its costs to the requester (e.g., cognitive demand). We found that these mechanisms are most effective in the early phases of writing. For text that is already high quality, they become less effective and can even be counter-productive.


We also found that the mechanisms had varying benefits depending on the quality of the initial text. Early on, when content quality is poor, the requester needs to communicate major issues, so identifying the “main problem” was most effective at improving the writing. Later, for average-quality content, the different mechanisms added relatively similar value.

Finally, we found that the cost of a mechanism to the requester is not always correlated with the value it adds. For instance, for average-quality paragraphs, commenting/editing was very costly but did not provide more value than simply highlighting.
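As a toy illustration of this cost/value trade-off (our own sketch, not a procedure from the paper: the quality score, the thresholds, and the three mechanisms listed are all assumptions for illustration), a requester might choose a mechanism like this:

```python
from enum import Enum
from typing import Optional

class Mechanism(Enum):
    """An illustrative subset of the structured feedback mechanisms."""
    MAIN_PROBLEM = "identify the main problem"
    HIGHLIGHT = "highlight problematic passages"
    COMMENT_EDIT = "comment on or edit the text"

def pick_mechanism(draft_quality: float) -> Optional[Mechanism]:
    """Toy policy based on the findings above; thresholds are made up.

    draft_quality is assumed to be a score in [0, 1].
    """
    if draft_quality >= 0.8:
        # High-quality text: feedback adds little and can even be
        # counter-productive, so skip it.
        return None
    if draft_quality < 0.4:
        # Poor text: naming the main problem improved writing the most.
        return Mechanism.MAIN_PROBLEM
    # Average quality: mechanisms add similar value, so prefer the
    # cheaper one (highlighting rather than costly commenting/editing).
    return Mechanism.HIGHLIGHT
```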

 

For more, see our full paper, Communicating Context to the Crowd for Complex Writing Tasks.
Niloufar Salehi, Stanford University
Jaime Teevan, Microsoft Research
Shamsi Iqbal, Microsoft Research
Ece Kamar, Microsoft Research