Speeding up completion and reducing the cost of crowdsourced work are two common goals of those using crowd labour. Making tasks more appealing is essential to meeting these goals. However, the number of people completing a task in a given time tells only half the story.
Understanding why crowd workers pick one task over another is essential to making the best use of the available workforce. Just as sunscreen lotion flies off the shelves in summer but is far less popular in winter, external factors may influence the behaviour and availability of the worker population at any given time.
To really understand what makes a task appealing, any metric must account for the worker population. To factor in the number of people who might have chosen to complete a task, we propose measuring the conversion rate of tasks.
In our paper, we describe three conversion rate metrics all based on a simple ratio: the number of workers who complete a task, divided by the number who previewed the task.
The number of workers completing a task may increase simply because more workers are available (e.g. at the weekend), even if nothing about the task has changed. By including a measure of the available workforce, conversion rate metrics vary only when worker intent changes: when workers are primed to complete the task.
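To make the ratio concrete, here is a minimal sketch of the core calculation. The function name and the example figures are illustrative assumptions, not data from the paper; the point is that raw completion counts can rise with workforce size while the conversion rate stays constant.

```python
def conversion_rate(completions, previews):
    """Fraction of workers who previewed a task and went on to complete it.

    completions: number of workers who completed the task
    previews:    number of workers who viewed (previewed) the task
    """
    if previews == 0:
        return 0.0  # no previews means no measurable intent
    return completions / previews

# Hypothetical figures: completions double at the weekend because more
# workers are online, but the conversion rate (worker intent) is unchanged.
weekday = conversion_rate(completions=30, previews=100)
weekend = conversion_rate(completions=60, previews=200)
```

Here `weekday` and `weekend` are both 0.3: the raw completion count has doubled, but the metric correctly reports that the task is no more attractive.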
As pre-task interactions, including previews, are not made available to requesters using Mechanical Turk, we developed a tool, turkmill, which we offer to help others gather this information.
To demonstrate the usefulness of conversion rate metrics in understanding worker behaviour, we carried out two experiments addressing three questions:
- How does different presentation and structure of the task description affect the conversion rate for the task?
- What impact does intrinsic motivation have on the conversion rate for tasks?
- Do workers prefer non-profit organisations or commercial institutions as requesters?
As expected, we find that failing to disclose the length and difficulty of a task hinders worker uptake of the task. Unexpectedly, we also find that workers prefer tasks that are not branded by universities or non-profit requesters.
In summary, measuring the conversion rate for tasks can inform requesters about which features make a task attractive. It also improves the reproducibility of experiments by providing a measure of self-selection biases among workers.
For more, see our full paper, Crowdsourcing a HIT: Measuring Workers’ Pre-task Interaction on Microtask Markets, at HCOMP 2013.
Jason T. Jacques, University of St Andrews
Per Ola Kristensson, University of St Andrews