— “So you want to start a company? That’s cool! Do you have a website? What’s the product? How about a logo?”
— “Oh, so you are crowdsourcing the website? The logo as well… and wait… are you going to crowdsource the product too!?!”
— “… are you sure this is a good idea?”
Crowdsourcing contests — events that solicit solutions to problems via an open call, with prizes for the best entries — are generating a lot of excitement as a mechanism for organizations to accomplish tasks. Crowd power is harnessed today to design everything from t-shirts to software to artificial intelligence algorithms. Not surprisingly, crowdsourcing doesn’t always work. What is surprising is how and where it can fail: our results suggest that crowdsourcing contests can fail to identify the expert workers in a crowd and can even induce mediocre performance. Intrigued? Read below for more details, or you can always catch the full paper here.
“There is this misconception that you can sprinkle crowd wisdom on something and things will turn out for the best. That’s not true. It’s not magic.” – Thomas Malone, 2009.
We aim to quantitatively characterize the pros and cons of crowdsourcing. When is crowdsourcing worth it? What kinds of tasks should be crowdsourced? Under what circumstances? How can an organization find the best person for a job and incentivize her to do her best?
Our paper develops a simple decision-making framework to address these questions. The first section provides a taxonomy of tasks according to their suitability for crowdsourcing. The utility that an organization derives from a task depends on the task’s nature and on the skills and incentives of the workers completing it. For example, tasks may be selective, like software component design, where only the best submission matters, or integrative, such as idea generation, where every contribution adds value. We formalize this distinction mathematically and use it to recommend which tasks should be crowdsourced.
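To make the selective/integrative distinction concrete, here is a minimal sketch. These are our own toy utility functions, not the paper's exact formulas: a selective task keeps only the best submission, while an integrative task aggregates every contribution, here with an assumed geometric discount on weaker ones.

```python
def selective_utility(submissions):
    """Selective task: only the single best submission matters."""
    return max(submissions)

def integrative_utility(submissions, weight=0.5):
    """Integrative task: every contribution adds value, with
    (assumed) geometrically diminishing returns for weaker ones."""
    total = 0.0
    for i, s in enumerate(sorted(submissions, reverse=True)):
        total += s * weight ** i
    return total

subs = [0.2, 0.9, 0.5]
print(selective_utility(subs))    # only the 0.9 submission counts
print(integrative_utility(subs))  # 0.9 + 0.5*0.5 + 0.2*0.25
```

Under utilities like these, adding more contributors always helps an integrative task, while a selective task gains only when a newcomer beats the current best — one reason the two types call for different crowdsourcing decisions.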
The second half of the paper focuses not on the tasks to be completed, but on the workers completing them. A key advantage of the crowdsourcing model is that task selection is decentralized: it is essentially delegated to the worker actually doing the work. Tasks are self-selected. With this agency, workers can naturally choose tasks that they are good at and enjoy. In contrast, task assignment by a manager requires that he have detailed information about each worker’s skills and preferences. We address the natural question: how useful is crowdsourcing for skill identification? Can it help optimally match workers to tasks?
Under a model where only one task is to be completed and several workers are available, we find that crowdsourcing works well when:
- managers are uncertain about the skills of workers.
- workers are diverse in their task skills.
- workers have low default effort levels.
However, with many similar, hardworking workers, crowdsourcing can do worse than manager assignment, even if the manager has little knowledge of worker skills.
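The single-task findings can be illustrated with a toy Monte Carlo simulation. This is entirely our own construction, not the paper's model: it ignores prize costs and strategic effort choices, which is why the contest never strictly loses here, but it does show the contest's advantage collapsing as the crowd becomes homogeneous and hardworking.

```python
import random

def simulate(n_workers=10, skill_spread=1.0, default_effort=0.5,
             n_trials=2000, seed=0):
    """Average output of a contest vs. manager assignment.

    Contest: every worker exerts full effort and the best output wins.
    Manager: skills are unobserved, so a worker is picked at random
    and exerts only the default effort level.
    """
    rng = random.Random(seed)
    contest = manager = 0.0
    for _ in range(n_trials):
        base = 0.5 * (1.0 - skill_spread)  # centre skills around 0.5
        skills = [base + skill_spread * rng.random()
                  for _ in range(n_workers)]
        contest += max(skills)
        manager += default_effort * rng.choice(skills)
    return contest / n_trials, manager / n_trials

# Diverse skills and low default effort: the contest wins by a wide margin.
c1, m1 = simulate(skill_spread=1.0, default_effort=0.3)
# Similar, hardworking workers: the gap all but vanishes — and once
# prize costs are accounted for, the contest can come out behind.
c2, m2 = simulate(skill_spread=0.1, default_effort=0.95)
```

The three bullet conditions above map directly onto the parameters: manager uncertainty is the random pick, worker diversity is `skill_spread`, and `default_effort` is the effort a worker supplies without contest incentives.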
With multiple, diverse tasks to be completed by many workers:
- crowdsourcing contests can perform as well as optimal assignment of workers to tasks for non-specialized tasks (i.e., if enough skilled workers are available).
- crowdsourcing contests can provide very low utility even with many strong workers and just one weak worker.
- crowdsourcing contests can perform badly for highly specialized tasks. Instead of drawing highly skilled workers out of the crowd, crowdsourcing can lead to mediocre performance by everyone.
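The specialization failure mode has a simple mechanism, sketched below under assumptions of our own (each worker enters the contest for the task she is best at; a task's output is the best skill among its entrants): when everyone is marginally better at the same task, the crowd piles onto it and other tasks go undone, while a manager with full information would cover them all.

```python
from itertools import permutations

def contest_output(skills):
    """Each worker self-selects the task where she is strongest; a
    task's output is the best skill among its entrants, 0 if empty."""
    n_tasks = len(skills[0])
    best = [0.0] * n_tasks
    for row in skills:
        t = max(range(n_tasks), key=lambda j: row[j])
        best[t] = max(best[t], row[t])
    return sum(best)

def optimal_output(skills):
    """Best one-worker-per-task assignment (brute force; fine for toys)."""
    n_tasks = len(skills[0])
    return max(sum(skills[p[j]][j] for j in range(n_tasks))
               for p in permutations(range(len(skills)), n_tasks))

# Every worker is slightly better at task 0, so all of them enter it
# and task 1 attracts nobody: self-selection leaves a task undone.
skills = [[0.9, 0.8], [0.7, 0.6], [0.6, 0.5]]
```

Here the contest yields only task 0's output, while the informed assignment also covers task 1 — a miniature version of the mediocre-outcome result for specialized tasks.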
Improvements in communication technologies and information flow have changed the fundamental organization of both our social and commercial networks. Crowdsourced work is the natural next step toward a decentralized, global labor force. While the paper provides coarse guidelines, a much finer understanding is required: What are the target tasks for crowds, and who are the target crowds? How can larger jobs be divided into decoupled tasks that can then be crowdsourced? Is it possible that future generations around the world will make their living as full-time crowd workers? Much further theoretical and field work is required before these questions can be satisfactorily answered.
For more, see our full paper, To Crowdsource or not to Crowdsource?.