In the big picture, I believe innovation in crowdsourcing and human computation is, first and foremost, developing new kinds of labor channels. Since many of us come from computer science or other information-related disciplines, we sometimes have a tendency to overlook this, viewing our human helpers as mere computational resources.
Value of Crowdsourcing and Human Computation
Since labor is essentially a way of creating value, we come to a key question: What are the ingredients that make this source of labor more useful? We’ve long had the ability to send a request for assistance to a co-worker—or broadcast it to 1,000 co-workers. There were also job-tracking systems and other tools for coordinating work. So what changed? What are we getting out of this? However you are using crowdsourcing or human computation, my guess is that you’re benefiting from some combination of the following:
- Automated processes. When a human responds to our request, our programs can do something automatically—immediately return a result to a user, decide whether another judgment is needed in order to have a confident end result, or feed the result to the next stage of a larger process. This works thanks to platforms that let us programmatically post requests and receive the results in a structured way.
- On demand. If we wake up at 3:00 AM with a fresh idea about a problem due that morning, we can immediately submit a request, with some expectation of having answers by dawn. We now have shared platforms where internet users from anywhere in the world come to find something to do. Because they are distributed geographically, you can bet that somebody somewhere is awake and willing at any hour—it’s always midday somewhere, so work need not stop until the job is done.
- Diverse perspectives. Having a large, shared pool of potential helpers distributed around the world also brings to the table a treasure trove of cultural perspectives, skills, and personal viewpoints. This helps not only with tasks that are overtly creative, but also with any non-trivial application that depends on interpretation (e.g., image labeling, translation, paraphrasing, or content filtering).
- Speed-cost-quality control. For some problems, including many in natural language processing, image understanding, and artificial intelligence, you have some options: ⓐ do it in-house (accurate, but slow and expensive), ⓑ use crowdsourcing / human computation (cheaper and faster, but often less accurate), or ⓒ use a machine (fast and cheap, but even less accurate). For some jobs, adding ⓑ to the mix introduces a sweet spot. In addition, it may be possible to blend ⓑ and ⓒ for even finer control.
- Economic fluidity. Crowdsourcing and human computation have enabled us to pay workers in any increment (for small tasks), in any currency (e.g., national currencies, virtual currencies, or public recognition), and immediately, without any preexisting employment relationship to us. All of this leads to greater fluidity in the labor market, which has the potential to benefit everybody, including our human helpers.
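The "automated processes" benefit above hinges on one recurring decision: given the judgments collected so far, should the program request another one before trusting the result? A minimal sketch of that logic, using simple agreement as a stand-in for confidence (the thresholds and function names here are illustrative assumptions, not from any particular platform):

```python
from collections import Counter

def needs_more_judgments(labels, min_judgments=3, min_agreement=0.7):
    """Decide whether to post another task for this item.

    Keep requesting judgments until we have at least `min_judgments`
    answers and the most common answer accounts for at least
    `min_agreement` of them. (Thresholds are illustrative.)
    """
    if len(labels) < min_judgments:
        return True
    top_label, top_count = Counter(labels).most_common(1)[0]
    return top_count / len(labels) < min_agreement

# Hypothetical control loop against a platform client:
#   while needs_more_judgments(labels):
#       labels.append(platform.post_task(item).result())
```

In practice a platform API would supply the `post_task` step; the point is that the request/response cycle is structured enough for a program to drive it without a human in the loop on the requester's side.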
Through our work, we are expanding an emerging area of the labor market. Even if conditions change and the expected cost of doing tasks rises (as I personally hope and expect will happen), all of the above benefits will remain.
My postal carrier gave me some unexpected insight on this a few months ago. He was lamenting the current high unemployment rate in the US and complaining that automation has taken many jobs once held by humans. It’s an old cry. This was a USPS veteran who has witnessed the introduction of reliable handwriting recognition and automated sorting machines. My answer to him was, “We’re working on a solution!” In addition to solving our own computational problems, we are creating and expanding channels through which workers can create value and exchange it with companies and other entities that need information-oriented work to be done.
Human computation and crowdsourcing (as we use the terms today) represent a relatively new frontier. To bring the most benefit, we need to address some challenges:
- Open access. Find ways to include more people in more places. As my advisor, Ben Bederson, described in his post, working with people in Haiti had unique benefits (access to speakers of Haitian Creole) and difficulties (internet access, communication barriers, labor norms). We need to keep working to support other mediums (e.g., SMS) and different labor paradigms.
- Design for labor standards and fair pay. We—not policy makers—set the norms for labor standards and fair pay. As researchers and industry leaders, we are the ones who create the tools and craft the working arrangements that set the bar for those who come after us. This could mean paying more and treating workers well because it’s the right thing to do, but it definitely does not stop there. The specifics of our tools imply (or even enforce) defaults and viewpoints about what is normal.
- Design for worker efficiency. If we waste workers’ time with inefficient interfaces, they will be creating less value with their time, and probably receiving less in the end.
- Quality control. Anonymity and quality control are also integral issues. If bad workers are left free to pollute the system with garbage answers, the value to customers will drop, and even the good workers will earn less in the end.
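One common recipe for the quality-control problem above (a sketch of a widely used pattern, not a prescription from this post) is to seed tasks with "gold" items whose answers are known, score each worker against them, and only count votes from workers who pass a threshold. All names and the threshold are assumptions for illustration:

```python
from collections import Counter, defaultdict

def filter_and_vote(answers, gold, min_accuracy=0.75):
    """Score workers on gold items, drop low scorers, majority-vote the rest.

    answers: list of (worker_id, item_id, label) tuples
    gold:    dict mapping item_id -> known correct label (a small subset)
    Returns: dict mapping each non-gold item_id to its winning label.
    """
    # Score each worker against the gold items they answered.
    hits, seen = defaultdict(int), defaultdict(int)
    for worker, item, label in answers:
        if item in gold:
            seen[worker] += 1
            hits[worker] += (label == gold[item])
    # A worker is trusted only if they answered gold items accurately enough.
    trusted = {w for w in seen if hits[w] / seen[w] >= min_accuracy}

    # Majority vote per non-gold item, counting only trusted workers.
    votes = defaultdict(list)
    for worker, item, label in answers:
        if worker in trusted and item not in gold:
            votes[item].append(label)
    return {item: Counter(ls).most_common(1)[0][0] for item, ls in votes.items()}
```

Note the design choice: workers who never saw a gold item are simply not trusted here; a production system would instead route a few gold items to every worker before counting their votes.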
Related Presentations at CHI 2011
Much of this draws from conversations with Ben Bederson (my advisor) and our collaborators, Ginger Zhe Jin (economics), Siva Viswanathan (business), Echo Yiyan Liu (economics), and Shun Ye (business).
Alexander J. Quinn is a PhD student in Computer Science at the University of Maryland, working in the Human-Computer Interaction Lab (HCIL). He is advised by Ben Bederson. His dissertation research is about a method of using human computation and crowdsourcing to make complex decision-making processes more efficient and flexible. This grew out of his previous work on CrowdFlow, a framework for blending machine learning with human computation.