Transforming the Face of Labor

In the big picture, I believe innovation in crowdsourcing and human computation is, first and foremost, developing new kinds of labor channels. Since many of us come from computer science or other information-related disciplines, we sometimes have a tendency to overlook this, viewing our human helpers as mere computational resources.

Value of Crowdsourcing and Human Computation

Since labor is essentially a way of creating value, we come to a key question: What are the ingredients that make this source of labor more useful? We’ve long had the ability to ask a co-worker for help—or broadcast that request to 1000 co-workers. There were also job tracking systems and other tools for assigning and initiating work. So what changed? What are we getting out of this? However you are using crowdsourcing or human computation, my guess is that you’re benefiting from some combination of the following:

  1. Automated processes. When a human responds to our request, our programs can do something automatically—immediately return a result to a user, decide whether another judgment is needed in order to have a confident end result, or feed the result to the next stage of a larger process. This works thanks to platforms that let us programmatically post requests and receive the results in a structured way.
  2. On demand. If we wake up at 3:00 AM with a fresh idea on a problem that is due that morning, we can immediately submit a request, with some expectation of having answers by dawn. We now have shared platforms where internet users from anywhere in the world come to find something to do. Because they are distributed geographically, you can bet that somebody somewhere is awake and willing at any hour. This also means work never stops until the job is done since it’s always midday somewhere.
  3. Diverse perspectives. Having a large, shared pool of potential helpers distributed around the world also brings to the table a treasure trove of cultural perspectives, skills, and personal viewpoints. This helps not only with tasks that are overtly creative, but also any non-trivial application that depends on interpretation (e.g., image labeling, translation, paraphrasing, content filtering, etc.).
  4. Speed-cost-quality control. For some problems, including many in natural language processing, image understanding, and artificial intelligence, you have some options: ⓐ do it in-house (most accurate, but slow and expensive), ⓑ use crowdsourcing / human computation (cheaper and faster, but often less accurate), or ⓒ use a machine (fast and cheap, but even less accurate). For some jobs, adding ⓑ to the mix introduced a sweet spot. In addition, it may be possible to blend ⓑ and ⓒ for even finer control (see the sketch after this list).
  5. Economic fluidity. Crowdsourcing and human computation have enabled us to pay workers in any increment (for small tasks), any currency (i.e., national currencies, virtual currencies, public recognition, etc.), and immediately, without any preexisting employment relationship to us. All of this leads to greater fluidity in the labor market, which has the potential to benefit everybody, including our human helpers.
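
To make the blend in item 4 (and the programmatic access in item 1) concrete, here is a minimal sketch of a machine-first, crowd-fallback pipeline, assuming a classifier that reports its confidence and some callable that buys a crowd judgment. All names and the 0.9 threshold are hypothetical stand-ins, not any real platform’s API.

    # Hedged sketch: send each item to a machine model first and fall back to the
    # crowd only when the model is unsure. "classify" and "crowd_label" are
    # placeholders for your own model and platform client, not real APIs.
    from dataclasses import dataclass
    from typing import Callable, Tuple

    @dataclass
    class Judgment:
        label: str
        source: str  # "machine" or "crowd"

    def label_item(item: str,
                   classify: Callable[[str], Tuple[str, float]],
                   crowd_label: Callable[[str], str],
                   threshold: float = 0.9) -> Judgment:
        """Keep the machine answer when it is confident; otherwise buy a human judgment."""
        label, confidence = classify(item)
        if confidence >= threshold:
            return Judgment(label, "machine")        # option (c): fast and cheap
        return Judgment(crowd_label(item), "crowd")  # option (b): slower, paid, usually more accurate

    # Stand-in model and crowd so the sketch runs end to end:
    fake_model = lambda text: ("spam", 0.55)   # low-confidence machine guess
    fake_crowd = lambda text: "not spam"       # pretend crowd consensus
    print(label_item("Win a free cruise!!!", fake_model, fake_crowd))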

Transforming Labor

Through our work, we are expanding an emerging area of the labor market. Even if conditions change and the expected cost of doing tasks rises (as I personally hope and expect will happen), all of the above benefits will remain.

My postal carrier gave me some unexpected insight on this a few months ago. He was lamenting the currently high unemployment rate in the US and complaining that automation has taken many jobs once held by humans. It’s an old cry. This was a USPS veteran who has witnessed the introduction of reliable handwriting recognition and automated sorting machines. My answer to him was, “We’re working on a solution!” In addition to solving our own computational problems, we are creating and expanding channels through which workers can create value and exchange it with companies and other entities that need information-oriented work to be done.

Next Steps

Human computation and crowdsourcing (as we use the terms today) represent a relatively new frontier. To bring the most benefit, we need to address some challenges:

  1. Open access. Find ways to include more people in more places. As my advisor, Ben Bederson, described in his post, working with people in Haiti had unique benefits (access to speakers of Haitian Creole) and difficulties (internet access, communication barriers, labor norms). We need to keep working to support other mediums (e.g., SMS) and different labor paradigms.
  2. Design for labor standards and fair pay. We—not policy makers—control the norm of labor standards and fair pay. As researchers and industry leaders, we are the ones who create the tools and craft the working arrangements that set the bar for those who come after us. This could mean paying more and treating workers well because it’s the right thing to do, but it definitely does not stop there. Specifics of tools imply (or even enforce) defaults and viewpoints about what is normal.
  3. Design for worker efficiency. If we waste workers’ time with inefficient interfaces, they will be creating less value with their time, and probably receiving less in the end.
  4. Quality control. Anonymity and quality control are also integral issues. If bad workers are left free to pollute the system with garbage answers, the value to customers will decrease and even the good workers will earn less in the end (see the sketch after this list).
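
As a rough illustration of the quality-control point above, two common patterns are redundant judgments resolved by majority vote and screening workers with “gold” questions whose answers are known in advance. The function names below are illustrative assumptions, not any platform’s API.

    # Hedged sketch: majority voting over redundant judgments, plus scoring a
    # worker against gold (known-answer) questions. Names are illustrative only.
    from collections import Counter

    def majority_vote(judgments):
        """Return the most common answer and the share of workers who gave it."""
        answer, votes = Counter(judgments).most_common(1)[0]
        return answer, votes / len(judgments)

    def worker_accuracy(worker_answers, gold_answers):
        """Fraction of gold questions this worker answered correctly (None if none seen)."""
        seen = [q for q in worker_answers if q in gold_answers]
        if not seen:
            return None
        correct = sum(worker_answers[q] == gold_answers[q] for q in seen)
        return correct / len(seen)

    # Example usage:
    print(majority_vote(["cat", "cat", "dog"]))                  # ('cat', 0.666...)
    print(worker_accuracy({"q1": "A", "q2": "B"}, {"q1": "A"}))  # 1.0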

Related Presentations at CHI 2011

Web Workers Unite! Addressing Challenges of Online Laborers – Mon 11:00 AM
Human Computation: A Survey and Taxonomy of a Growing Field – Tue 2:00 PM

Acknowledgments

Much of this draws from conversations with Ben Bederson (my advisor) and our collaborators, Ginger Zhe Jin (economics), Siva Viswanathan (business), Echo Yiyan Liu (economics), and Shun Ye (business).

Background

Alexander J. Quinn is a PhD student in Computer Science at the University of Maryland, working in the Human-Computer Interaction Lab (HCIL). He is advised by Ben Bederson. His dissertation research is about a method of using human computation and crowdsourcing to make complex decision-making processes more efficient and flexible. This grew out of his previous work on CrowdFlow, a framework for blending machine learning with human computation.

Unpacking Crowdsourcing during Crises

For those of us who study crises, the term crowdsourcing just keeps coming up. Though Ushahidi is the most referenced example, other platforms and virtual volunteer efforts (CrisisMappers, CrisisCamps, Humanity Road) use the term crowdsourcing, in one way or another, to describe their work. Examining the range of behaviors that these efforts encompass, crowdsourcing becomes a nebulous term, a catchall descriptor that, if taken at face value, can obscure the diverse socio-technical phenomena underneath.

Volunteerism during disasters and crises is not a new phenomenon. Researchers of sociology and disaster have long recognized that during the aftermath of an event, people will spontaneously converge on the site to offer assistance [1, 2]. In recent years, the spontaneous volunteer has met the connective power of social media, opening up new opportunities for contributions of many varieties from people all over the world.

Crowd-driven and crowd-leveraging activities during crisis events take a myriad of different forms, including uploading data from those affected or responding on the “ground” during an event, using the crowd to process (filter, verify, map, relay) data, and outsourcing tool development and debugging. During recent disasters (earthquakes and tsunamis in Haiti, New Zealand and Japan, floods and wildfires in the U.S. and Australia, political protests and violence in the Middle East) groups of volunteers used the Ushahidi platform to collect reports from the ground, Skype chat rooms to verify reports and coordinate media monitoring strategies, CrisisCommons wikis to consolidate information, and Twitter to coordinate the movement of responders and supplies [3]. Often, these efforts were set in intentional motion by an individual or group. For instance, a person or organization can initiate a Google Map or Ushahidi crowdmap and encourage others to participate in populating the map. In other cases, spontaneous volunteers self-organized into ad-hoc groups to help process and route information during the immediate aftermath of tragic events.

Considering the diverse and varied behaviors that constitute “crowdsourcing” during disasters, it’s possible that the term has outgrown its usefulness as a one-off descriptor. My research looks to unpack crowdsourcing during crisis events by uncovering the socio-technical interactions taking place between those affected, the spontaneous virtual volunteers, response agencies, and their ever-evolving tool sets.

References

[1] Fritz, C. E. & Mathewson, J. H. Convergence Behavior in Disasters: A Problem in Social Control, Committee on Disaster Studies, National Academy of Sciences, National Research Council, Washington DC, 1957. 21.

[2] Kendra, J. M. & Wachtendorf, T. Reconsidering Convergence and Converger: Legitimacy in Response to the World Trade Center Disaster. Terrorism and Disaster: New Threats, New Ideas: Research in Social Problems and Public Policy, 11 (2003), 97-122.

[3] Starbird, K. & Palen, L. (2011). “Voluntweeters:” Self-Organizing by Digital Volunteers in Times of Crisis. Proceedings of the ACM 2011 Conference on Computer Human Interaction (CHI 2011), Vancouver, BC, Canada

Challenges of HComp Around the World

Ben Bederson
University of Maryland
Human-Computer Interaction Lab

In our work looking at natural language translation, we identified a gap in the range of solutions available between purely automated solutions and purely mechanical ones provided by individual or collaborative bilingual humans.  For many language tasks, automated solutions are not good enough, and human solutions are not fast enough. So we have been building “MonoTrans”, a solution that combines automated solutions *with* human participation. The key is that to scale throughput, we require humans who can read/write only the source or target language (but not both). MonoTrans has people on each side of the language divide do simple tasks such as voting, identifying errors and paraphrasing portions of a sentence. Breaking down complex problems into bits that most people can do is the key to scaling up with the crowd. For complex problem domains, this is an interesting challenge, and one we think would have benefit for a wide range of domains.
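
As a rough sketch (not the actual MonoTrans implementation), the voting step described above might reduce to something like the following: machine translation proposes candidates, and target-language monolinguals vote on which one reads best. The function name and the simple-majority rule are assumptions made for illustration.

    # Hedged sketch of one MonoTrans-style step: target-language monolinguals vote
    # on machine-generated candidate translations. Not the real system's code.
    from collections import Counter

    def pick_candidate(candidates, votes):
        """candidates: list of MT outputs; votes: indices chosen by monolingual workers.
        Returns the winning candidate and whether it won a clear majority (if not,
        the sentence might instead go back for error marking or paraphrasing)."""
        best_index, best_votes = Counter(votes).most_common(1)[0]
        return candidates[best_index], best_votes > len(votes) / 2

    # Example: three MT candidates, five monolingual votes.
    winner, confident = pick_candidate(
        ["Help is on the way.", "The help arrives.", "Aid comes soon."],
        [0, 0, 2, 0, 1])
    print(winner, confident)  # Help is on the way. True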

However, even when the problems have been broken down into small bits that require skills that many people have, finding participants with the skills needed to participate remains a challenge. One recent activity of ours has been participation in a competition attempting to translate text messages from the Haitian 2010 earthquake. Our approach depended on participation by non-English speaking people who could read/write Haitian Creole. Given the lack of awareness of micro-task services in Haiti, we hired individuals to participate through People In Need, a Haitian non-profit conveniently run by my brother. However, even with a strong social connection, a local bilingual staff, and the ability to pay relatively high wages, we still ran into incredible challenges. Internet connectivity was spotty. Communication with our bilingual coordinators in Haiti was difficult, and helping the participants understand this rather unusual task required a lot of hand-holding. Perhaps most strangely, there was an apparent cultural disdain for hourly labor – which obviously is incompatible with this kind of work.

Generalizing human computation to non-English speaking locales in regions with poor connectivity is important, especially if we want to address a more complete range of problems; figuring out how to do it remains difficult. I believe we need to design for the broadest possible participation if we want to really leverage human computation solutions to address society’s problems.

Position paper:

Participation in Human Computation

Collaborators on MonoTrans:

Chang Hu, Yakov Kronrod, Philip Resnik

Related presentations at CHI 2011:

MonoTrans2: A New Human Computation System to Support Monolingual Translation – Tue 11:00 AM

Web Workers Unite! Addressing Challenges of Online Laborers – Mon 11:00 AM

Human Computation: A Survey and Taxonomy of a Growing Field – Tue 2:00 PM

Background:

Ben Bederson is an Associate Professor of Computer Science, a previous director of the Human-Computer Interaction Lab at the University of Maryland, and co-founder of the International Children’s Digital Library (ICDL).  Since 2002, the ICDL has made a diverse collection of exemplary children’s books freely available online, bringing together a large community of teachers, children, librarians, and volunteers from every corner of the world.  The site currently features about 4,500 books in 54 languages, and receives about 100,000 unique visitors per month, a significant proportion of which come from schools in developing nations.  The desire to make the entire collection accessible to the entire community of users (who speak countless different languages) was our primary motivation to find ways to engage the crowd to help translate the books.

Is crowdsourcing changing the who, what, where, and how of creative work?

By Mira Dontcheva (Adobe Systems) and Elizabeth Gerber (Northwestern University)

The Web’s ability to connect individuals has fundamentally changed the way creative work is done. Today, websites like 99designs and CrowdSpring allow businesses to crowdsource professional creative solutions. Clients publicly solicit creative content, such as logos, ads, or websites, and pick the best design from hundreds of alternatives created by designers from all over the world. In a more playful setting, platforms like Worth1000 and LayerTennis encourage contributors to compete with each other and collaboratively create new artwork. And artistic projects like The Johnny Cash Project and Eric Whitacre’s Virtual Choir combine hundreds or thousands of contributions into an art form that appears greater than the sum of its parts.

What makes this collaborative creative work successful and can this process scale beyond a few examples? Is success due to an impressive leader shepherding the creative work as Kurt Luther claims in his post? Or is it more about the iterative feedback mentioned by Steven Dow and Scott Klemmer? What are the characteristics of a crowdsourcing environment that fosters creativity and empowers its contributors to create something new?

For the last fifty years, organizational researchers concerned with fostering creativity have studied individual and group creative processes and have found that environments that are supportive of creativity offer:

  • task autonomy and freedom, which allow workers to have a sense of ownership over their work and ideas,
  • intellectually challenging work,
  • supervisory encouragement including setting clear goals and frequent and open interactions between a supervisor and his/her team,
  • organizational encouragement including encouraging workers to take risks, evaluating new ideas fairly without too much criticism, and offering rewards and recognition for creativity,
  • and work group supports through team members with diverse backgrounds, openness to ideas, and a shared commitment to a project [1].

As HCI designers, we have an opportunity to apply organizational theory to crowdsourcing platforms, while remembering that motivation and work behavior are closely linked. Projects such as The Johnny Cash Project suggest that a shared commitment to a project and recognition for creativity motivate participation.

In our own research we have been asking workers on Amazon’s Mechanical Turk to engage in creative work. Perhaps not surprisingly, as it was not built with creative tasks in mind, the Mechanical Turk platform does not support or encourage the creative process. We look forward to attending the workshop, where we can share our design recommendations for crowdsourcing platforms that foster creative projects and discuss how crowdsourcing is changing the who, what, where, and how of creative work.

Mira Dontcheva is a senior research scientist at Adobe Systems where she does research on  search and sensemaking interfaces, end-user programming, and most recently creativity. As a Northwestern Design Professor, Elizabeth Gerber researches the role of technology in creativity and innovation.

References:
1. Amabile, T., Conti, R., Coon, H., Lazenby, J., and Herron, M. Assessing the work environment for creativity. The Academy of Management Journal 39, 5 (1996), 1154–1184.

Workshop Paper:
Crowdsourcing and Creativity

Cultural Differences on Crowdsourcing Marketplaces

by Gary Hsieh (Michigan State University)

Pooja is a 24-year-old student who lives in Kottayam, India. Lisa is a 53-year-old retiree, living thousands of miles away in Kissimmee, Florida. Despite their differences in age, gender, socioeconomic status, and cultural background, they do have one thing in common – they both just finished the same HIT on Amazon’s Mechanical Turk.

Crowdsourcing marketplaces such as Mechanical Turk are attracting more and more geographically diverse workers. Existing demographic studies of Mechanical Turk have shown that in 2008, workers from the US made up 76% of the total worker population on MTurk. By 2010, the percentage of US workers had dropped to 47%, while the percentage of workers from India, for example, had risen from 8% to 34%. This increase in geographic diversity provides two key benefits. Greater worker diversity brings a wider range of approaches, skills, and knowledge to the tasks, which can result in significantly higher quality of work. Geographic diversity also means that crowdsourcing marketplaces can have active workers at all hours of the day, reducing the time it takes for work to get completed.

However, the increase in workers from around the world poses a new set of challenges for requesters of work. Much prior work shows a significant effect of cultural background on individuals’ thoughts, values, and behaviors. The increasingly diverse cultural backgrounds on crowdsourcing marketplaces may interact with incentive types, amounts, and task types to affect workers’ task performance, engagement, and selection. This matters to corporate requesters who are trying to maximize economic efficiency from crowdsourcing services, and also to researcher-requesters who are trying to control for these factors in experiments conducted over these services.

With my students and my collaborator Vaughn Hester at CrowdFlower, I am surveying MTurk workers to gain a better understanding of their socioeconomic status and cultural background. Our current survey explores the differences across workers from different countries. In addition, it studies how cultural background can affect workers’ selection of and performance on crowdsourcing tasks. Ultimately, we hope to use our findings to help design better interactions and interfaces that support and leverage workers’ cultural differences.

crowdsourcing general computation, one application at a time

If you can leverage a crowd to do anything, what would it be?

My collaborators and I are studying ways to harness the crowd to do more by coupling the wealth of (computer) algorithmic understanding with our on-going discoveries of how the crowd works.  I tend to think of this as crowdsourcing 2.0, or crowd programming 102: now that we know a crowd exists and that we have programming access into it, what algorithms/interfaces/crowd-interfaces do we use to control the crowd for solving complex tasks?

I strongly believe that this is quickly becoming a hot area, because there is so much we don’t know about how to organize the crowd around more complex tasks. My position paper with Eric Horvitz, Rob Miller, and David Parkes sets out an agenda identifying three subareas of study in this space, and recent works like Turkomatic and CrowdForge are building the tools that will help us explore this space (as well as exploring it in interesting ways themselves). Instead of rehashing the arguments in our paper and these works, let me argue a slightly different point:

We should build super novel crowd-powered applications that require an understanding of how to harness the collective power of the crowd to solve larger, more complex problems.

I believe crowdsourcing 2.0 applications will help move us forward as an academic community, and provide tremendous value to end users in the meanwhile. In this vein, I am particularly excited about my recent, on-going work with Edith Law on collaborative planning, where we are exploring how to leverage a crowd to come up with a plan for solving a problem, in the context of
(a) breaking down high level search queries into actionable steps as a new approach to web search, and
(b) collaborative event planning, either with family and friends, or crowdsourced out [*].

Since Edith and I love food, we recently planned a potluck using our tool (or rather, the potluck participants did), where people specify dishes they can bring, add to a wish list, make requests, fulfill wishes and requests, and so on, to collaboratively plan a menu. Here is a picture of most of the entrees (appetizers/salads/desserts were in a different room, and yes, we ate in courses):

Entrees at our crowdsourced potluck (3/25/11)

These and other crowdsourcing 2.0 applications will draw on innovations in task decomposition (how we should break up and combine the work), crowd control of program flow (having the crowd tell us what needs work and where to search), and human program synthesis (having humans come up with the steps that make up a plan). But while we went into these applications thinking algorithmic paradigm first, we find more and more that designing for how people can best think/work/decompose plays an equally important role in enabling such applications. How these pieces fit together is something we should study academically, but let’s have the applications drive us (and feed us… I had a great meal).
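
As a toy illustration of what task decomposition plus crowd-controlled program flow can look like (in the spirit of CrowdForge-style partition/map/reduce, but not its actual API, and not our planning tool’s code), consider:

    # Hedged sketch: a partition / map / reduce skeleton where every stage is a
    # question posed to the crowd. "ask_crowd" stands in for a real platform call.
    def crowd_plan(goal, ask_crowd):
        # 1. Partition: the crowd breaks the goal into subtasks.
        steps = ask_crowd("partition", f"List the steps needed to: {goal}")
        # 2. Map: other workers complete each subtask independently.
        partials = [ask_crowd("map", f"Do this step: {s}") for s in steps]
        # 3. Reduce: the crowd merges the partial results into one coherent plan.
        return ask_crowd("reduce", "Combine into one plan: " + "; ".join(partials))

    # Fake crowd so the sketch runs end to end:
    def fake_crowd(stage, prompt):
        if stage == "partition":
            return ["pick a venue", "invite guests", "collect dish offers"]
        return f"[crowd answer to: {prompt[:40]}...]"

    print(crowd_plan("plan a potluck", fake_crowd))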

Haoqi Zhang is a 4th year PhD candidate at Harvard University. Many of the ideas expressed here are from collaborations and conversations with Eric Horvitz, Edith Law, Rob Miller, and David Parkes.

[*] Please be patient with us if you are looking forward to seeing the first crowdsourced wedding. If you’d like to have your wedding crowdsourced, please contact me immediately.

Humanizing Human Computation

The Internet is packed with crowds of people building, interpreting, synthesizing, and establishing a hodgepodge of interesting and valuable artifacts. Whether the crowds are creating something as grand as an encyclopedia of all world knowledge or as mundane as a discussion of good restaurants in Pittsburgh, PA, the human capability to interact socially and to create an ad hoc whole out of many individual accomplishments is staggering. However, current efforts in human computation largely do not take advantage of these amazing human capabilities. They focus on single workers and rigid functions. The common computational tasks suggested to newcomers on Amazon’s Mechanical Turk include, among others, tagging images and classifying web content. In these tasks, a worker is given some input data (source images) and performs some ‘human’ function on it to produce useful output (tags) that the job requester then incorporates into their final product.
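
The “rigid function” framing in that last sentence can be written down in a couple of lines, which is exactly the limitation being pointed out. A minimal sketch with hypothetical names (this is not any platform’s real interface):

    # Hedged sketch of the input -> output model of human computation described
    # above: the worker is treated as a pure function from an image URL to tags.
    from typing import Callable, Dict, List

    HumanTagger = Callable[[str], List[str]]  # a "human function": image URL -> tags

    def tag_images(image_urls: List[str], worker: HumanTagger) -> Dict[str, List[str]]:
        """Apply the human 'function' to each input, just as a machine function would be."""
        return {url: worker(url) for url in image_urls}

    # Stand-in worker so the example runs:
    print(tag_images(["http://example.com/cat.jpg"], lambda url: ["cat", "animal"]))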

While perhaps expedient, such tasks do not leverage some key, unique capabilities that separate human workers from input-output machines. Without training or delay, humans can think creatively, interact socially, and make highly nuanced judgments. The next generation of human computation and crowdsourcing ought to leverage more of the ‘human’-ness in workers. Yet how do we incorporate these uniquely human characteristics like creativity and social interaction into crowdsourcing and encode them into markets? Efforts like CrowdForge suggest there may in fact be an answer to this question, demonstrating just how powerful crowdworkers can be in highly complex, generative tasks like writing news articles. Similarly, I’ve seen success in allowing Turkers to self-organize to complete a task in a collaborative text editor.

Check out this YouTube video: http://www.youtube.com/watch?v=VEGhXNcyTRg
MTurkers collaborating to translate in an Etherpad shared text editor

Collaboration might be one way to get at the core of the ‘human’ element of human computation. Workers in real-world organizations are well adapted for teamwork, dividing and directing individual expertise where it is needed and providing social motivation. This might be extended into crowdsourcing. Could a future market enable projects rather than tasks that require a team of people who curate their own final product, with milestones and payment based both on individual achievement and overall progress? Might workers take on extemporized or formal roles, for example having experts in editing proofread the work of those more skilled in content generation? Can social interaction methods such as work teams provide encouragement as they already have in Wikipedia and also foster higher quality end products? On the other hand, what are the costs of collaboration in rapid-fire microtasks? Are there certain types of tasks for which collaboration is well suited?

By pushing the boundaries of both the types of tasks we use in human computation and the expectations we hold for workers, we can enable a host of new possibilities in crowdsourcing. The melding of social interaction with microtasks is worthy of much more consideration.

Jeff Rzeszotarski (rez-oh-tar-ski) is a first year PhD student in human-computer interaction at Carnegie Mellon University. His research primarily concerns synthesis and interpretation in online content generation communities and extending crowdsourcing techniques into the social realm.

Crowdsourcing Contextual User Information

by Brian Tidball, PhD Student (ID StudioLab, Delft University of Technology)

The creative activities common in crowdsourcing have promising links to the creative activities used in generative and participatory user research.

As Pieter Jan Stappers and I wrote in our position paper, the ID-StudioLab has been working with and developing design tools and methods that engage users and elicit user-driven information for the design process. These participatory and generative techniques gather rich, multilayered information about users and their lives: building empathy, informing, and inspiring the design process. Unfortunately, these techniques are resource-intensive (time, money, expertise), which impedes their use in practice. We see crowdsourcing as an opportunity to more readily access rich information from and about users.

MT Sustainable: By asking ‘Turkers’ to submit personal photos of sustainable living, we gained new insights into the role of sustainability in people’s lives.

Our initial explorations examine this idea of crowdsourcing user insights. Preliminary findings highlight the ability to collect rich and personal information, emphasize the role of intrinsic motivations (interest in the topic, supporting others, etc.), and show that we can not only elicit a single focused response from users, but also engage them in creative dialog (see our position paper for a little more info).

From these experiences I developed a framework to depict the key elements of the crowdsourcing process as they relate to accessing user insights.

The designer’s crowdsourcing framework

The blue elements identify the items that the designer (solicitor) can influence in order to access a segment of the crowd and motivate them to provide a desired response. The cyclical elements of feedback and discussion, in particular, point to a view of crowdsourcing that goes beyond the limitations of mere outsourcing. This framework provides a foundation for further study of both the process and the results of crowdsourcing user information, as we continue to build our understanding of crowdsourcing as a tool for HCI.

Leading the Crowd

by Kurt Luther (Georgia Tech)

Who tells the crowd what to do? In the mid-2000s, when online collaboration was just beginning to attract mainstream attention, common explanations included phrases like “self-organization” and “the invisible hand.” These ideas, as Steven Weber has noted, served mainly as placeholders for more detailed, nuanced theories that had yet to be developed [6]. Fortunately, the last half-decade has filled many of these gaps with a wealth of empirical research looking at how online collaboration really works.

One of the most compelling findings from this literature is the central importance of leadership. Rather than self-organizing, or being guided by an invisible hand, the most successful crowds are led by competent, communicative, charismatic individuals [2,4,5]. For example, Linus Torvalds started Linux, and Jimmy Wales co-founded Wikipedia. The similar histories of these projects suggest a more general lesson about the close coupling between success and leadership. With both Wikipedia and Linux, the collaboration began when the project founder brought some compelling ideas to a community and asked for help. As the project gained popularity, its success attracted new members. Fans wanted to get involved. Thousands of people sought to contribute–but how could they coordinate their efforts?

(from “The Wisdom of the Chaperones” by Chris Wilson, Slate, Feb. 22, 2008)

Part of the answer, as with traditional organizations, includes new leadership roles. For a while, the project founder may lead alone, acting as a “benevolent dictator.” But eventually, most dictators crowdsource leadership, too. They step back, decentralizing their power into an increasingly stratified hierarchy of authority. As Wikipedia has grown to be the world’s largest encyclopedia, Wales has delegated most day-to-day responsibilities to hundreds of administrators, bureaucrats, stewards, and other sub-leaders [1]. As Linux exploded in popularity, Torvalds appointed lieutenants and maintainers to assist him [6]. When authority isn’t decentralized among the crowd, however, leaders can become overburdened. Amy Bruckman and I have studied hundreds of crowdsourced movie productions and found that because leaders lack technological support to be anything other than benevolent dictators, they struggle mightily, and most fail to complete their movies [2,3].

This last point is a potent reminder: all leadership is hard, but leading online collaborations brings special challenges. As technologists and researchers, we can help alleviate these challenges. At Georgia Tech, we are building Pipeline, a movie crowdsourcing platform meant to ease the burden on leaders, but also help us understand which leadership styles work best. Of course, Pipeline is just the tip of the iceberg–many experiments, studies, and software designs can help us understand this new type of creative collaboration. We’re all excited about the wisdom of crowds, but let us not forget the leaders of crowds.

Kurt Luther is a fifth-year Ph.D. candidate in social computing at the Georgia Institute of Technology. His dissertation research explores the role of leadership in online creative collaboration.

References

  1. Andrea Forte, Vanesa Larco, and Amy Bruckman, “Decentralization in Wikipedia Governance,” Journal of Management Information Systems 26, no. 1 (Summer 2009): 49-72.
  2. Kurt Luther, Kelly Caine, Kevin Ziegler, and Amy Bruckman, “Why It Works (When It Works): Success Factors in Online Creative Collaboration,” in Proceedings of GROUP 2010 (New York, NY, USA: ACM, 2010), 1–10.
  3. Kurt Luther and Amy Bruckman, “Leadership in Online Creative Collaboration,” in Proceedings of CSCW 2008 (San Diego, CA, USA: ACM, 2008), 343-352.
  4. Siobhán O’Mahony and Fabrizio Ferraro, “The Emergence of Governance in an Open Source Community,” Academy of Management Journal 50, no. 5 (October 2007): 1079-1106.
  5. Joseph M. Reagle, “Do As I Do: Authorial Leadership in Wikipedia,” in Proceedings of WikiSym 2007 (Montreal, Quebec, Canada: ACM, 2007), 143-156.
  6. Steven Weber, The Success of Open Source (Harvard University Press, 2004).

Workshop Paper
Fast, Accurate, and Brilliant: Realizing the Potential of Crowdsourcing and Human Computation

Capitalizing on Mobile Moments

When mobile, the time people have to engage in an activity is generally short — on the order of minutes and sometimes as short as a few seconds. Unlike non-mobile situations, such as being at the office or at home, these time periods, which we characterize as mobile moments, are fleeting. Tasks performed at such times need to be facilitated by a mobile interface that lets users get to the core of their activity as quickly and easily as possible, with minimal overhead.

Mobile moments are also potential opportunities to harness human resources for computation, especially when people have free time on their hands. The smartphone, being always available and on, enables people to spend such free time on activities that are pleasant and entertaining. If the activities are, as a side effect, beneficial to others, mobile moments can be leveraged for the greater good. Thus, empowered by their smartphones, crowdsourcing efforts can tap such users in their mobile moments to perform human computation tasks. These tasks could be location-based but need not be; they simply need to be performed in those serendipitous moments.

Our work on FishMarket, a mobile-based prediction market game, was born out of an interest in crowdsourcing amongst enterprise workers during their mobile moments.  The game enables these workers to use their mobile devices, anytime and anyplace, to share specialized knowledge quickly and efficiently.  The game’s user experience evolved through several iterations as we attempted to make the game concepts accessible and engaging, and game play easy and quick, to encourage people to play the game during their brief mobile moments.
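
FishMarket’s internal mechanics aren’t described above, but as a general illustration of how a prediction market turns individual trades into a shared probability estimate, here is Hanson’s logarithmic market scoring rule (LMSR), a common choice for such games; it is an assumption for illustration, not necessarily the mechanism FishMarket uses.

    # Hedged sketch of an LMSR prediction market: cost C(q) = b * ln(sum_i exp(q_i / b)),
    # with prices given by the softmax of the outstanding share quantities.
    import math

    def lmsr_cost(quantities, b=100.0):
        """Market maker's cost function C(q)."""
        return b * math.log(sum(math.exp(q / b) for q in quantities))

    def lmsr_prices(quantities, b=100.0):
        """Current price (implied probability) of each outcome."""
        weights = [math.exp(q / b) for q in quantities]
        total = sum(weights)
        return [w / total for w in weights]

    def buy(quantities, outcome, shares, b=100.0):
        """Charge a trader for `shares` of `outcome`; return (new_quantities, cost)."""
        new_q = list(quantities)
        new_q[outcome] += shares
        return new_q, lmsr_cost(new_q, b) - lmsr_cost(quantities, b)

    # Example: two-outcome market; a trader confident in outcome 0 buys 50 shares.
    q = [0.0, 0.0]
    q, cost = buy(q, outcome=0, shares=50)
    print([round(p, 3) for p in lmsr_prices(q)], round(cost, 2))  # prices tilt toward outcome 0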

The space and the types of possible human computation tasks for mobile moments are largely unmapped;  we are interested in exploring these possibilities.  Also, we are particularly interested in the design aspects (e.g., UI, game, social) as well as attributes of the crowdsourcing tools. Examples of attributes include how the tools channel experts’ desire to solve problems, how the tools tap into people’s willingness to share, and how the tools use the crowd to sort through the solutions to find the best one.

Alison Lee and Richard Hankins are Principal Research Scientists at Nokia Research Center in Palo Alto.  Alison is developing mobile services that enhance mobile work, mobile collaboration, and mobile recreation.  Richard’s research focus is on future mobile devices and systems. They both hold a Ph.D. in Computer Science — Alison from the University of Toronto and Richard from the University of Michigan.