CrowdCamp Report: Waitsourcing, approaches to low-effort crowdsourcing

Crowdsourcing is often approached as a full-attention activity, but it can also be used for applications so small that people perform them almost effortlessly. What possibilities are afforded by pursuing low-effort crowdsourcing?

Low-effort crowdsourcing is possible through a mix of fine-grained tasks, unobtrusive input methods, and an appropriate setting. To explore the possibilities of low-effort crowdsourcing, we designed and prototyped an eclectic mix of ideas.

Browser waiting
In our first prototype, we built a browser extension that allows you to complete tasks while waiting for a page to load.

Tab shown loading, while a browser popup shows a "choose the outlier image" task
A Chrome extension that allows users to perform simple tasks (e.g., odd image selection) while a page is loading

Getting tasks loaded and completed in the time it takes a page to load is certainly feasible. A benefit of doing so is that the user's flow is already disrupted by the page load, so the task costs little extra attention.
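The matching logic can be sketched simply: given an expected wait, serve the largest micro-task that fits. This is a hypothetical sketch, not code from the extension; the task names and duration estimates are illustrative.

```python
# Hypothetical micro-task picker: serve the longest task that still fits
# within the expected page-load wait. Durations are made-up estimates.
TASKS = [
    {"name": "odd-image-out", "est_seconds": 3.0},
    {"name": "image-label", "est_seconds": 6.0},
    {"name": "sentence-rating", "est_seconds": 10.0},
]

def pick_task(expected_wait_seconds, tasks=TASKS):
    """Return the longest task that fits in the wait, or None if none fit."""
    fitting = [t for t in tasks if t["est_seconds"] <= expected_wait_seconds]
    return max(fitting, key=lambda t: t["est_seconds"]) if fitting else None
```

On a slow connection the extension could serve a richer task; on a fast one, nothing at all, so the interruption never outlasts the load.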

Emotive voting
How passive can a crowdsourcing contribution be? Many sites implement low-effort ways to respond to the quality of online content, such as a star rating, a ‘like’, or a thumbs up. Our next prototype takes this form of quality judgment one step further: to no-effort feedback.

Using a camera and facial recognition, we observe a user’s face as they browse funny images.

Images being voted on with smiles and frowns
The emotive voting interface ‘likes’ an image if you smile while the image is on the screen, and ‘dislikes’ if you frown.

There are social and technical challenges to a system that uses facial recognition as an input. Some people do not express amusement outwardly, and privacy concerns would likely deter users.
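The voting rule itself is trivial once an expression detector is in place. A minimal sketch of the mapping, assuming some upstream facial-recognition component emits labels like "smile" and "frown" (the labels and function names here are illustrative, not the prototype's API):

```python
def vote_from_expression(expression):
    """Map a detected facial expression to a vote; anything else abstains."""
    return {"smile": "like", "frown": "dislike"}.get(expression)

def tally(expressions):
    """Aggregate a stream of per-frame expression labels into vote counts."""
    votes = [vote_from_expression(e) for e in expressions]
    return {"like": votes.count("like"), "dislike": votes.count("dislike")}
```

The hard parts live upstream of this logic: detecting expressions reliably, and deciding how many smiling frames count as a genuine reaction rather than noise.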

Secret-agent feedback
Perhaps our oddest prototype lets a user complete low-effort tasks coded into other actions.

Our system listens to the affirmative grunts that a person gives when they are listening to somebody – or pretending to. Users are shown A-versus-B tasks, where an “uh-huh” selects one option and a “yeah” selects the other.

AwesomeR Interface
The awesomeR meme interface lets a user choose the better meme via an affirmative grunt (i.e., “yeah” or “uh-huh”) while they are talking to someone else.

Imagine Bob on the phone, listening patiently to a customer service rep while also completing tasks. The idea is silly, but the method of spoken input quickly becomes natural and thoughtless.
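Once speech recognition has transcribed an utterance, the selection step reduces to a lookup. A minimal sketch, assuming a transcriber hands us lowercase-able text (the vocabulary and option labels are illustrative):

```python
# Hypothetical grunt-to-choice mapping for an A-versus-B task.
AFFIRMATIONS = {"uh-huh": "A", "uh huh": "A", "yeah": "B"}

def select_option(transcript):
    """Return 'A' or 'B' for a recognized grunt, or None otherwise."""
    return AFFIRMATIONS.get(transcript.strip().lower())
```

Keeping the vocabulary this small is deliberate: both utterances are socially valid in conversation, so the listener on the other end of the call is none the wiser.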

Binary tweeting

Can a person write with a low-bandwidth input? Our choice-based composer offers users a multiple-choice interface for the next word in their message.

Sentence generation with choice-based typing. The program prompts a user to choose one of two words that are likely to come after the previous words, allowing them to generate a whole sentence by low-effort interaction.

By plugging into Twitter for its corpus, our prototype constructs phrases that are realistically colloquial and current. There are endless sentiments that can be expressed on Twitter, but much of what we actually say, about one-fifth, is nearly identical to past messages.
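The core of such a composer is a next-word model over a corpus. A minimal sketch using bigram counts, with a toy corpus standing in for the Twitter data the prototype draws on (the function names are illustrative):

```python
from collections import Counter, defaultdict

def build_bigrams(corpus):
    """Count which words follow each word across a corpus of sentences."""
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for a, b in zip(words, words[1:]):
            follows[a][b] += 1
    return follows

def next_word_choices(follows, previous, k=2):
    """Offer the k most likely next words after `previous`."""
    return [word for word, _ in follows[previous.lower()].most_common(k)]
```

With k=2 the interface is literally binary: each tap of one of two buttons extends the sentence by a word, so a whole tweet can be composed without typing.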

As we continue to pursue low-effort crowdsourcing, we are thinking about how experiments such as those outlined here can be used to capture productivity in fleeting moments. Let us know your ideas in the comments.

Find the binary tweeter online, and find our other prototypes on GitHub.

Jeff Bigham, Carnegie Mellon University, USA
Kotaro Hara, University of Maryland, College Park, USA
Peter Organisciak, University of Illinois, Urbana-Champaign, USA
Rajan Vaish, University of California, Santa Cruz, USA
Haoqi Zhang, Northwestern University, USA

About the author

Peter Organisciak

4th year PhD student in information science at the University of Illinois, Urbana-Champaign.



  • One of the potential cases where really low-effort tasks might be interesting, which was brought up by Jeff at CrowdCamp, was getting responses from simple everyday physical actions, such as opening a door. Turning the doorknob left for No, and right for Yes (or some similar mapping).

    Looking at the work you’ve done here, it looks like merging physical actions with voice input could result in “computational environments” where people complete microtasks as they go about their daily routines. Maybe in the environment-embedded case, these tasks even help control the environment itself (using questions such as “Is it too hot in here?” with a yes/no answer given by a doorknob use).

    Are there other useful problems that can be solved by creating these types of environments, besides classic data set tasks such as the ones you explore here?

  • Thanks for bringing that up, Walter. The door idea is exciting (and potentially terrifying).

    You’re right about the fact that this approach to crowdsourcing would open up new possibilities around a user’s environmental context. There are situations, like a bus stop or elevator, where people pull out their phone seemingly to just stare at it… to give them something to do for a minuscule moment. I imagine that more passive devices like Google Glass and smart watches will increase this type of activity.

    We still need to think about those forms of environmental task cases, but the real-world crowdsourcing example that comes to mind is GasBuddy. Users contribute information about gas prices around them, mainly because it is trivial to do so, and the system rewards them both indirectly (by supporting a service that gives them value) and directly (with authority points and raffles).