Integrated crowdsourcing helps volunteer-based communities get work done

by Pao Siangliulue, Joel Chan, Steven P. Dow, and Krzysztof Z. Gajos

We are working on crowdsourcing techniques to support volunteer communities (rather than to get work done with paid workers). In these communities, it can be infeasible or undesirable to recruit external paid workers. For example, nonprofits may lack the funds to pay external workers, and in other communities, such as crowdsourced innovation platforms, issues of confidentiality or intellectual property rights may make it difficult to involve outsiders. Meanwhile, the volunteers themselves often have levels of motivation and expertise that are valuable for crowdsourcing tasks. In these scenarios, it may be preferable to leverage the volunteers themselves, rather than external workers.

A key challenge in leveraging the volunteers themselves is that any reasonably complex activity (like collaborative ideation) includes exciting tasks as well as work that is equally important but less interesting. In a paper to be presented at UIST’16, we demonstrated an integrated crowdsourcing approach that seamlessly integrates a potentially tedious secondary task (e.g., analyzing semantic relationships among ideas) with a more intrinsically motivating primary task (e.g., idea generation). When the secondary task was seamlessly integrated with the primary one, our participants did as good a job on it as crowds hired for money. They also reported the same levels of enjoyment as when working only on the fun task of idea generation.

Our IdeaHound system embodies the integrated crowdsourcing approach in support of online collaborative ideation communities. This is how it works: the main element of the IdeaHound interface (below) is a large whiteboard. Participants spatially arrange their own ideas and the ideas of others on it, because such arrangement of inspirational material is helpful to them in their own ideation process.

The main interface of the IdeaHound system.

Their spatial arrangements also serve as an input to a machine learning algorithm (t-SNE) to construct a model of semantic relationships among ideas:

The computational approach behind IdeaHound
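
As a minimal sketch of how such spatial arrangements might be pooled into pairwise distances, consider the following illustration. The data structures and normalization here are our own assumptions for exposition, not the actual IdeaHound implementation: each participant's whiteboard is represented as a dict mapping idea IDs to (x, y) positions, and normalized pairwise distances are averaged across boards into one consensus distance matrix.

```python
import numpy as np

def consensus_distances(layouts, n_ideas):
    """Pool spatial arrangements from many whiteboards into one
    idea-to-idea distance matrix. Each layout maps idea_id -> (x, y)
    for the ideas placed on that participant's board."""
    dist_sum = np.zeros((n_ideas, n_ideas))
    counts = np.zeros((n_ideas, n_ideas))
    for positions in layouts:
        ids = list(positions)
        coords = np.array([positions[i] for i in ids], dtype=float)
        # Normalize by the board's spread so boards drawn at
        # different scales contribute comparably.
        scale = float(coords.std()) or 1.0
        for a in range(len(ids)):
            for b in range(a + 1, len(ids)):
                d = np.linalg.norm(coords[a] - coords[b]) / scale
                i, j = ids[a], ids[b]
                dist_sum[i, j] += d
                dist_sum[j, i] += d
                counts[i, j] += 1
                counts[j, i] += 1
    # Pairs never placed on the same board fall back to the mean
    # observed distance.
    mean_d = dist_sum[counts > 0].mean() if counts.any() else 1.0
    out = np.where(counts > 0, dist_sum / np.maximum(counts, 1), mean_d)
    np.fill_diagonal(out, 0.0)
    return out
```

A t-SNE implementation (e.g., scikit-learn's `TSNE` with `metric="precomputed"`) could then embed such a consensus matrix to produce the semantic model.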

In several talks last fall, we referred to this approach as “organic” crowdsourcing, but the term proved confusing and contentious.

Several past projects (e.g., Crowdy and the User-Powered ASL Dictionary) embedded work tasks into casual learning activities. Our work shows how to generalize the concept to a domain where integration is more difficult.

You can learn more by attending Pao’s presentation at UIST next week in Tokyo or you can read the paper:

Pao Siangliulue, Joel Chan, Steven P. Dow, and Krzysztof Z. Gajos. IdeaHound: improving large-scale collaborative ideation with crowd-powered real-time semantic modeling. In Proceedings of UIST ’16, 2016.

Crowdfunding: A New Way to Involve the Community in Entrepreneurship

Consider the last thing you bought on Amazon. Do you remember the company that made the product? Did you speak with the designer? In our CSCW 2014 paper, Understanding the Role of Community in Crowdfunding, we present the first qualitative study of how crowdfunding provides a new way for entrepreneurs to involve the public in their design process.

An example crowdfunding project page.

We interviewed 47 crowdfunding entrepreneurs using Kickstarter, Indiegogo, and Rockethub to understand:

  • What is the work of crowdfunding?
  • What role does community play in crowdfunding work?
  • What current technologies support crowdfunding work, and how can they be improved?

Scholars studying entrepreneurship find that less than 30% of traditional entrepreneurs maintain direct or indirect ties with investors or customers. This stands in contrast to crowdfunding entrepreneurs who report maintaining regular and direct contact with their financial supporters during and after their campaign. This includes responding to questions, seeking feedback on prototypes, and posting weekly progress updates.

For example, one book designer described performing live video updates with his supporters on how he did page layout. Another product designer making a lightweight snowshoe had his supporters vote on what color to make the shoe straps.

Overall, we identified five types of crowdfunding work and the role of community in each:

Perhaps the most exciting type of crowdfunding work is reciprocating resources, where experienced crowdfunders not only donate funds to other projects, but also give advice to novices. For instance, one crowdfunding entrepreneur who ran two successful campaigns created his own Pinterest board (see example below) where he posts tips and tricks on how to run a campaign. Another successful crowdfunder said he receives weekly emails from people asking for feedback on their project pages.

An example Pinterest board of crowdfunding tips.

Although many tools for online collaboration and feedback exist, such as Amazon Mechanical Turk and oDesk, few crowdfunders use them or even know they exist. This suggests design opportunities for more crowdfunder-friendly support tools. We are currently designing tools to help crowdfunders seek feedback online from crowd workers and better understand and leverage their social networks for publicity.

For more information on the role of community in crowdfunding, you can download our full paper here.

Julie Hui, Northwestern University
Michael Greenberg, Northwestern University
Elizabeth Gerber, Northwestern University


Voyant: Generating Structured Feedback on Visual Designs Using a Crowd of Non-Experts

Crowdsourcing offers an emerging opportunity for users to receive rapid feedback on their designs. A critical challenge for generating feedback via crowdsourcing is to identify what type of feedback is desirable to the user, yet can be generated by non-experts. We created Voyant, a system that leverages a non-expert crowd to generate perception-oriented feedback from a selected audience as part of the design workflow.

The system generates five types of feedback: (i) Elements are the individual elements that can be seen in a design. (ii) First Notice refers to the visual order in which elements are first noticed in the design. (iii) Impressions are the perceptions formed in one’s mind upon first viewing the design. (iv) Goals refer to how well the design is perceived to meet its communicative goals. (v) Guidelines refer to how well the design is perceived to meet known guidelines in the domain.

Voyant decomposes feedback generation into a description phase and an interpretation phase, inspired by how critique is taught in design education. In each phase, the tasks focus a worker’s attention on specific aspects of a design rather than soliciting holistic evaluations, which improves the quality of the resulting feedback. The system submits these tasks to an online labor market (Amazon Mechanical Turk). Each type of feedback typically takes a few hours and a few US dollars to generate.
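
As a rough sketch of this decomposition, the two phases can be expanded into individual micro-tasks before posting them to a labor market. The prompt wording, task counts, and field names below are made up for illustration; they are not Voyant's actual templates.

```python
# Hypothetical micro-task templates, one per feedback type.
DESCRIPTION_TASKS = {
    "elements": "List one element you can see in this design.",
    "first_notice": "Which element of the design did you notice first?",
}
INTERPRETATION_TASKS = {
    "impressions": "What impression does this design give you? One word.",
    "goals": "How well does the design communicate: {goal}? (1-7)",
    "guidelines": "Does the design follow this guideline: {guideline}? (1-7)",
}

def build_tasks(image_url, goal, guideline, workers_per_task=10):
    """Expand the two critique phases into individual crowd
    assignments (e.g., for posting as HITs)."""
    tasks = []
    for phase, templates in (("description", DESCRIPTION_TASKS),
                             ("interpretation", INTERPRETATION_TASKS)):
        for kind, prompt in templates.items():
            tasks.append({
                "phase": phase,
                "kind": kind,
                "image": image_url,
                "prompt": prompt.format(goal=goal, guideline=guideline),
                "assignments": workers_per_task,
            })
    return tasks
```

Keeping each task narrow (one element, one impression) is what lets non-experts contribute reliably; the system, not the worker, assembles the pieces into structured feedback.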

Our evaluation shows that users were able to leverage the feedback generated by Voyant to develop insight and discover previously unknown problems with their designs. For example, consider the Impressions feedback Voyant generated on one user’s poster (see the video above): the user intended the poster to be perceived as Shakespeare, but was surprised to learn of an unintended interpretation (note “dog” in the word cloud).

To use Voyant, the user imports a design image and configures the crowd demographics. Once generated, the feedback can be utilized to help iterate toward an effective solution.

Try it: http://www.crowdfeedback.me


For more, see our full paper, Voyant: Generating Structured Feedback on Visual Designs Using a Crowd of Non-Experts.
Anbang Xu, University of Illinois at Urbana-Champaign
Shih-Wen Huang, University of Illinois at Urbana-Champaign
Brian P. Bailey, University of Illinois at Urbana-Champaign

Predicting How Online Creative Collaborations Form and Succeed

What leads to a successful creative collaboration? Be it music, movies, or multimedia… collaborative online communities are springing up around all sorts of shared artistic interests. Even more exciting, these communities offer new opportunities to study creativity and collaboration through their social dynamics.

A screenshot of the FAWM.ORG songwriting community website in 2013, listing more than 800 collaborations; more than 8% of all compositions are collaborations.

We examine collaboration in February Album Writing Month (FAWM), an online music community. The annual goal for each member is to write 14 new songs in the 28 days of February. In previous work we found that FAWM newcomers who collab in their first year are more likely to (1) write more songs, (2) reach the 14-song goal, (3) give feedback to others, and (4) donate money to support the site. Given the individual and group benefits, we sought to better understand what factors lead to successful collabs.

By combining traditional member surveys with a computational analysis over four years of archival log data, we were able to extend several existing social theories about collaboration in nuanced and interesting ways. A few of our main findings are:

  1. Collabs form out of shared interests but different backgrounds. Theory predicts that people work with others who share their interests. But we found that, for example, a heavy-metal songwriter is less likely to collab with another metalhead than, say, a jazz pianist (who enjoys head-banging on occasion).
  2. Collabs are associated with small status differences. Existing theory also predicts that people tend to work with others of the same social status. In our study, members teamed up with folks of slightly different status more often than those of identical status. (There are several explanations, ranging from newcomer socialization to hero-worship.)
  3. A balanced effort is most enjoyable for both participants. The “social loafing” literature suggests that people are disappointed by collabs when their partner is a slacker. However, we found that the slackers themselves were disappointed, too.

To top it all off, our novel path-based regression model is significantly better than standard techniques at predicting which new collabs will form (see the graphs below). This has exciting implications for recommender systems and other socio-technical tools to help improve collaboration in online creative communities.

ROC and precision-recall curves comparing our path-based regression model to standard link prediction methods. Our model provides both accurate predictions and insights into social theory.
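
As a hedged, toy illustration of path-based link prediction (not the model from the paper, which fits a regression over richer path features), one can score candidate pairs by weighted counts of the paths connecting them, and compare scorers with an AUC computed directly from ranks:

```python
import numpy as np

def path_score(A, beta=0.1, max_len=3):
    """Katz-style score: weighted count of paths of length 1..max_len
    between every pair of nodes, computed from powers of the
    adjacency matrix A. A simple stand-in for path-based features."""
    S = np.zeros_like(A, dtype=float)
    P = np.eye(len(A))
    for k in range(1, max_len + 1):
        P = P @ A                 # P now counts walks of length k
        S += (beta ** k) * P      # shorter paths weigh more
    return S

def auc(scores, labels):
    """Probability that a randomly chosen positive pair outranks a
    randomly chosen negative pair (ties count half)."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    gt = (pos[:, None] > neg[None, :]).mean()
    eq = (pos[:, None] == neg[None, :]).mean()
    return gt + 0.5 * eq
```

In a real evaluation, `labels` would mark which candidate pairs actually collaborated in a held-out year, and the ROC/PR curves compare such scorers across all thresholds.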

For more, please see our full paper Let’s Get Together: The Formation and Success of Online Creative Collaborations.

Burr Settles, CMU Machine Learning Department (now with Duolingo)
Steven Dow, CMU Human Computer Interaction Institute

CrowdCamp Report: DesignOracle: Exploring a Creative Design Space with a Crowd of Non-Experts

For an unconstrained design problem like writing a fiction story or designing a poster, can a crowd of non-experts be shepherded to generate a diverse collection of high-quality ideas? Can the crowd also generate the actual stories or posters? With the goal of building crowd-powered creativity support tools, a group of us (Paul André, Robin N. Brewer, Krzysztof Gajos, Yotam Gingold, Kurt Luther, Pao Siangliulue, and Kathleen Tuite) set out to answer these questions. We based our approach on a technique described by Keith Johnstone in his book Impro: Improvisation and the Theatre for drawing creativity out of people who believe themselves to be uncreative. An “uncreative” person is told that a story has already been prepared and that he or she merely has to guess it via yes-or-no questions. Unbeknownst to the guesser, there is no story; guesses are met with essentially random yes or no responses. For example:

  1. Is it about CSCW? Yes
  2. Is it about CrowdCamp? No
  3. Is it about a bitter rivalry? Yes

As questioning proceeds, a consistent story is revealed, entirely due to the guesser generating and then externalizing an internally consistent mental model of a story (or poster, etc.) that justifies the given answers.
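
A minimal sketch of such an oracle follows. This is our own illustration of the mechanic, not the actual task interface shown to workers: each yes-or-no question is answered at random, but answers are remembered so that repeated questions stay consistent as the guesser's story takes shape.

```python
import random

class StoryOracle:
    """A 'story' that exists only in the guesser's head: every
    yes/no question is answered randomly, but answers are cached so
    the emerging story stays consistent if a question is re-asked."""

    def __init__(self, seed=None):
        self.rng = random.Random(seed)
        self.answers = {}  # normalized question -> "Yes" / "No"

    def ask(self, question):
        q = question.strip().lower()
        if q not in self.answers:
            self.answers[q] = self.rng.choice(["Yes", "No"])
        return self.answers[q]
```

The cached transcript (`answers`) is also what makes the branching experiments below possible: any prefix of it can be handed to a new guesser.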

To evaluate the potential of this “StoryOracle” approach, we ran a series of experiments on Amazon Mechanical Turk:

  1. We extracted dozens of surprising and creative stories and poster designs using the technique.
  2. We explored the design space in a directed manner by generating variations on well-liked stories. Every question and yes-or-no answer provides a possible branch point. To branch, we selected an important “pivot” question and showed a set of new participants all questions and answers up to the pivot, followed by the pivot question with either the same or the opposite answer, with instructions to continue guessing the story. (See the example figure of a story and its branches.)
  3. We converted question-based stories into “normal” stories. For example:

    Jack is a scientist who has dedicated his life to discovering how to generate energy using fusion power because he believes it would benefit humanity. Eventually he discovers how to do it and tells his wife, Jane, the good news. Jane is not aware that this is a secret and talks about it with Carl (with whom she was in love before meeting Jack). Carl tries to steal the secret from Jack, but Jane fights with him and manages to stop him.
    However, Jack discovers that Jane told Carl about the secret, and they have a fight because of it. Eventually Carl, who is angry with both of them, decides to kill Jack and Jane. He manages to kill the couple, but when he is about to steal Jack’s secret, the fusion power discovery is released to the public through the internet.
    Thus, the whole world is able to produce energy using Jack’s discovery, and eventually his dream of providing a better quality of life to everyone comes true.

  4. We evaluated stories’ quality.
  5. We devised domain-specific prompts for questioners, such as the setting, theme, and characters in a story.
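
The branching mechanic in step 2 can be sketched as a small helper (hypothetical names; a sketch of the procedure, not our actual experiment code): given a question/answer transcript and a pivot index, it produces the two prompts shown to new guessers, one keeping the pivot answer and one flipping it.

```python
def branch_prompts(transcript, pivot):
    """transcript: list of (question, answer) pairs in the order
    asked. Returns two new prompts that end at the pivot question,
    one with the original answer and one with the opposite answer,
    so new guessers continue the story from that branch point."""
    prefix = list(transcript[:pivot])
    question, answer = transcript[pivot]
    flipped = "No" if answer == "Yes" else "Yes"
    return (prefix + [(question, answer)],
            prefix + [(question, flipped)])
```

Because the two branches share the same prefix, differences between the resulting stories can be attributed to the single flipped answer.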

Taken together, in the two days of CrowdCamp we managed to build the foundation for a crowd-powered tool that explores a creative design space, in an undirected or directed manner, and generates a variety of high-quality artifacts.

Paul André, Carnegie Mellon University
Robin N. Brewer, University of Maryland, Baltimore County
Krzysztof Gajos, Harvard University
Yotam Gingold, George Mason University
Kurt Luther, Carnegie Mellon University
Pao Siangliulue, Harvard University
Kathleen Tuite, University of Washington