Crowdfunding: A New Way to Involve the Community in Entrepreneurship

Consider the last thing you bought on Amazon. Do you remember the company that made the product? Did you speak with the designer? In our CSCW 2014 paper, Understanding the Role of Community in Crowdfunding, we present the first qualitative study of how crowdfunding provides a new way for entrepreneurs to involve the public in their design process.

An example crowdfunding project page.

We interviewed 47 crowdfunding entrepreneurs using Kickstarter, Indiegogo, and Rockethub to understand:

  • What is the work of crowdfunding?
  • What role does community play in crowdfunding work?
  • What current technologies support crowdfunding work, and how can they be improved?

Scholars studying entrepreneurship find that fewer than 30% of traditional entrepreneurs maintain direct or indirect ties with investors or customers. By contrast, the crowdfunding entrepreneurs we interviewed reported maintaining regular, direct contact with their financial supporters during and after their campaigns, including responding to questions, seeking feedback on prototypes, and posting weekly progress updates.

For example, one book designer described giving live video updates to his supporters on how he laid out pages. Another product designer, who was making a lightweight snowshoe, had his supporters vote on what color to make the shoe straps.

Overall, we identified five types of crowdfunding work and the role of community in each.

Perhaps the most exciting type of crowdfunding work is reciprocating resources, in which experienced crowdfunders not only donate funds to other projects but also give advice to novices. For instance, a crowdfunding entrepreneur who ran two successful campaigns created his own Pinterest board (see example below) where he posts tips and tricks on how to run a campaign. Another successful crowdfunder said he receives weekly emails from people asking for feedback on their project pages.

A crowdfunding entrepreneur's Pinterest board of campaign tips and tricks.

While many tools for online collaboration and feedback exist, such as Amazon Mechanical Turk and oDesk, few crowdfunders use them or even know they exist. This suggests design opportunities to create more crowdfunder-friendly support tools. We are currently designing tools to help crowdfunders seek feedback online from crowd workers and better understand and leverage their social networks for publicity.

For more information on the role of community in crowdfunding, you can download our full paper here.

Julie Hui, Northwestern University
Michael Greenberg, Northwestern University
Elizabeth Gerber, Northwestern University


Voyant: Generating Structured Feedback on Visual Designs Using a Crowd of Non-Experts

Crowdsourcing offers an emerging opportunity for users to receive rapid feedback on their designs. A critical challenge for generating feedback via crowdsourcing is to identify what type of feedback is desirable to the user, yet can be generated by non-experts. We created Voyant, a system that leverages a non-expert crowd to generate perception-oriented feedback from a selected audience as part of the design workflow.

The system generates five types of feedback: (i) Elements are the individual elements that can be seen in a design. (ii) First Notice refers to the visual order in which elements are first noticed in the design. (iii) Impressions are the perceptions formed in one’s mind upon first viewing the design. (iv) Goals refer to how well the design is perceived to meet its communicative goals. (v) Guidelines refer to how well the design is perceived to meet known guidelines in the domain.

Voyant decomposes feedback generation into a description phase and an interpretation phase, inspired by how critique is taught in design education. In each phase, the tasks focus a worker's attention on specific aspects of the design rather than soliciting holistic evaluations, which improves the quality of the results. The system submits these tasks to an online labor market (Amazon Mechanical Turk). Each type of feedback typically takes a few hours to generate and costs a few US dollars.
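To make the two-phase structure concrete, here is a minimal sketch (ours, not the authors' implementation) of how Impressions feedback could be decomposed into description and interpretation microtasks. The helper post_microtask is hypothetical, standing in for whatever call submits a HIT to Mechanical Turk, and the prompts and worker counts are illustrative.

```python
from collections import Counter

# Hypothetical sketch of Voyant-style two-phase feedback generation.
# post_microtask() is assumed to post a prompt to Mechanical Turk and
# return one free-text answer per worker.

def generate_impressions_feedback(design_image_url, post_microtask):
    # Description phase: each worker lists the individual elements they see,
    # instead of judging the design as a whole.
    element_lists = post_microtask(
        prompt="List the individual elements you can see in this image.",
        image_url=design_image_url,
        num_workers=10,
    )

    # Interpretation phase: a fresh set of workers records the impressions
    # the design forms at first viewing, again a narrow, focused question.
    impressions = post_microtask(
        prompt="In a few words, what impression does this image give you at first glance?",
        image_url=design_image_url,
        num_workers=10,
    )

    # Aggregate free-text impressions into word counts, e.g. for a word cloud.
    word_counts = Counter(
        word.lower() for answer in impressions for word in answer.split()
    )
    return element_lists, word_counts
```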

Our evaluation shows that users were able to leverage the feedback generated by Voyant to develop insight and discover previously unknown problems with their designs. For example, consider the Impressions feedback Voyant generated on one user's poster: the user intended the poster to be perceived as Shakespeare, but was surprised to learn of an unintended interpretation (see "dog" in the word cloud).

To use Voyant, the user imports a design image and configures the crowd demographics. Once generated, the feedback can be utilized to help iterate toward an effective solution.

Try it: http://www.crowdfeedback.me


For more, see our full paper, Voyant: Generating Structured Feedback on Visual Designs Using a Crowd of Non-Experts.
Anbang Xu, University of Illinois at Urbana-Champaign
Shih-Wen Huang, University of Illinois at Urbana-Champaign
Brian P. Bailey, University of Illinois at Urbana-Champaign

CrowdCamp Report: HelloCrowd, The “Hello World!” of human computation

The first program a new computer programmer writes in any new programming language is the “Hello world!” program – a single line of code that prints “Hello world!” to the screen.

By analogy, we ask: what should be the first “program” that a new user of crowdsourcing or human computation writes? “HelloCrowd!” is our answer.

The simplest possible “human computation program”

Crowdsourcing and human computation are becoming ever more popular tools for answering questions, collecting data, and providing human judgment.  At the same time, there is a disconnect between interest and ability, where potential new users of these powerful tools don’t know how to get started.  Not everyone wants to take a graduate course in crowdsourcing just to get their first results. To fix this, we set out to build an interactive tutorial that could teach the fundamentals of crowdsourcing.

After creating an account, HelloCrowd tutorial users get their feet wet by posting three simple tasks to the crowd platform of their choice. In addition to the “Hello world!” task above, we chose two common crowdsourcing tasks: image labeling and information retrieval from the web. In the first, workers provide a label for an image of a fruit; in the second, workers must find the phone number of a restaurant. These tasks can be reused and posted to any crowd platform you like; we provide simple instructions for some common platforms. The interactive tutorial auto-generates the task URLs for each tutorial user and each platform.

Mmm, crowdsourcing is delicious
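HelloCrowd generates these task URLs for you, but to give a feel for what “posting a task” involves on one platform, here is a rough, illustrative sketch of posting the “Hello world!” task to the Amazon Mechanical Turk sandbox with the boto3 client. The title, reward, and form fields are our own placeholder values, not the tutorial's actual configuration, and none of this is needed to follow the tutorial.

```python
import boto3

# Illustrative only: a bare-bones "Hello world!" HIT posted to the MTurk sandbox.
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

question_xml = """
<HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
  <HTMLContent><![CDATA[
    <!DOCTYPE html>
    <html><body>
      <script src="https://assets.crowd.aws/crowd-html-elements.js"></script>
      <crowd-form>
        <p>Please type: Hello world!</p>
        <crowd-input name="greeting" placeholder="Hello world!" required></crowd-input>
      </crowd-form>
    </body></html>
  ]]></HTMLContent>
  <FrameHeight>300</FrameHeight>
</HTMLQuestion>
"""

hit = mturk.create_hit(
    Title="Hello world!",                      # placeholder values
    Description="Type a short greeting.",
    Reward="0.05",
    MaxAssignments=3,
    LifetimeInSeconds=3600,
    AssignmentDurationInSeconds=300,
    Question=question_xml,
)
print("HIT created:", hit["HIT"]["HITId"])
```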

More than just another tutorial on “how to post tasks to MTurk”, our goal with HelloCrowd is to teach fundamental concepts. After posting tasks, new crowdsourcers learn how to interpret their results (and how to get even better results next time). For example: what concepts might a new crowdsourcer learn from the results of the “Hello world!” task or the business phone number task? Phone numbers are simple, right? What about “867-5309” vs. “555.867.5309” vs. “+1 (555) 867 5309”? Our goal is to get new users of these tools up to speed on how to get good results: form validation (or not), redundancy, task instructions, and so on.
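As a small illustration of those concepts (our sketch, not part of the HelloCrowd tutorial), here is what the redundancy-plus-normalization step might look like for the phone number task: strip the formatting so differently formatted answers can agree, then take a majority vote across workers. Dropping a leading US country code is an assumption made for this sketch.

```python
import re
from collections import Counter

def normalize_phone(answer: str) -> str:
    """Keep digits only, so "555.867.5309" and "(555) 867-5309" collapse
    to the same value; drop a leading US country code (an assumption)."""
    digits = re.sub(r"\D", "", answer)
    if len(digits) == 11 and digits.startswith("1"):
        digits = digits[1:]
    return digits

def majority_answer(raw_answers):
    """Redundancy in action: several workers answer, and the most common
    normalized value wins."""
    counts = Counter(normalize_phone(a) for a in raw_answers)
    return counts.most_common(1)[0]  # (value, votes)

# Three workers, three formats: two agree once formatting is stripped.
print(majority_answer(["867-5309", "555.867.5309", "+1 (555) 867 5309"]))
# -> ('5558675309', 2)
```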

In addition to teaching new crowdsourcers how to crowdsource, our tutorial system will be collecting a longitudinal, cross-platform dataset of crowd responses.  Each person who completes the tutorial will have “their” set of worker responses to the standard tasks, and these are all added together into a public dataset that will be available for future research on timing, speed, accuracy and cost.

We’re very proud of HelloCrowd, and hope you’ll consider giving our tutorial a try.

Christian M. Adriano, Donald Bren School, University of California, Irvine
Juho Kim, MIT CSAIL
Anand Kulkarni, MobileWorks
Andy Schriner, University of Cincinnati
Paul Zachary, Department of Political Science, University of California, San Diego

Reactive and Multi-platform Crowdsourcing

An essential aspect of building effective crowdsourcing computations is the ability to control the crowd. This means dynamically adapting the behaviour of the crowdsourcing system in response to:

  • the quantity and timing of completed tasks
  • the quality of responses and task results
  • the profile, availability and reliability of performers.

To control the crowd, we bring together several ingredients:

  • crowdsourcing: we define an abstract model of crowdsourcing activities in terms of elementary task types (such as labelling, liking, commenting, sorting, grouping) performed on a data set, and we define a crowdsourcing task as an arbitrary composition of these task types, according to the model below.
Model of a multi-platform crowdsourcing campaign.
  • social networking: we show how social platforms, such as Facebook or Twitter, can be used for crowdsourcing search-related tasks, side by side with traditional crowdsourcing platforms.
  • multi-platform integration: we allow abstract crowdsourcing tasks to be deployed to several invitation and execution platforms, including Twitter, Facebook, and Amazon Mechanical Turk, and we collect and aggregate results from every platform. Crowd performance depends on the execution platform (e.g., Facebook and Twitter collect many responses immediately, but more professional platforms like Doodle or LinkedIn outperform them over time), on the task type (simpler tasks are answered more often), and on the posting time, topic, and language of the tasks.
The selection of the task execution platform might influence the time required to get answers from the crowd (Facebook features less latency, but Doodle brings in more answers in the long term).
  • expertise finding: we analyze how performer profiles can be enriched with the social activity of the performers themselves and of their friends or social connections. Experiments show how different profiling options affect the quality and efficiency of crowdsourcing campaigns.
  • reactive rules: reactive control is obtained through rules that are formally defined and whose properties (e.g., termination) can be proved. Rules are defined on top of data structures derived from the application model. They are written in reactive style, according to the ECA (Event-Condition-Action) paradigm, and support decisions about the production of results, the classification of performers (e.g., identification of spammers), the early termination and re-planning of tasks based on performance measures, the dynamic definition of micro-tasks, and so on (a minimal sketch of such a rule appears below).
Simple changes in the declarative reactive rules can significantly affect task quality and cost. Each curve represents a rule set for controlling majority agreement: simple majority after 7 evaluations (black line); strong majority after 3 evaluations or simple majority after 7 evaluations (red line); and an additional simple-majority evaluation step after 5 evaluations (blue line).

Reactive rules allow significant savings in terms of execution time and number of executions, as well as improvements in precision of results. 
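As a rough illustration (ours, not CrowdSearcher's actual rule language), the black-line rule from the figure above (close an object with a simple majority once 7 evaluations have arrived) could be written in event-condition-action style like this:

```python
from collections import Counter

# Event: a new evaluation for an object arrives.
# Condition: the object has accumulated 7 evaluations.
# Action: close the object with the simple-majority label.

evaluations = {}  # object_id -> list of labels received so far

def on_new_evaluation(object_id, label):            # event
    evaluations.setdefault(object_id, []).append(label)
    labels = evaluations[object_id]
    if len(labels) == 7:                            # condition
        winner, votes = Counter(labels).most_common(1)[0]
        close_object(object_id, winner)             # action

def close_object(object_id, label):
    print(f"object {object_id} closed with majority label {label!r}")

# Seven evaluations arrive; the rule fires on the seventh and closes the object.
for vote in ["cat", "cat", "dog", "cat", "cat", "dog", "cat"]:
    on_new_evaluation("img-42", vote)
```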

Our system is implemented as a cloud service, where crowdsourcing campaigns are configured through a Web user interface (see the image below) or through an API.

Configuration user interface of our multi-platform and reactive crowdsourcing platform, CrowdSearcher (in this step the developer adds task type, title and textual description).

A short demo video of the system is available on YouTube.

For more, see our full paper, Reactive Crowdsourcing (WWW 2013). [Slides]
Alessandro Bozzon, TU Delft
Marco Brambilla, Politecnico di Milano
Stefano Ceri, Politecnico di Milano
Andrea Mauri, Politecnico di Milano

Other relevant papers:
Choosing the right crowd: expert finding in social networks (EDBT 2013).
Answering search queries with CrowdSearcher (WWW 2012).

CrowdCamp Report: DesignOracle: Exploring a Creative Design Space with a Crowd of Non-Experts

For an unconstrained design problem like writing a fiction story or designing a poster, can a crowd of non-experts be shepherded to generate a diverse collection of high quality ideas? Can the crowd also generate the actual stories or posters? With the goal of building crowd-powered creativity support tools, a group of us, Paul André, Robin N. Brewer, Krzysztof Gajos, Yotam Gingold, Kurt Luther, Pao Siangliulue, and Kathleen Tuite, set out to answer these questions. We based our approach on a technique described by Keith Johnstone in his book Impro: Improvisation and the Theatre for extracting creativity from people who believe themselves to be uncreative. An uncreative person is told that a story is already prepared and he or she has merely to guess it via yes or no questions. Unbeknown to the guesser, there is no story; guesses are met with a yes or no response, essentially randomly. For example:

  1. Is it about CSCW? Yes
  2. Is it about CrowdCamp? No
  3. Is it about a bitter rivalry? Yes

As questioning proceeds, a consistent story is revealed, entirely due to the guesser generating and then externalizing an internally consistent mental model of a story (or poster, etc.) that justifies the given answers.
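A minimal sketch of the oracle itself (our illustration, not the exact task interface we deployed): there is no story behind the answers, and each guess simply receives a yes-or-no response chosen at random.

```python
import random

def oracle_answer(question: str) -> str:
    """There is no prepared story: each yes/no guess gets an essentially
    random answer, and the guesser's mental model supplies the consistency."""
    return random.choice(["Yes", "No"])

transcript = []
for question in ["Is it about CSCW?",
                 "Is it about CrowdCamp?",
                 "Is it about a bitter rivalry?"]:
    answer = oracle_answer(question)
    transcript.append((question, answer))
    print(question, answer)
```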

To evaluate the potential of this “StoryOracle” approach, we ran a series of experiments on Amazon Mechanical Turk:

  1. We extracted dozens of surprising and creative stories and poster designs using the technique.
  2. We explored the design space in a directed manner by generating variations on well-liked stories. Every question and its yes or no answer provides a possible branch point. To branch, we selected an important “pivot” question and showed all questions and answers up to the pivot, followed by the pivot question with either the same or the opposite answer, to a set of new participants with instructions to continue guessing the story (a minimal sketch of this branching appears after this list).
  3. We converted question-based stories into “normal” stories. For example:

    Jack is a scientist who has dedicated his life to discovering how to generate energy using fusion power because he believes it would benefit humanity. Eventually he discovers how to do it and tells his wife, Jane, the good news. Jane is not aware that this is a secret and talk about it with Carl (to whom she was in love before meeting Jack). Carl tries to steal the secret from Jack but Jane fought with him and is able to impede him.
    However, Jack discovers that Jane told Carl about the secret and they have a fight because of it. Eventually, Carl, who is angry with both of them, decides to kill Jack and Jane. He manages to kill the couple but when he is about to steal Jack’s secret, the power fusion discovery is released to the public through the internet.
    Thus, all the world is able to produce energy using Jack’s discovery and eventually his dream of providing a better quality of life to everyone comes true.

  4. We evaluated stories’ quality.
  5. We devised domain-specific prompts for questioners, such as the setting, theme, and characters in a story.
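Returning to the branching step in item 2, here is a minimal sketch (our own names and toy data, not the tool we built at CrowdCamp) of how a transcript can be branched at a pivot question: keep everything up to the pivot, optionally flip the pivot's answer, and hand the prefix to new participants who continue guessing.

```python
def branch(transcript, pivot_index, flip=False):
    """Return the question/answer prefix up to and including the pivot,
    with the pivot's answer kept as-is or flipped."""
    question, answer = transcript[pivot_index]
    if flip:
        answer = "No" if answer == "Yes" else "Yes"
    # New participants see this prefix and are asked to keep guessing the story.
    return transcript[:pivot_index] + [(question, answer)]

story = [("Is it about CSCW?", "Yes"),
         ("Is it about CrowdCamp?", "No"),
         ("Is it about a bitter rivalry?", "Yes")]

same_branch = branch(story, pivot_index=2, flip=False)     # "Yes" kept
opposite_branch = branch(story, pivot_index=2, flip=True)  # flipped to "No"
```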

Taken together, in the two days of CrowdCamp we managed to build the foundation for a crowd-powered tool that explores a creative design space, in an undirected or directed manner, and generates a variety of high-quality artifacts.

Paul André, Carnegie Mellon University
Robin N. Brewer, University of Maryland, Baltimore County
Krzysztof Gajos, Harvard University
Yotam Gingold, George Mason University
Kurt Luther, Carnegie Mellon University
Pao Siangliulue, Harvard University
Kathleen Tuite, University of Washington

Sketch Minimization using Crowds for Feedback

David Engel, Massachusetts Institute of Technology
Verena A. Kottler, Max Planck Institute for Developmental Biology
Christoph Malisi, Max Planck Institute for Developmental Biology
Marc Roettig, University Tuebingen
Eva-Maria Willing, Max Planck Institute for Plant Breeding Research
Sebastian J. Schultheiss, computomics.com

Design tasks are notoriously difficult, because success is defined by the perception of the target audience, whose feedback is usually not available during design stages. We present a design methodology for creating minimal sketches of objects that uses an iterative optimization scheme and brings the perception of the crowd into the loop.


TwitApp: In-product Micro-Blogging for Design Sharing

Wei Li, Autodesk Research
Tovi Grossman, Autodesk Research
Justin Matejka, Autodesk Research
George Fitzmaurice, Autodesk Research

The emergence of the World Wide Web has made it popular to share design activities and workflows. It is common to see discussion forums with dedicated threads where users share their work, along with step-by-step instructions for how it was done. Many designers also maintain personal blogs, where they post information about their current projects. While technologies such as blogs and discussion forums are effective ways to share designs and software application activities, these sites are typically external to the actual software application. As such, the content must be created manually, which can be laborious. Because of the high authoring cost and low feedback ratio, blog users may not update their blogs as frequently as they otherwise would.