ReTool: Interactive Microtask and Workflow Design through Demonstration

Recently, a growing number of crowdsourcing microtasks have required freeform interactions directly on the content (e.g. drawing bounding boxes over specific objects in an image, or marking specific time points in a video clip). However, existing crowdsourcing platforms, such as Amazon Mechanical Turk (MTurk) and CrowdFlower (CF), do not provide direct support for designing such interactive microtasks. To design them, especially interactive microtasks with workflows, requesters have to resort to programming-based approaches such as TurKit and the MTurk SDKs, and this need for programming skills poses a significant barrier for many requesters.


To lower the barrier of entry for designing and deploying interactive microtasks with workflows, we developed ReTool, a web-based tool that simplifies the process by applying the “Programming by Demonstration” (PbD) concept. In our context, PbD refers to the mechanism by which requesters design interactive microtasks with workflows by demonstrating an example of how the tasks can be completed.

Working with ReTool, a requester can design and publish microtasks in four main steps:

  • Project Creation: The requester creates a project and uploads a piece of sample content to be crowdsourced.
  • Microtask and Workflow Generation: Depending on the type (text or image) of the sample content, a content-specific workspace is generated. The requester then performs a sequence of interactions (e.g. tapping and dragging, clicking) on the content within the workspace. The interactions are recorded and analyzed to generate interactive microtasks with workflows.
  • Previewing Microtask Interface & Workflow: At this step, the requester can preview the generated microtasks and workflows, edit instructions and other properties (e.g. the number of workers), and add verification tasks and advanced workflows (conditional and looping); a hypothetical example of such a generated workflow is sketched after this list.
  • Microtask Publication: The requester uploads all content to be crowdsourced and receives a URL link for accessing the available microtasks. The link can be published to crowdsourcing marketplaces or social network platforms.
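To make the result of the demonstration step concrete, below is a minimal, purely illustrative sketch (in Python) of what a generated microtask workflow could look like. The structure and field names (draw_bounding_box, workers_per_item, loop_until, and so on) are our own assumptions for illustration and do not reflect ReTool's actual internal representation.

```python
# Hypothetical representation of a workflow that a PbD tool like ReTool
# might derive from a recorded demonstration on a sample image.
# All field names and values below are illustrative assumptions.
workflow = {
    "project": "label-street-signs",
    "content_type": "image",
    "steps": [
        {
            # recorded tap-and-drag interactions become a bounding-box microtask
            "microtask": "draw_bounding_box",
            "instruction": "Draw a box around every street sign.",
            "workers_per_item": 3,
        },
        {
            # a verification task added in the preview step, with a looping
            # workflow that repeats the first step until a majority approves
            "microtask": "verify",
            "instruction": "Does the box tightly enclose a street sign?",
            "workers_per_item": 2,
            "loop_until": "majority_yes",
        },
    ],
}
```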

We conducted a user study to find out how potential requesters with varying programming skills use ReTool. We compared ReTool against a lower-bound baseline, the MTurk online design tool. We recruited 14 participants from different university faculties, taught them what crowdsourcing is and how to design microtasks using both tools, and then asked them to complete three design tasks. The results show that ReTool helps not only programmers, but also non-programmers and requesters new to crowdsourcing, to design complex microtasks and workflows in a fairly short time.

For more details, please see our full paper ReTool: Interactive Microtask and Workflow Design through Demonstration published at CHI 2017.

Chen Chen, National University of Singapore

Xiaojun Meng, National University of Singapore

Shengdong Zhao, National University of Singapore

Morten Fjeld, Chalmers University of Technology

Respeak: Voice-based Crowd-powered Speech Transcription System

Recent years have seen the rise of crowdsourcing marketplaces like Amazon Mechanical Turk and CrowdFlower that provide people with additional earning opportunities. However, low-income, low-literate people in resource-constrained settings are often unable to use these platforms because they face a complex array of socioeconomic barriers, literacy constraints, and infrastructural challenges. For example, 97% of households in India do not have access to an Internet-connected computer, 47% of the population does not have access to a bank account, and around 72% are not literate in English.

To provide additional earning opportunities to low-income people with limited language and digital literacy skills, we designed, built, and evaluated Respeak – a voice-based, crowd-powered speech transcription system that combines the benefits of crowdsourcing with automatic speech recognition (ASR) to transcribe audio files in local languages like Hindi and in localized accents of well-represented languages like English. Respeak allows people to use their basic spoken language skills – rather than typing skills – to transcribe audio files. Respeak employs a multi-step approach involving a sequence of segmentation and merging steps:

  • Segmentation: The Respeak engine segments an audio file into utterances that are each three to six seconds long.
  • Distribution to Crowd Workers: Each audio segment is sent to multiple Respeak smartphone application users who listen to the segment and re-speak the same words into the application in a quiet environment.
  • Transcription using ASR: The application uses Google’s built-in Android speech recognition API to generate an instantaneous transcript for the segment, albeit with some errors. The user then submits this transcript to the Respeak engine.
  • First-stage Merging: For each segment, the Respeak engine combines the transcripts submitted by the different users into one best estimation transcript, using multiple string alignment and majority voting (a simplified sketch of this merging step is shown below). If errors are randomly distributed, aligning transcripts generated by multiple people reduces the word error rate (WER). Each submitted transcript earns the user a mobile talktime reward based on its similarity to the best estimation transcript generated by Respeak. Once a user’s cumulative earnings reach 10 INR, a mobile talktime transfer of the same value is sent to them.
  • Second-stage Merging: Finally, the engine concatenates the best estimation transcript for each segment to yield a final transcript.
Respeak System Overview
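As a rough illustration of the first-stage merging and the similarity-based reward, here is a minimal Python sketch written under simplifying assumptions: instead of full multiple string alignment, each transcript is aligned against the first submission with word-level edit distance, and a per-slot majority vote picks the consensus word. The function names, toy data, and reward formula are our own illustrative choices, not Respeak's actual implementation.

```python
from collections import Counter

GAP = None  # placeholder for an insertion/deletion slot

def edit_distance_table(ref, hyp):
    """Word-level Levenshtein DP table between two token lists."""
    n, m = len(ref), len(hyp)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i
    for j in range(m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # match / substitution
    return dp

def align(ref, hyp):
    """Backtrace the DP table into (ref_word, hyp_word) pairs, using GAP
    where one side has no corresponding word."""
    dp, pairs, i, j = edit_distance_table(ref, hyp), [], len(ref), len(hyp)
    while i > 0 or j > 0:
        if i > 0 and j > 0 and dp[i][j] == dp[i-1][j-1] + (ref[i-1] != hyp[j-1]):
            pairs.append((ref[i - 1], hyp[j - 1])); i -= 1; j -= 1
        elif i > 0 and dp[i][j] == dp[i - 1][j] + 1:
            pairs.append((ref[i - 1], GAP)); i -= 1
        else:
            pairs.append((GAP, hyp[j - 1])); j -= 1
    return list(reversed(pairs))

def merge_transcripts(transcripts):
    """Align every transcript against the first one and take a per-slot
    majority vote (a simplified stand-in for full multiple string alignment)."""
    anchor = transcripts[0]
    slots = [Counter([w]) for w in anchor]
    for hyp in transcripts[1:]:
        k = 0
        for ref_w, hyp_w in align(anchor, hyp):
            if ref_w is GAP:
                continue  # simplification: ignore words the anchor lacks
            slots[k][hyp_w] += 1
            k += 1
    best = [c.most_common(1)[0][0] for c in slots]
    return [w for w in best if w is not GAP]

def word_error_rate(ref, hyp):
    """WER = word-level edit distance / reference length."""
    return edit_distance_table(ref, hyp)[len(ref)][len(hyp)] / len(ref)

# Toy usage: three noisy re-spoken transcripts of the same audio segment.
submissions = [
    "the cat sat on the mat".split(),
    "the cat sat on a mat".split(),
    "a cat sat on the mat".split(),
]
consensus = merge_transcripts(submissions)  # ['the', 'cat', 'sat', 'on', 'the', 'mat']
for words in submissions:
    similarity = 1 - word_error_rate(consensus, words)
    print(" ".join(words), "->", round(similarity, 2))  # basis for the talktime reward
```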

We conducted three cognitive experiments with 24 students in India to evaluate:

  • How audio segment length affects content retention and cognitive load experienced by a Respeak user
  • The impact on content retention and cognitive load when segments are presented in a sequential vs. random order
  • Whether speaking or typing proves to be a more efficient and usable output medium for Respeak users.

The experiments revealed that audio files should be partitioned by detecting natural pauses, yielding segments less than six seconds long. These segments should be presented sequentially to improve retention and reduce cognitive load on users. Lastly, speaking outperformed typing not only on speed but also on WER, suggesting that users should complete micro-transcription tasks by speaking rather than typing.
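For illustration, here is a minimal Python sketch of pause-based segmentation. It assumes mono audio samples normalized to [-1, 1] and uses a simple per-frame RMS energy threshold to detect silence; the frame size, thresholds, and overall approach are our own assumptions, not the segmentation method the Respeak engine actually uses.

```python
import numpy as np

def segment_by_pauses(samples, sr=16000, frame_ms=30, silence_rms=0.01,
                      min_pause_ms=300, max_seg_s=6.0):
    """Split audio at natural pauses so no segment exceeds max_seg_s seconds.

    samples: 1-D float array in [-1, 1]. Returns (start_sample, end_sample) pairs.
    All thresholds are illustrative assumptions.
    """
    frame = int(sr * frame_ms / 1000)
    n_frames = len(samples) // frame
    # per-frame RMS energy; frames below the threshold count as silence
    rms = np.array([np.sqrt(np.mean(samples[i * frame:(i + 1) * frame] ** 2))
                    for i in range(n_frames)])
    silent = rms < silence_rms

    segments, start, last_pause, run = [], 0, None, 0
    min_pause_frames = int(min_pause_ms / frame_ms)
    for i, is_silent in enumerate(silent):
        run = run + 1 if is_silent else 0
        if run >= min_pause_frames:
            last_pause = i  # most recent frame that ends a usable pause
        # once the running segment would reach max_seg_s, cut at the last pause
        if (i - start + 1) * frame_ms >= max_seg_s * 1000:
            cut = last_pause if last_pause is not None and last_pause > start else i
            segments.append((start * frame, (cut + 1) * frame))
            start, last_pause, run = cut + 1, None, 0
    if start < n_frames:
        segments.append((start * frame, n_frames * frame))
    return segments

# Toy usage: one minute of silence with a brief burst of "speech" every 5 seconds.
audio = np.zeros(16000 * 60)
audio[::5 * 16000] = 0.5  # isolated non-silent samples stand in for speech
print(len(segment_by_pauses(audio)))  # -> 10 segments of ~6 seconds each
```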

We then deployed Respeak in India for one month with 25 low-income college students. The Respeak engine segmented 21 widely varying audio files in Hindi and Indian English into 756 short segments. Collectively, Respeak users performed 5,464 microtasks to transcribe 55 minutes of audio content, and earned USD 46 in total. Respeak produced transcriptions with an average WER of 10%, at a transcription cost of USD 0.83 per minute. In addition to providing airtime to users, Respeak also improved their vocabulary and pronunciation skills. The expected payout for an hour of a user's time was 76 INR (USD 1.16) – one-fourth of the average daily wage rate in India. Since voice is a natural and accessible medium of interaction, Respeak has strong potential to be an inclusive and accessible platform. We are conducting more deployments with low-literate and blind people to examine the effectiveness of Respeak.

For more details, please read our full paper published at CHI 2017 here.

Aditya Vashistha, University of Washington

Pooja Sethi, University of Washington

Richard Anderson, University of Washington

Subcontracting Microwork

Mainstream crowdwork platforms treat microtasks as indivisible units; however, in our upcoming CHI 2017 paper, we propose that there is value in re-examining this assumption. We argue that crowdwork platforms can improve their value proposition for all stakeholders by supporting subcontracting within microtasks.

We define three models for microtask subcontracting: real-time assistance, task management, and task improvement:

  • Real-time assistance encompasses a model of subcontracting in which the primary worker engages one or more secondary workers to provide real-time advice, assistance, or support during a task.
  • Task management subcontracting applies to situations in which a primary worker takes on a meta-work role for a complex task, delegating components to secondary workers and taking responsibility for integrating and/or approving the products of the secondary workers’ labor.
  • Task improvement subcontracting entails allowing a primary worker to edit task structure, including clarifying instructions, fixing user interface components, changing the task workflow, and adding, removing, or merging sub-tasks.

Subcontracting of microwork fundamentally alters many of the assumptions currently underlying crowd work platforms, such as economic incentive models and the efficacy of some prevailing workflows. However, subcontracting also legitimizes and codifies some existing informal practices that currently take place off-platform. In our paper, we identify five key issues crucial to creating a successful subcontracting structure, and reflect on design alternatives for each: incentive models, reputation models, transparency, quality control, and ethical considerations.

To learn more about worker motivations for engaging with subcontracting workflows, we conducted some experimental HITs on MTurk. In one, workers could either complete a complex, three-part task themselves or subcontract portions of it to other (hypothetical) workers (giving up some of the associated pay); we then asked these workers why they did or did not choose to subcontract each task component. Money, skills, and interests all factored into these decisions in complex ways.

Implementing and exploring the parameter space of the subcontracting concepts we propose is a key area for future research. Building platforms that support subcontracting workflows in an intentional manner will enable the crowdwork research community to evaluate the efficacy of these choices and further refine this concept. We particularly stress the importance of the ethical considerations component, as our intent in introducing and formalizing concepts related to subcontracting microwork is to facilitate more inclusive, satisfying, efficient, and high-quality work, rather than to facilitate extreme task decomposition strategies that may result in deskilling or untenable wages.

You can download our CHI 2017 paper to read about subcontracting in more detail.  (Fun fact — the idea for this paper began at the CrowdCamp Workshop at HCOMP 2015 in San Diego; Hooray for CrowdCamp!)

Subcontracting Microwork. Proceedings of CHI 2017. Meredith Ringel Morris (Microsoft Research), Jeffrey P. Bigham (Carnegie Mellon University), Robin Brewer (Northwestern University), Jonathan Bragg (University of Washington), Anand Kulkarni (UC Berkeley), Jessie Li (Carnegie Mellon University), and Saiph Savage (West Virginia University).