Call for Participation: GroupSight 2017

The Second Workshop on Human Computation for Image and Video Analysis (GroupSight) will be held on October 24, 2017, at AAAI HCOMP 2017 in Québec City, Canada. It promises an exciting mix of people and papers at the intersection of HCI, crowdsourcing, and computer vision.

The aim of this workshop is to promote greater interaction between the diversity of researchers and practitioners who examine how to mix human and computer efforts to convert visual data into discoveries and innovations that benefit society at large. It will foster in-depth discussion of technical and application issues for how to engage humans with computers to optimize cost/quality trade-offs. It will also serve as an introduction to researchers and students curious about this important, emerging field at the intersection of crowdsourced human computation and image/video analysis.

Topics of Interest

Crowdsourcing image and video annotations (e.g., labeling methods, quality control, etc.)
Humans in the loop for visual tasks (e.g., recognition, segmentation, tracking, counting, etc.)
Richer modalities of communication between humans and visual information (e.g., language, 3D pose, attributes, etc.)
Semi-automated computer vision algorithms
Active visual learning
Studies of crowdsourced image/video analysis in the wild

Submission Details

Submissions are requested in the following two categories: Original Work (not published elsewhere) and Demo (describing new systems, architectures, interaction techniques, etc.). Papers should be submitted as 4-page extended abstracts (including references) using the provided author kit. Demos should also include a URL to a video (max 6 min). Multiple submissions are not allowed. Reviewing will be double-blind.
Previously published work from a recent conference or journal can also be considered; in that case, authors should submit an unrevised copy of their published work, and reviewing will be single-blind. Email submissions to groupsight@outlook.com.

Important Dates

August 23, 2017 (extended from August 14): Deadline for paper submission (5:59 pm EDT)
August 25, 2017: Notification of decision
October 24, 2017: Workshop (full-day)

Link

https://groupsight.github.io

ReTool: Interactive Microtask and Workflow Design through Demonstration

Recently, a growing number of crowdsourcing microtasks require freeform interactions directly on the content (e.g., drawing bounding boxes over specific objects in an image, or marking specific time points in a video clip). However, existing crowdsourcing platforms, such as Amazon Mechanical Turk (MTurk) and CrowdFlower (CF), do not provide direct support for designing interactive microtasks. To design interactive microtasks, especially ones with workflows, requesters have to use programming-based approaches such as TurKit and the AMT SDKs. The need for programming skills, however, is a significant barrier for many requesters.


To lower the barrier to designing and deploying interactive microtasks with workflows, we developed ReTool, a web-based tool that simplifies the process by applying the “Programming by Demonstration” (PbD) concept. In our context, PbD refers to the mechanism by which requesters design interactive microtasks with workflows by giving an example of how the tasks can be completed.

Working with ReTool, a requester can design and publish microtasks in four main steps (an illustrative sketch follows the list):

  • Project Creation: The requester creates a project and uploads a piece of sample content to be crowdsourced.
  • Microtask and Workflow Generation: Depending on the type (text or image) of the sample content, a content specific workspace is generated. The requester then performs a sequence of interactions (e.g. tapping-and-dragging, clicking, etc.) on the content within the workspace. The interactions are recorded and analyzed to generate interactive microtasks with workflows.
  • Previewing Microtask Interface & Workflow: At this step, the requester can preview microtasks and workflows, edit instructions and other properties (e.g., the number of workers), and add verification tasks and advanced workflows (conditional and looping workflows).
  • Microtask Publication: The requester uploads all content to be crowdsourced and receives a URL for accessing the available microtasks. The link can be published to crowdsourcing marketplaces or social network platforms.
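
To make the flavor of this pipeline concrete, here is a purely illustrative sketch of how recorded demonstration interactions might be turned into a microtask workflow. None of the class or field names below come from ReTool itself; they are assumptions made for the example.

```python
# Toy sketch only: maps recorded demonstration interactions to microtasks.
from dataclasses import dataclass
from typing import List

@dataclass
class Interaction:
    kind: str      # e.g. "drag_box", "click" -- recorded from the requester's demo
    target: str    # what the requester acted on in the sample content

@dataclass
class Microtask:
    instruction: str
    workers: int = 3   # hypothetical default number of workers per item

def generate_workflow(demo: List[Interaction]) -> List[Microtask]:
    """Turn each recorded interaction into one microtask; the order of the
    demonstration becomes the order of the workflow."""
    return [
        Microtask(instruction=f"Please {step.kind.replace('_', ' ')} {step.target}")
        for step in demo
    ]

demo = [
    Interaction("drag_box", "around each car in the image"),
    Interaction("click", "on the license plate of each boxed car"),
]
for task in generate_workflow(demo):
    print(task)
```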

We conducted a user study to find out how potential requesters with varying programming skills use ReTool. We compared ReTool with a lower-bound baseline, the MTurk online design tool, as an alternative approach. We recruited 14 participants from different university faculties, taught them what crowdsourcing is and how to design microtasks using both tools, and then asked them to complete three design tasks. The results show that ReTool helps not only programmers but also non-programmers and new crowdsourcers design complex microtasks and workflows in a fairly short time.

For more details, please see our full paper ReTool: Interactive Microtask and Workflow Design through Demonstration published at CHI 2017.

Chen Chen, National University of Singapore

Xiaojun Meng, National University of Singapore

Shengdong Zhao, National University of Singapore

Morten Fjeld, Chalmers University of Technology

Respeak: Voice-based Crowd-powered Speech Transcription System

Recent years have seen the rise of crowdsourcing marketplaces like Amazon Mechanical Turk and CrowdFlower that provide people with additional earning opportunities. However, low-income, low-literate people in resource-constrained settings are often unable to use these platforms because they face a complex array of socioeconomic barriers, literacy constraints and infrastructural challenges. For example, 97% of the households in India do not have access to an Internet connected computer, 47% of the population does not have access to a bank account, and around 72% are illiterate with respect to English.

To design a microtasking platform that provides additional earning opportunities to low-income people with limited language and digital literacy skills, we designed, built, and evaluated Respeak – a voice-based, crowd-powered speech transcription system that combines the benefits of crowdsourcing with automatic speech recognition (ASR) to transcribe audio files in local languages like Hindi and localized accents of well-represented languages like English. Respeak allows people to use their basic spoken language skills – rather than typing skills – to transcribe audio files. Respeak employs a multi-step approach, involving a sequence of segmentation and merging steps:

  • Segmentation: The Respeak engine segments an audio file into utterances that are each three to six seconds long.
  • Distribution to Crowd Workers: Each audio segment is sent to multiple Respeak smartphone application users who listen to the segment and re-speak the same words into the application in a quiet environment.
  • Transcription using ASR: The application uses Google's built-in Android speech recognition API to generate an instantaneous transcript for the segment, albeit with some errors. The user then submits this transcript to the Respeak engine.
  • First-stage Merging: For each segment, the Respeak engine combines the transcripts submitted by the different users into one best estimation transcript using multiple string alignment and majority voting (a simplified sketch appears after the figure below). If errors are randomly distributed, aligning transcripts generated by multiple people reduces the word error rate (WER). Each submitted transcript earns the user a reward of mobile talktime that depends on the similarity between the submitted transcript and the best estimation transcript generated by Respeak. Once a user's cumulative earnings reach 10 INR, a mobile talktime transfer of the same value is processed to them.
  • Second-stage Merging: Finally, the engine concatenates the best estimation transcript for each segment to yield a final transcript.
Respeak System Overview
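
As a rough illustration of the first-stage merging, the sketch below aligns transcripts at the word level and takes a per-position majority vote. This is a simplified stand-in, not Respeak's actual algorithm, which uses proper multiple string alignment.

```python
from collections import Counter
from difflib import SequenceMatcher

def merge_transcripts(transcripts):
    """Toy first-stage merge: align each transcript to a reference at the
    word level and take a per-position majority vote. (Illustrative only;
    a pairwise alignment scheme is weaker than true multiple alignment.)"""
    ref = max(transcripts, key=lambda t: len(t.split())).split()
    votes = [Counter() for _ in ref]               # one ballot per reference word
    for t in transcripts:
        words = t.split()
        matcher = SequenceMatcher(a=ref, b=words, autojunk=False)
        for tag, i1, i2, j1, j2 in matcher.get_opcodes():
            if tag in ("equal", "replace"):
                for offset, i in enumerate(range(i1, i2)):
                    j = j1 + offset
                    if j < j2:
                        votes[i][words[j]] += 1
    return " ".join(c.most_common(1)[0][0] for c in votes if c)

print(merge_transcripts([
    "the cat sat on the mat",
    "the cat sat on a mat",
    "the bat sat on the mat",
]))  # -> "the cat sat on the mat"
```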

We conducted three cognitive experiments with 24 students in India to evaluate:

  • How audio segment length affects content retention and cognitive load experienced by a Respeak user
  • The impact on content retention and cognitive load when segments are presented in a sequential vs. random order
  • Whether speaking or typing proves to be a more efficient and usable output medium for Respeak users.

The experiments revealed that audio files should be partitioned by detecting natural pauses, yielding segments of less than six seconds in length. These segments should be presented sequentially to ensure higher retention and lower cognitive load for users. Lastly, speaking outperformed typing not only on speed but also on WER, suggesting that users should complete micro-transcription tasks by speaking rather than typing.

We then deployed Respeak in India for one month with 25 low-income college students. The Respeak engine segmented 21 widely varying audio files in Hindi and Indian English into 756 short segments. Collectively, Respeak users performed 5,464 microtasks to transcribe 55 minutes of audio content and earned USD 46. Respeak produced transcriptions with an average WER of 10%, at a cost of USD 0.83 per minute of speech. In addition to providing airtime to users, Respeak also improved their vocabulary and pronunciation skills. The expected payout for an hour of their time was 76 INR (USD 1.16) – one-fourth of the average daily wage rate in India. Since voice is a natural and accessible medium of interaction, Respeak has strong potential to be an inclusive and accessible platform. We are conducting more deployments with low-literate people and blind people to examine the effectiveness of Respeak.

For more details, please read our full paper published at CHI 2017 here.

Aditya Vashistha, University of Washington

Pooja Sethi, University of Washington

Richard Anderson, University of Washington

Communicating Context to the Crowd


Crowdsourcing has traditionally consisted of short, independent microtasks that require no background. The advantage of this strategy is that work can be decoupled and assigned to independent workers. But this strategy struggles to support increasingly complex tasks, such as writing or programming, that are not independent of their context.

For instance, imagine that you ask a crowd worker to write a biography for a speaker you’ve invited to your workshop. After the work is completed, you realize that the biography is written in an informal, personal tone. This is not technically wrong; it’s just not what you had in mind. You realize that you could have added a line to your task description asking for a formal, academic tone. However, there are countless nuances to a writing task that can’t all be predicted beforehand. This is what we are interested in: the context of a task, meaning the collection of conditions and tacit information surrounding it (e.g., the fact that the biography is needed for an academic workshop).

IF THIS INFORMATION CAN’T BE PRE-PACKAGED AND SENT ALONG WITH THE TASK, WHAT CAN WE DO ABOUT IT? 


OUR APPROACH IS TO ITERATE: do some work, communicate with the requester, and edit to fix errors. How can we support communication between the requester and crowd workers to maximize benefits while minimizing costs? If achieved, this goal would create the conditions for crowd work that is more complex and integrated than currently possible.

The main takeaway is to support this communication through structured microtasks. We have designed five different mechanisms for structured communication, including highlighting, commenting/editing, and identifying the “main problem.”


We compared these mechanisms in two studies: the first measured the benefit of each mechanism, and the second measured its costs to the requester (e.g., cognitive demand). We found that these mechanisms are most effective when the writing is in its early phases. For text that is already high quality, the mechanisms become less effective and can even be counterproductive.


We also found that the mechanisms had varying benefits depending on the quality of the initial text. Early on, when content quality is poor, the requester needs to communicate major issues, so identifying the “main problem” was most effective at improving the writing. Later, for average-quality content, the different mechanisms have relatively similar added value.

Finally, we found that the cost of a mechanism for the requester is not always correlated with the value that it adds. For instance, for average quality paragraphs, commenting/editing was very costly but did not provide more value than simply highlighting.

 

For more, see our full paper, Communicating Context to the Crowd for Complex Writing Tasks.
Niloufar Salehi, Stanford University
Jaime Teevan, Microsoft Research
Shamsi Iqbal, Microsoft Research
Ece Kamar, Microsoft Research

Who makes a topic trend on Twitter?

Users on social media sites like Twitter are increasingly relying on crowdsourced recommendations called Trending Topics to find important events and breaking news stories. Topics (mostly keywords, e.g., hashtags) are recommended as trending when they exhibit a sharp spike in popularity, i.e., when their usage by the crowd suddenly jumps at a particular time.

While prior works have attempted to classify and predict Twitter trending topics, in this work, we ask a different question — who are the users who make different topics worthy of being recommended as trending?

Specifically, we analyse the demographics of the crowds promoting different trending topics on Twitter. By promoters of a topic, we refer to the users who posted on the topic before it became trending, thereby contributing to the topic’s selection as a trend.

We gathered extensive data from Twitter from July to September, 2016, including millions of users posting on thousands of topics, both before and after the topics became trending. We inferred three demographic attributes for these Twitter users — their gender, race (Asian / Black / White), and age — from their profile photos.

Looking at the demographics of the promoters reveals interesting patterns. For instance, here are the gender and racial demographics of the promoters of some of the Twitter trends on 3rd May 2017:

  • #wikileaks: 24% women, and 76% men
  • #wednesdayWisdom: 52% women, and 48% men
  • #comey: 9% Asian, 12% Black, and 79% White
  • #BlackWomenAtWork: 15% Asian, 52% Black, and 33% White

It is evident that different trends are promoted by widely different demographic groups.
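
One simple way to quantify how far a promoter crowd is from the overall user population is a divergence between the two demographic distributions. This is only an illustrative measure, not necessarily the one used in the paper, and the 50/50 baseline below is an assumption rather than a number from the study.

```python
import math

def js_divergence(p: dict, q: dict) -> float:
    """Jensen-Shannon divergence (in bits) between two demographic
    distributions given as {group: fraction}. 0 means identical crowds;
    larger values mean more divergent crowds."""
    keys = set(p) | set(q)
    m = {k: 0.5 * (p.get(k, 0.0) + q.get(k, 0.0)) for k in keys}

    def kl(a):
        return sum(a.get(k, 0.0) * math.log2(a.get(k, 0.0) / m[k])
                   for k in keys if a.get(k, 0.0) > 0)

    return 0.5 * kl(p) + 0.5 * kl(q)

# Gender split of #wikileaks promoters (from the post) vs. a hypothetical
# 50/50 overall-population split (assumption for illustration only).
promoters = {"women": 0.24, "men": 0.76}
overall = {"women": 0.50, "men": 0.50}
print(round(js_divergence(promoters, overall), 3))
```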


Our analysis led to the following insights:

  • A large fraction of trending topics are promoted by crowds whose demographics are significantly different from Twitter’s overall user population.
  • We find clear evidence of under-representation of certain demographic groups among the promoters of trending topics, with middle-aged Black women being the most under-represented group.
  • Once a topic becomes trending, it is adopted (i.e., posted) by users whose demographics are less divergent from the overall Twitter population, compared to the users who were promoting the topic before it became trending.
  • Topics promoted predominantly by a single demographic group tend to be of niche interest to that particular group.
  • During events of wider interest (e.g., national elections, police shootings), the topics promoted by different demographic groups tend to reflect their diverse perspectives, which could help understand the different facets of public opinion.

Try out our Twitter app to check demographics of the crowds promoting various trends.

For details, see our full paper, Who Makes Trends? Understanding Demographic Biases in Crowdsourced Recommendations, at ICWSM 2017.

Abhijnan Chakraborty, IIT Kharagpur, India and MPI-SWS, Germany
Johnnatan Messias, Federal University of Minas Gerais, Brazil
Fabricio Benevenuto, Federal University of Minas Gerais, Brazil
Saptarshi Ghosh, IIT Kharagpur, India
Niloy Ganguly, IIT Kharagpur, India
Krishna P. Gummadi, MPI-SWS, Germany

Don’t Bother Me. I’m Socializing!

IT DOES BOTHER US when we see our friends checking their smartphones while having a conversation with us. Although people want to focus on a conversation, it is hard to ignore a series of notification alarms coming from their smartphones. Smartphone users reportedly receive an average of tens to hundreds of push notifications a day [1,2]. Despite their usefulness for immediate delivery of information, untimely smartphone notifications are considered a source of distraction and annoyance during social interactions.

(Left) Notifications interrupt an ongoing social interaction. (Right) Notifications are deferred to a breakpoint, in between two activities, so that people are less interrupted by notifications.

 

TO ADDRESS THIS PROBLEM, we have proposed a novel notification management scheme in which the smartphone defers notifications until an opportune moment during social interactions. A breakpoint [3] is a term from psychology that describes a unit of time between two adjacent actions. The intuition is that there exist breakpoints at which notifications do not interrupt a social interaction, or do so only minimally.

A screenshot of the video survey. Participants are asked to respond whether this moment is appropriate to receive a notification.

TO DISCOVER SUCH BREAKPOINTS, we devised a video survey in which participants watch a typical social interaction scenario and respond whether prompted moments in the video are appropriate for receiving smartphone notifications. Participants identified the following four types of breakpoints as appropriate in a social interaction: (1) a long silence, (2) a user leaving the table, (3) others using smartphones, and (4) a user left alone.

Types of social context detected by SCAN.

BASED ON THE INSIGHTS FROM THE VIDEO SURVEY, we designed and implemented a Social Context-Aware smartphone Notification system, SCAN, that defers smartphone notifications until a breakpoint. SCAN is a mobile application that detects social context using only built-in sensors. It also works collaboratively with the rest of the group members’ smartphones to sense collocated members, conversation, and others’ smartphone use. SCAN then classifies a breakpoint based on the social context and decides whether to deliver or defer notifications.
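
As a purely illustrative sketch, a rule layer over the sensed social context might look like the following. SCAN's real classifier is not reproduced here; the feature names and the silence threshold are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class SocialContext:
    # Hypothetical features; the real SCAN derives its social context
    # collaboratively from the group members' built-in phone sensors.
    silence_duration_s: float   # length of the current lull in conversation
    user_at_table: bool         # is the notification's recipient still at the table?
    others_using_phones: bool   # are other group members on their phones?
    user_left_alone: bool       # has everyone else left?

def is_breakpoint(ctx: SocialContext, long_silence_s: float = 10.0) -> bool:
    """True if any of the four breakpoint types from the video survey holds.
    The 10-second silence threshold is an assumption, not a number from the paper."""
    return (
        ctx.silence_duration_s >= long_silence_s   # (1) a long silence
        or not ctx.user_at_table                   # (2) the user leaving the table
        or ctx.others_using_phones                 # (3) others using smartphones
        or ctx.user_left_alone                     # (4) the user left alone
    )

# A notification would be delivered as soon as is_breakpoint(...) returns True
# and held in a queue otherwise.
```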

SCAN HAS BEEN EVALUATED on ten groups of friends in a controlled setting. SCAN detects the four target breakpoint types with high accuracy (precision = 92.0%, recall = 82.5%). Most participants appreciated the value of deferred notifications and found the selected breakpoints appropriate. Overall, we demonstrated that breakpoint-based smartphone notification management is a promising approach to reducing interruptions during social interactions.

WE ARE CURRENTLY EXTENDING SCAN to apply it to various types of social interactions. We also aim to add personalized notification management and to address technical challenges such as system robustness and energy efficiency. Our ultimate goal is to release SCAN as an Android application in Google Play Store and help users to be less distracted by smartphone notifications during social interactions.

You can check out our CSCW 2017 paper to read about this work in more detail.  

“Don’t Bother Me. I’m Socializing!: A Breakpoint-Based Smartphone Notification System”. Proceedings of CSCW 2017. Chunjong Park, Junsung Lim, Juho Kim, Sung-Ju Lee, and Dongman Lee (KAIST)


[1]”An In-situ Study of Mobile Phone Notifications”. Proceedings of MobileHCI 2014. Martin Pielot, Karen Church, and Rodrigo de Oliveira.
[2] “Hooked on Smartphones: An Exploratory Study on Smartphone Overuse Among College Students”. Proceedings of CHI 2014. Uichin Lee, Joonwon Lee, Minsam Ko, Changhun Lee, Yuhwan Kim, Subin Yang, Koji Yatani, Gahgene Gweon, Kyong-Mee Chung, and Junehwa Song.
[3] “The perceptual organization of ongoing behavior”. Journal of Experimental Social Psychology 12, 5 (1976), 436–450. Darren Newtson and Gretchen Engquist.

Subcontracting Microwork

Mainstream crowdwork platforms treat microtasks as indivisible units; however, in our upcoming CHI 2017 paper, we propose that there is value in re-examining this assumption. We argue that crowdwork platforms can improve their value proposition for all stakeholders by supporting subcontracting within microtasks.

We define three models for microtask subcontracting: real-time assistance, task management, and task improvement:

  • Real-time assistance encompasses a model of subcontracting in which the primary worker engages one or more secondary workers to provide real-time advice, assistance, or support during a task.
  • Task management subcontracting applies to situations in which a primary worker takes on a meta-work role for a complex task, delegating components to secondary workers and taking responsibility for integrating and/or approving the products of the secondary workers’ labor.
  • Task improvement subcontracting entails allowing a primary worker to edit task structure, including clarifying instructions, fixing user interface components, changing the task workflow, and adding, removing, or merging sub-tasks.

Subcontracting of microwork fundamentally alters many of the assumptions currently underlying crowd work platforms, such as economic incentive models and the efficacy of some prevailing workflows. However, subcontracting also legitimizes and codifies some existing informal practices that currently take place off-platform. In our paper, we identify five key issues crucial to creating a successful subcontracting structure, and reflect on design alternatives for each: incentive models, reputation models, transparency, quality control, and ethical considerations.

To learn more about worker motivations for engaging with subcontracting workflows, we conducted some experimental HITs on mTurk. In one, workers could either complete a complex, three-part task themselves or subcontract portions to other (hypothetical) workers (giving up some of the associated pay); we then asked these workers why they did or did not choose to subcontract each task component. Money, skills, and interests all factored into these decisions in complex ways.

Implementing and exploring the parameter space of the subcontracting concepts we propose is a key area for future research. Building platforms that support subcontracting workflows in an intentional manner will enable the crowdwork research community to evaluate the efficacy of these choices and further refine this concept. We particularly stress the importance of the ethical considerations component, as our intent in introducing and formalizing concepts related to subcontracting microwork is to facilitate more inclusive, satisfying, efficient, and high-quality work, rather than to facilitate extreme task decomposition strategies that may result in deskilling or untenable wages.

You can download our CHI 2017 paper to read about subcontracting in more detail.  (Fun fact — the idea for this paper began at the CrowdCamp Workshop at HCOMP 2015 in San Diego; Hooray for CrowdCamp!)

Subcontracting Microwork. Proceedings of CHI 2017. Meredith Ringel Morris (Microsoft Research), Jeffrey P. Bigham (Carnegie Mellon University), Robin Brewer (Northwestern University), Jonathan Bragg (University of Washington), Anand Kulkarni (UC Berkeley), Jessie Li (Carnegie Mellon University), and Saiph Savage (West Virginia University).

Spare5’s Tips for Sourcing Better Training Data

Mere minutes after our awesome advisor, Dan Weld, mentioned The 4th AAAI Conference on Human Computation and Crowdsourcing (HCOMP), we were all-in. It’s rare to scroll through an event program and realize that each and every session is going to be so relevant and useful to your work, but that’s exactly how we were all feeling with this event’s agenda. And it did not disappoint!

We returned to Seattle from Austin newly excited and energized to enable folks to earn spare change in their spare time in a fun, engaging way, while providing practitioners with custom, quality, accurate machine learning and AI training data.

Our decision to sponsor HCOMP required very little human computation, and we were thrilled to give a keynote talk on our tips for sourcing better training data. We’ve created an online version of our presentation deck for your reference; hope it’s helpful.

As a brief review, we recommend:

  • great UI & UX for annotators
  • interactive workflow design on mobile & web
  • known, trained, qualified annotators
  • real-time QA & annotator management
  • algorithmic task distribution & quality scoring

Details in the deck.

If you’d like to learn more about these ideas or have something to add, please give us a shout. We’re also particularly interested in the topic of bias in training data, so if this is a concern of yours as well, get in touch and let’s study it together (we’ll bring the data!).

Finally, as we noted in our talk, we’re hiring! We’re growing our data science team and looking for computer vision experts specifically. Check out our openings if you’re looking for your next great opportunity.

A big thanks to everyone at HCOMP. We had a great time and look forward to continuing the many discussions we started there.

Until next year!

— Spare5

Report: GroupSight Workshop at HCOMP 2016 – Human Computation for Image and Video Analysis

The GroupSight workshop hit a surprisingly resonant chord with researchers at the intersection of human computation and computer vision in its first year at HCOMP 2016. My co-organizers, Danna Gurari (UT Austin) and Steve Branson (Caltech), and I sought to bring together people from widely different areas of computer vision and computational photography to explore how CV researchers are using the crowd. Attendees included researchers and students curious about this important, emerging field at the intersection of crowdsourced human computation and image/video analysis. About 30 attendees were treated to an unexpected diversity of approaches to crowd computation from some of the most exciting researchers in CV, including talks from:

– Kristen Grauman (UT Austin) : Active and Interactive Image and Video Segmentation

– Kavita Bala (Cornell/GrokStyle) : Crowdsourcing for Material Recognition in the Wild

– Ariel Shamir (Interdisciplinary Center Israel) : Passive Human Computation

– Kotaro Hara (U Maryland, College Park) : Using Crowdsourcing, Computer Vision, and Google Street View to Collect Sidewalk Accessibility Data

– Brendan McCord (Evolv Technology) : AI + IQ: Building Best of Breed Security Systems

We had short talks from 6 students on topics ranging from geolocation to medical imaging to clustering and summarization, as well as encore-track poster presentations from HCOMP and ECCV. (Their papers can be found here.)

Best paper winner Shay Sheinfeld and best paper runner-up Mehrnoosh Sameki

Best paper winner Shay Sheinfeld presented work with Yotam Gingold and Ariel Shamir demonstrating a truly inventive use of the crowd for Video Summarization using Crowdsourced Causality Graphs. Attendees marveled at state-of-the-art video summarization that made it seem like our future AI video editors have finally arrived. Best paper runner-up Mehrnoosh Sameki delighted us with surreal medical videos of cells splitting and joining, all the while maintaining a near-perfect segmentation contour achieved by an interactive pipeline.

Our industrial sponsor Evolv Technology hosted a cozy lunch where students and senior researchers were able to discuss how to advance novel research in this nascent area of scientific exploration.

The aim of this workshop was to promote greater interaction between the diversity of researchers and practitioners who examine how to mix human and computer efforts to convert visual data into discoveries and innovations that benefit society at large. We succeeded in fostering an in-depth discussion of technical and application issues for how to engage humans with computers to optimize cost/quality trade-offs. My big take-away was that if you are doing CV or Graphics research, there is undoubtedly a cool way to exploit human intelligence in your pipeline for unexpected and remarkable outcomes. We look forward to future iterations of GroupSight at HCOMP and possibly ICCV or a future CVPR. If you are interested in participating in a future GroupSight, please don’t hesitate to contact one of us!

CrowdCamp Report: Gathering Causality Labels

Correlation does not imply causation. This phrase gets thrown around by scientists, statisticians, and laypeople all the time. It means that you shouldn’t use data about two things to infer that one thing causes the other, at least not without making a lot of limiting assumptions. But it is difficult to imagine ignoring causal inference when it seems to be such a key ingredient of intelligent decision-making. Machine learning approaches exist for using data to estimate causal structure, but we think it’s interesting that humans seem to judge causality without even looking at data. So, the goal of our CrowdCamp project was to gather some such judgements from real people.

To start, we made a list of variable names for which we hypothesized humans might have opinions about causal relationships without ever (or at least not recently) having looked at the related data. Some of these variables include:

Real Daily Wages, Oil Prices, Internet Traffic, Residential Gas Usage, Power Consumption, Precipitation, Water Usage, Traffic Fatalities, Passenger Miles Flown in Aircraft, Auto Registration, Bus Ridership, Copper Prices, Wheat Harvest, Private Housing Units Started, Power Plant Expenditures, Price of Chicken, Sales of Shampoo, Beer Shipments, Percent of Men with Full Beards,
Pigs Slaughtered, Cases of Measles, Thickness of Ozone Layer, etc.

Using Amazon Mechanical Turk (AMT), we presented workers with sets of ten randomly chosen pairs of variables and asked them to choose the most fitting causal relationship between variable A and variable B from these four choices:

  • A causes B
  • B causes A
  • Other variable Z causes A and B
  • No causal relationship

Workers were advised “it’s possible that A and B may be related in several of the above ways. If you feel this is the case, choose the one that you believe is the strongest relationship.”

Example variable pair presented to crowd

We collected 10 judgements from each of 50 workers, for a total of 500 judgements on pairs drawn from 42 variables. When workers chose the option of a third variable causing both presented variables, we asked them to name the third variable (though we didn’t force them to). Of the 500 judgements, 74 were A->B, 85 were B->A, 34 were Z->A&B, and 307 were no causal relationship. The most common one-directional causality judgements were:

1. Church Attendance -> Internet Traffic
2. Alcohol Demand -> Public Drunkenness
3. Federal Reserve Interest Rate -> Price of Chicken
4. Bus Ridership -> Oil Prices
5. Alcohol Demand -> Number of Forest Fires
6. Public Drunkenness -> Armed Robberies
7. Power Consumption -> Birth Rate
8. Church Attendance -> Armed Robberies
9. Bus Ridership -> Birth Rate
10. Price of Chicken -> Total Rainfall

Many of these are not surprising. Of course interest rates affect prices and alcohol consumption affects drunkenness. Others not so much… why would chicken prices affect rainfall? Also, we realize we only asked about the strength of the causal relationship, not the sign. So we have no way of knowing whether the workers believe going to church causes an increase or a decrease in armed robberies.
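
For concreteness, here is a minimal sketch of the kind of tallying used to find each pair's most common label. The judgement tuples below are made up for illustration; they are not the collected data.

```python
from collections import Counter, defaultdict

# Hypothetical raw judgements: (variable A, variable B, worker's choice).
judgements = [
    ("Alcohol Demand", "Public Drunkenness", "A causes B"),
    ("Alcohol Demand", "Public Drunkenness", "A causes B"),
    ("Alcohol Demand", "Public Drunkenness", "No causal relationship"),
    ("Oil Prices", "Bus Ridership", "B causes A"),
]

def majority_labels(judgements):
    """Tally the workers' choices for each variable pair and keep the
    most common choice as that pair's label."""
    per_pair = defaultdict(Counter)
    for a, b, choice in judgements:
        per_pair[(a, b)][choice] += 1
    return {pair: counts.most_common(1)[0][0] for pair, counts in per_pair.items()}

print(majority_labels(judgements))
# {('Alcohol Demand', 'Public Drunkenness'): 'A causes B',
#  ('Oil Prices', 'Bus Ridership'): 'B causes A'}
```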

We also collected some interesting answers for the optional third variable Z causing both A and B. Most of the time it was some big general factor like population, economic conditions, geographical area, or fuel prices. There were some creative ones too:

A: Deaths from Homicides
B: Beer Shipments
Z: Thieves trying to intercept and steal beer shipments

So we collected all these judgements; now what do we do with them? For machine learning applications, we see three options:

  1. Use as training/testing labels for causal inference techniques.
  2. See how well they serve for building informative priors to regularize regression problems.
  3. Use them to guide structure learning in probabilistic graphical models.

In conclusion, it was interesting to see how workers on AMT perceived causal relationships between economic, demographic, and miscellaneous variables by only looking at the names of the variables rather than actual data. We think it would be useful to take such qualitative “common-sense” preconceptions into account when designing automatic models of inference.

Alex Braylan, University of Texas at Austin
Kanika Kalra, Tata Research
Tyler McDonnell, University of Texas at Austin