CrowdCamp Report: Gathering Causality Labels

Correlation does not imply causation. This phrase gets thrown around by scientists, statisticians, and laypeople all the time. It means that you shouldn’t use data about two things to infer that one thing causes the other, at least not without making a lot of limiting assumptions. But it is difficult to imagine ignoring causal inference when it seems to be such a key ingredient of intelligent decision-making. Machine learning approaches exist for using data to estimate causal structure, but we think it’s interesting that humans seem to judge causality without even looking at data. So, the goal of our CrowdCamp project was to gather some such judgements from real people.

To start, we made a list of variable names for which we hypothesized humans might have opinions about causal relationships without ever (or at least not recently) having looked at the related data. Some of these variables include:

Real Daily Wages, Oil Prices, Internet Traffic, Residential Gas Usage, Power Consumption, Precipitation, Water Usage, Traffic Fatalities, Passenger Miles Flown in Aircraft, Auto Registration, Bus Ridership, Copper Prices, Wheat Harvest, Private Housing Units Started, Power Plant Expenditures, Price of Chicken, Sales of Shampoo, Beer Shipments, Percent of Men with Full Beards,
Pigs Slaughtered, Cases of Measles, Thickness of Ozone Layer, etc.

Using Amazon Mechanical Turk (AMT), we presented workers with sets of ten randomly chosen pairs of variables and asked them to choose the most fitting causal relationship between variable A and variable B from these four choices:

  • A causes B
  • B causes A
  • Other variable Z causes A and B
  • No causal relationship

Workers were advised “it’s possible that A and B may be related in several of the above ways. If you feel this is the case, choose the one that you believe is the strongest relationship.”

Example variable pair presented to the crowd.
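For concreteness, here is a minimal sketch in Python of how variable pairs might be sampled and bundled into ten-question HITs of this kind. The variable list is abbreviated and the structure is hypothetical, not the exact scripts we used:

```python
import random

# A few of the variable names from the study (abbreviated list).
VARIABLES = [
    "Real Daily Wages", "Oil Prices", "Internet Traffic", "Precipitation",
    "Traffic Fatalities", "Bus Ridership", "Price of Chicken", "Beer Shipments",
]

OPTIONS = [
    "A causes B",
    "B causes A",
    "Other variable Z causes A and B",
    "No causal relationship",
]

def make_hit(num_pairs=10, rng=random):
    """Sample `num_pairs` distinct variable pairs for one worker's HIT."""
    hit = []
    for _ in range(num_pairs):
        a, b = rng.sample(VARIABLES, 2)  # two distinct variables per question
        hit.append({"A": a, "B": b, "options": OPTIONS})
    return hit

if __name__ == "__main__":
    for question in make_hit():
        print(question["A"], "vs.", question["B"])
```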

We collected 10 judgements each from 50 workers, for a total of 500 judgements on pairs drawn from 42 variables. When workers chose the option of a third variable causing both presented variables, we asked them to name the third variable (though we didn’t force them to). Of the 500 judgements, 74 were A->B, 85 were B->A, 34 were Z->A&B, and 307 were no causality. The most common one-directional causality judgements (tallied as in the sketch after the list) were:

1. Church Attendance -> Internet Traffic
2. Alcohol Demand -> Public Drunkenness
3. Federal Reserve Interest Rate -> Price of Chicken
4. Bus Ridership -> Oil Prices
5. Alcohol Demand -> Number of Forest Fires
6. Public Drunkenness -> Armed Robberies
7. Power Consumption -> Birth Rate
8. Church Attendance -> Armed Robberies
9. Bus Ridership -> Birth Rate
10. Price of Chicken -> Total Rainfall
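A tally like the one above takes only a few lines; this sketch assumes a hypothetical judgements.csv export with one row per worker judgement:

```python
import pandas as pd

# Hypothetical export: columns variable_a, variable_b, label,
# where label is one of the four answer options shown to workers.
df = pd.read_csv("judgements.csv")

# Overall distribution of the four answer types.
print(df["label"].value_counts())

# Rank variable pairs by how often workers chose a one-directional answer.
directional = df[df["label"].isin(["A causes B", "B causes A"])]
counts = (directional
          .groupby(["variable_a", "variable_b", "label"])
          .size()
          .sort_values(ascending=False))
print(counts.head(10))
```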

Many of these are not surprising: of course interest rates affect prices and alcohol consumption affects drunkenness. Others, not so much: why would chicken prices affect rainfall? We also realize we asked only about the strength of the causal relationship, not its sign, so we have no way of knowing whether workers believe going to church causes an increase or a decrease in armed robberies.

We also collected some interesting answers for the optional third variable Z causing both A and B. Most of the time it was some big general factor like population, economic conditions, geographical area, or fuel prices. There were some creative ones too:

A: Deaths from Homicides
B: Beer Shipments
Z: Thieves trying to intercept and steal beer shipments

So we have collected all these judgements; what do we do with them now? For machine learning applications, we see three options (a small sketch of one such use follows the list):

  1. Use as training/testing labels for causal inference techniques.
  2. See how well they serve for building informative priors to regularize regression problems.
  3. Use them to guide structure learning in probabilistic graphical models.
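As an illustration of how the second and third options might look, the crowd's votes could be converted into a soft prior over directed edges between variables. The sketch below uses made-up vote tuples and a simple pseudo-count smoothing rule of our own choosing:

```python
from collections import Counter, defaultdict

# Hypothetical (A, B, label) vote tuples collected from the crowd.
votes = [
    ("Alcohol Demand", "Public Drunkenness", "A causes B"),
    ("Alcohol Demand", "Public Drunkenness", "A causes B"),
    ("Alcohol Demand", "Public Drunkenness", "No causal relationship"),
]

def edge_priors(votes, pseudo=1.0):
    """Estimate P(A -> B) as the smoothed fraction of workers asserting it."""
    tallies = defaultdict(Counter)
    for a, b, label in votes:
        tallies[(a, b)][label] += 1
    priors = {}
    for (a, b), counts in tallies.items():
        total = sum(counts.values()) + 4 * pseudo  # four answer options
        priors[(a, b)] = (counts["A causes B"] + pseudo) / total
        priors[(b, a)] = (counts["B causes A"] + pseudo) / total
    return priors

# Such priors could weight candidate edges during structure learning
# or regularize regression coefficients toward the crowd's beliefs.
print(edge_priors(votes))
```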

In conclusion, it was interesting to see how workers on AMT perceived causal relationships between economic, demographic, and miscellaneous variables by looking only at the names of the variables rather than actual data. We think it would be useful to take such qualitative “common-sense” preconceptions into account when designing automatic models of inference.

Alex Braylan, University of Texas at Austin
Kanika Kalra, Tata Research
Tyler McDonnell, University of Texas at Austin

CrowdCamp Report: Finding Word Similarity with a Human Touch

Semantic similarity and semantic relatedness are features of natural language that contribute to the challenge machines face when analyzing text. Although semantic relatedness is a complex phenomenon, only a few ground truth data sets exist. We argue that the available corpora used to evaluate the performance of natural language tools do not capture all elements of the phenomenon. We present a set of simple interventions that illustrate that 1) framing effects influence similarity perception, 2) the distribution of similarity ratings across multiple users is important, and 3) semantic relatedness is asymmetric.

A number of metrics in the literature attempt to model and evaluate semantic similarity in natural language. Semantic similarity has applications in areas such as semantic search and text mining. Semantic similarity has long been considered a more specific concept than semantic relatedness: relatedness also covers relations such as antonymy and meronymy, and is therefore the more general of the two.

Different approaches have been attempted to measure semantic relatedness and similarity. Some methods use structured taxonomies such as WordNet; alternative approaches define relatedness between words using search engines (e.g., based on Google counts) or Wikipedia. All of these methods are evaluated based on their correlation with human ratings. Yet only a few benchmark data sets exist, one of the most widely used being the WS-353 data set [1]. As the corpus is very small and the sample size per pair is low, it is arguable whether all relevant phenomena are in fact present in the provided data set.

In this study, we aim to understand how human raters perceive word-based semantic relatedness. We argue that even simple word-based similarity questions surface phenomena that existing test sets do not capture. Our hypotheses are as follows:

(H1) The framing effect influences similarity ratings by human assessors.
(H2) The distribution of similarity ratings does not follow a normal distribution.
(H3) Semantic relatedness is not symmetric: the relatedness between two words (e.g., tiger and cat) yields different similarity ratings when the word order is reversed.

To verify our hypotheses, we collected similarity ratings on word pairs from the WS-353 data set. We randomly selected 102 word pairs from WS-353 and collected similarity ratings on them through Amazon Mechanical Turk (MTurk). We collected five datasets for these 102 pairs. Each collection used a different task design and was separated into two batches of 51 word pairs each. Each batch received ratings from 50 unique contributors, so that each pair of words received 50 ratings in each condition.

The ways the questions were framed for the crowd workers are shown in the following figure. Each question was framed under four different conditions. The first two of these are “How is X similar to Y?” (sim) and “How is Y similar to X?” (inverted-sim). We further repeated them asking for the difference between the two words (dissim and inverted-dissim, respectively). Since the scale is reversed in dissim and inverted-dissim, the dissimilarity ratings were converted into similarity ratings for comparison (see the sketch below the figure).

The different ways of framing each question.
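The conversion mentioned above is a simple scale reversal; a minimal sketch, assuming a 0-10 rating scale like the one used by WS-353:

```python
SCALE_MAX = 10  # assumed rating scale (WS-353 uses 0-10); adjust if different

def to_similarity(rating, condition):
    """Map a raw rating onto the similarity scale, reversing dissimilarity."""
    if condition in ("dissim", "inverted-dissim"):
        return SCALE_MAX - rating
    return rating

# Example: a dissimilarity rating of 3 corresponds to a similarity of 7.
print(to_similarity(3, "dissim"))
```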

We compared the distributions of similarity ratings in the original WS-353 dataset and in our dataset in order to confirm the framing effect. The mean of the 50 ratings was calculated for each pair in our dataset and compared with the original similarity ratings in WS-353. We restricted WS-353 to exactly the same 102 word pairs to ensure consistency between the two settings. The distributions were found to be significantly different (p < 0.001, paired t-test).
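The comparison itself is a standard paired t-test on per-pair means; the sketch below assumes two hypothetical CSV files of mean ratings aligned on the same 102 word pairs:

```python
import pandas as pd
from scipy import stats

# Hypothetical files: one mean rating per word pair in each setting.
ws353 = pd.read_csv("ws353_subset.csv")  # columns: word1, word2, mean_rating
ours = pd.read_csv("our_ratings.csv")    # same columns, same 102 pairs

merged = ws353.merge(ours, on=["word1", "word2"], suffixes=("_ws", "_ours"))
t_stat, p_value = stats.ttest_rel(merged["mean_rating_ws"],
                                  merged["mean_rating_ours"])
print(f"paired t-test: t = {t_stat:.3f}, p = {p_value:.4f}")
```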

Our preliminary results show that similarity ratings for some word pairs in the WS-353 dataset do not follow a normal distribution. Some of the distributions reveal different perceptions of similarity, highlighted by multiple peaks. A possible explanation is that the lower peak can be attributed to individuals who are aware of the factual differences between a “sun” or “star” and an actual planet orbiting a “star”, while the others are not.
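One simple way to probe H2 for a single word pair is a normality test such as Shapiro-Wilk (multiple peaks themselves are easiest to see in a histogram); a sketch with made-up ratings:

```python
import numpy as np
from scipy import stats

# Hypothetical raw ratings for one word pair (a bimodal-looking sample).
ratings = np.array([7, 8, 8, 2, 3, 7, 9, 2, 8, 7, 3, 8, 9, 2, 7, 8])

# Shapiro-Wilk tests the null hypothesis that the sample is normally
# distributed; a small p-value is evidence against normality.
w_stat, p_value = stats.shapiro(ratings)
print(f"Shapiro-Wilk: W = {w_stat:.3f}, p = {p_value:.4f}")
```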

To verify the third hypothesis, we compared the similarity ratings of sim (dissim) with those of inverted-sim (inverted-dissim). Scatter plots of similarity ratings in the two word orders, for both the similarity and the dissimilarity questions, show that semantic relatedness in different orders does not take the same mean values, indicating that semantic relatedness is asymmetric. The asymmetric relationship consistently appears in the different types of questions (i.e., similarity and dissimilarity). The results show a remarkable difference between the similarity of “baby” to “mother” and the similarity of “mother” to “baby”, indicating that the asymmetric relationship between mother and baby is reflected in the subjective similarity ratings.

To measure inter-rater reliability, we computed Krippendorff’s alpha for both the original dataset and the one obtained in the current analysis. Krippendorff’s alpha is a statistical measure of the agreement achieved when coding a set of units of analysis in terms of the values of a variable.
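For illustration, Krippendorff's alpha can be computed with the third-party krippendorff Python package (an assumption about tooling, not necessarily what we used); raters are rows, word pairs are columns, and missing judgements are NaN:

```python
import numpy as np
import krippendorff  # third-party package: pip install krippendorff

# Made-up ratings: one row per rater, one column per word pair.
reliability_data = np.array([
    [7.0, 8.0, np.nan, 2.0, 5.0],
    [6.0, 9.0, 4.0,    2.0, np.nan],
    [7.0, 7.0, 5.0,    1.0, 4.0],
])

alpha = krippendorff.alpha(reliability_data=reliability_data,
                           level_of_measurement="interval")
print(f"Krippendorff's alpha = {alpha:.3f}")
```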


References

[1] L. Finkelstein, E. Gabrilovich, Y. Matias, E. Rivlin, Z. Solan, G. Wolfman, and E. Ruppin. Placing search in context: The concept revisited. ACM Transactions on Information Systems, 20(1), 2002.


For more, see our full paper, Possible Confounds in Word-based Semantic Similarity Test Data, accepted in CSCW 2017.

Malay Bhattacharyya
Department of Information Technology
Indian Institute of Engineering Science and Technology,
Shibpur
malaybhattacharyya@it.iiests.ac.in

Yoshihiko Suhara
MIT Media Lab, Recruit Institute of Technology
suharay@recruit.ai

Md Mustafizur Rahman
Information Retrieval & Crowdsourcing Lab
University of Texas at Austin
nahid@utexas.edu

Markus Krause
ICSI, UC Berkeley
markus@icsi.berkeley.edu

CrowdCamp Report: Protecting Humans – Worker-Owned Cooperative Models for Training AI Systems

Artificial intelligence is widely expected to reduce the need for human labor in a variety of sectors [2]. Workers on virtual labor marketplaces unknowingly accelerate this process by generating training data for artificial intelligence systems, putting themselves out of a job.

Models such as Universal Basic Income [4] have been proposed to deal with the potential fallout of job loss due to AI. We propose a new model where workers earn ownership of the AI systems they help to train, allowing them to draw a long-term royalty from a tool that replaces their labor [3]. We discuss four central questions:

  1. How should we design the ownership relationship between workers and the AI system?
  2. How can teams of workers find and market AI systems worth building?
  3. How can workers fairly divide earnings from a model trained by multiple people?
  4. Do workers want to invest in AI systems they train?

Crowd workers gain ownership shares in the AI they help train and reap long-term monetary gains, while requesters benefit from lower initial training costs.

AI Systems Co-owned by Workers and Requesters

  • Current model (requester-owned): Under the terms of platforms like Amazon Mechanical Turk [1], the data produced (and the trained AI systems that result) are owned entirely by requesters in exchange for a fixed price paid to workers for producing that data.
  • Proposed model (worker-owned): In a cooperative model for training AI systems, workers can choose to accept a fraction of that price in exchange for shares of ownership in the resulting trained system (the smaller the fraction accepted, the larger the ownership stake); a toy sketch of this trade-off follows the list. We can imagine interested outside investors (or even workers themselves) participating in such co-ops as well, bankrolling particular projects that have a significant chance of success.
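To make the trade-off concrete, here is a toy allocation rule (our own simplification for illustration, not a mechanism from the paper): the cash a worker forgoes on a task is converted one-for-one into an equity contribution.

```python
def split_payment(task_price, accepted_fraction):
    """Toy rule: forgone cash becomes the worker's equity contribution."""
    cash = task_price * accepted_fraction
    equity_contribution = task_price - cash  # value converted into shares
    return cash, equity_contribution

# A worker taking 40% of a $1.00 task in cash contributes $0.60 of equity,
# so accepting a smaller fraction yields a larger ownership stake.
print(split_payment(1.00, 0.40))
```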

Finding and Marketing AI Systems

  • Bounties vs. marketplaces: Platforms like Kaggle and Algorithmia allow interested parties to post a bounty (reward) for a trained AI system. Risks under this model include that (1) the poster may not accept the workers’ solution, (2) the poster may choose another submission over their solution, or (3) the open call may expire. Alternately, Algorithmia also provides a marketplace enabling AI systems to earn money on a per-use basis. The main challenge here is identifying valuable problem domains with high earning potential.
  • Online vs. offline training models: In an online payment model, workers provide answers initially, and as the AI gains confidence in its predictions, work shifts from the crowd to the AI (see the routing sketch after this list). In an offline payment model, the model can be marketed once it achieves sufficiently accurate predictions, or workers could market a dataset rather than a fully trained AI system.
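A minimal sketch of such confidence-based routing, using a stand-in model and a placeholder crowd call (all names here are hypothetical):

```python
import random

CONFIDENCE_THRESHOLD = 0.9  # assumed cutoff; tune per task

class ToyModel:
    """Stand-in for a trained classifier with a calibrated confidence score."""
    def predict_with_confidence(self, task):
        return "some_label", random.random()  # placeholder prediction
    def update(self, task, label):
        pass  # placeholder online update

def ask_crowd(task):
    return "crowd_label"  # placeholder: pay a worker for this judgement

def answer_task(task, model):
    """Route a task to the AI when it is confident, otherwise to the crowd."""
    label, confidence = model.predict_with_confidence(task)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label, "ai"
    crowd_label = ask_crowd(task)
    model.update(task, crowd_label)  # the crowd answer also becomes training data
    return crowd_label, "crowd"

print(answer_task({"text": "example"}, ToyModel()))
```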

Fairly Dividing Earnings from AI Systems

  • Assigning credit: How to optimally assign credit for individual training examples is an open theoretical question. We see the opportunity for both model-specific and black-box solutions (a naive black-box example follows this list).
  • Measuring improvement: Measuring improvement to worker owned and trained AI systems will require methods that incentivize workers to provide the most useful examples, not simply ones that they may have gathered for a test set.
  • Example selection: Training examples could be selected by the AI system (active learning) or by workers. What are fair payment schemes for various kinds of mixed-initiative systems?
  • Data maintenance: Data may become stale over time, or change usefulness. Should workers be responsible for maintaining data, and what are fair financial incentives?
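As one naive black-box baseline (our illustration, not a proposal from the paper), a worker's credit could be estimated by how much a validation score drops when that worker's examples are left out:

```python
def leave_one_out_credit(examples, train_and_score):
    """Credit each worker with the score drop caused by removing their data.

    `examples` is a list of dicts with a "worker" key; `train_and_score`
    is any black-box function mapping a training set to a quality score.
    """
    full_score = train_and_score(examples)
    credits = {}
    for worker in {ex["worker"] for ex in examples}:
        subset = [ex for ex in examples if ex["worker"] != worker]
        credits[worker] = full_score - train_and_score(subset)
    return credits

# Toy usage: the "score" is just the number of distinct labels covered.
data = [{"worker": "w1", "label": "cat"}, {"worker": "w2", "label": "dog"},
        {"worker": "w2", "label": "cat"}]
print(leave_one_out_credit(data, lambda exs: len({e["label"] for e in exs})))
```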


Do Workers Want to Invest in AI Systems?

We launched a survey on Mechanical Turk (MTurk) to gauge interest, and got feedback from 31 workers.

  • On average, workers were willing to give up 25% of their income if given the chance to double it over one year. Only 3 participants said they would not be willing to give up any of their earnings, and age doesn’t seem to be a factor here.
  • When given a risk factor, over 48% chose to give up some current payment for a future reward.
  • In order to give up 100% of their current earnings, workers needed to be able to make back 3 times their invested amount.
  • 45% of workers reported not being worried at all about AI taking over their jobs.


References

[1] Amazon Mechanical Turk. 2014. Participation Agreement. Retrieved November 4, 2016 from https://www.mturk.com/mturk/conditionsofuse.

[2] Executive Office of the President National Science and Technology Council Committee on Technology. October 2016. Preparing for the Future of Artificial Intelligence.

[3] Anand Sriraman, Jonathan Bragg, Anand Kulkarni. 2016. Worker-Owned Cooperative Models for Training Artificial Intelligence. Under review.

[4] Wikipedia. Basic Income. https://en.m.wikipedia.org/wiki/Basic_income

Anand Sriraman, TCS Research – TRDDC, Pune, India
Jonathan Bragg, University of Washington, USA
Anand Kulkarni, University of California, Berkeley, USA

CrowdCamp 2016: Understanding the Human in the Loop

Report on CrowdCamp 2016: The 7th Workshop on Rapidly Iterating Crowd Ideas, held in conjunction with AAAI HCOMP 2016. Held November 3, 2016 in Austin, TX.

Organizers: Markus Krause (UC Berkeley), Praveen Paritosh (Google), and Adam Tauman Kalai (Microsoft Research)

Human computation and crowdsourcing as a field investigates aspects of the human in the loop. Consequently, we tend to use metaphors of computer science to describe human phenomena. These phenomena, however, have been studied by fields such as sociology and psychology for a very long time. Ignoring these fields not only blocks our access to valuable information but also results in oversimplified models that we then try to satisfy with artificial intelligence.

We focused this Crowdcamp on methodologically recognizing the human in the loop, by paying more attention to human factors in task design, and borrowing methodologies from scientific fields relying on human instruments, such as survey design, psychology, and sociology.

We believe that this is necessary for and will foster: 1) raising the bar for AI research, by facilitating more natural human datasets that capture human intelligence phenomena more richly, 2) raising the bar for human computation methodology for collecting data via human instruments, and 3) improving the quality of life and unleashing the potential of crowdworkers by taking human cognitive, behavioral, and social factors into consideration.

This year’s CrowdCamp featured some new concepts. Besides having a theme, we also held a pre-workshop social event. The idea of the event was to get together and discuss ideas in an informal and cheerful setting. We found this very helpful for breaking the ice, forming groups, and preparing ideas for the camp. It helped keep us focused on the tasks without sacrificing social interaction.

We think the pre-workshop social event really helped inspire participants to get to work right away the next day. We are aware of at least one work-in-progress paper submitted within 24 hours of the workshop! We are sure there are even more great results in the individual group reports published on this blog.

We expect to publish all of the data sets we collected within the next week or so; please check back in a few days to see more of the results of our workshop. A forthcoming issue of AAAI magazine will include an extended version of this report. If you have feedback on the theme of this year’s CrowdCamp, you may find some further points to ruminate on there. Feel free to share feedback directly or by commenting on this blog post.

Thanks to the many awesome teams that participated in this year’s CrowdCamp, and stay tuned: blog posts from each team describing their projects will follow this workshop overview post in the coming days.