Community-Based Bayesian Aggregation Models for Crowdsourcing

In a typical crowdsourcing classification scenario, we wish to classify a number of items based on noisy or biased labels provided by multiple crowd workers with varying levels of expertise, skill and attitude. To obtain accurate aggregated labels, we must be able to assess the accuracy and bias of each worker who contributed labels. Ultimately, these estimates of worker accuracy should be integrated within the process that infers the items’ true labels.

Prior work on the data aggregation problem in crowdsourcing led to an expressive representation of a worker’s accuracy in the form of a latent worker confusion matrix. This matrix expresses the probability of each possible labelling outcome for a specific worker conditioned on each possible true label of an item. This matrix reflects the labelling behaviour of a given user, who may, for example, be biased towards a particular label range. See the example below for a classification task with three label classes (-1,0,1).

[Figure: example confusion matrices of a bad worker (left) and a good worker (right)]
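To make the confusion-matrix representation concrete, here is a small numeric sketch. The matrix values are invented for illustration: each row is the conditional distribution over a worker's reported label given the true label, so a reliable worker has a near-diagonal matrix, while a biased worker piles probability mass onto a preferred label.

```python
import numpy as np

labels = [-1, 0, 1]

# Hypothetical confusion matrices: rows index the true label, columns the
# observed label. Each row is p(observed | true), so rows sum to 1.
good_worker = np.array([
    [0.90, 0.08, 0.02],
    [0.05, 0.90, 0.05],
    [0.02, 0.08, 0.90],
])
biased_worker = np.array([   # biased towards the label +1
    [0.40, 0.20, 0.40],
    [0.10, 0.30, 0.60],
    [0.05, 0.10, 0.85],
])

for cm in (good_worker, biased_worker):
    assert np.allclose(cm.sum(axis=1), 1.0)

# Probability that the biased worker reports +1 when the true label is 0:
p = biased_worker[labels.index(0), labels.index(1)]
print(p)  # 0.6
```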





In CommunityBCC, we take a further modelling step by adding a latent worker type variable, which we call community. Communities represent similarity patterns among the workers’ confusion matrices. Thus, we assume that the workers’ confusion matrices are not completely random, but rather that they tend to follow some underlying clustering patterns – such patterns are readily observable by plotting the confusion matrices of workers as learned by BCC. See this example from a dataset with three-point scale labels (-1, 0, 1):


The CommunityBCC model is designed to encode the assumptions that (i) the crowd is composed of an unknown number of communities, (ii) each worker belongs to one of these communities and (iii) each worker’s confusion matrix is a noisy copy of their community’s confusion matrix. The factor graph of the model is shown below and the full generative process is described in the paper (details below).


How to find the number of communities
For a given dataset, we can find the optimal number of communities using standard model selection. In particular, we can perform a model-evidence search over a range of community counts: assuming the community count lies within a range of 1..x, we run CommunityBCC for each count in this range and compute the model evidence of each. This computation can be done efficiently using approximate inference via message passing. For an example, take a look at computing model evidence for model selection with the Infer.NET probabilistic programming framework.
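The selection loop itself is simple; here is a language-agnostic sketch (the actual implementation uses Infer.NET in C#, and the `log_evidence` values below are invented stand-ins for the scores that approximate inference would return):

```python
# Hypothetical log model evidence for each candidate community count. In
# practice each value would come from running CommunityBCC with that count
# and querying the inference engine for the model evidence.
def log_evidence(n_communities):
    scores = {1: -1520.4, 2: -1498.7, 3: -1490.2, 4: -1493.9, 5: -1501.3}
    return scores[n_communities]

# Search the range 1..5 and keep the count with the highest evidence.
candidate_counts = range(1, 6)
best = max(candidate_counts, key=log_evidence)
print(best)  # 3
```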

We tested our CommunityBCC model on four different crowdsourced datasets and our results show that it provides a number of advantages over BCC, Majority Voting (MV) and Dawid and Skene’s Expected Maximization (EM) method.

  • CommunityBCC converges faster to the highest classification accuracy using fewer labels. See the figure below, where we iteratively select labels for each dataset.
  • The model provides useful information about the number of latent worker communities. See the figure below showing the communities and the percentage of workers estimated by CommunityBCC in each of the four datasets.

To learn more about Community-Based Bayesian Aggregation Models for Crowdsourcing, take a look at the paper:

Matteo Venanzi, John Guiver, Gabriella Kazai, Pushmeet Kohli, and Milad Shokouhi, Community-Based Bayesian Aggregation Models for Crowdsourcing, in Proceedings of the 23rd International World Wide Web Conference, WWW2014, Best paper runner up, ACM, April 2014

Full code for this model
The full C# implementation of this model is described in this post where you can download and try out its Infer.NET code. You are welcome to experiment with the model and provide feedback.

Matteo Venanzi, University of Southampton
John Guiver, Microsoft
Gabriella Kazai, Microsoft
Pushmeet Kohli, Microsoft
Milad Shokouhi, Microsoft










More than Liking and Bookmarking? Towards Understanding Twitter Favouriting Behaviour

Twitter is a widely used micro-blogging platform that offers its users a variety of different features to engage with contacts in their social network and the content they produce. One of these features is the favouriting function: a small, star-shaped icon displayed at the bottom of every tweet.


The usage of favouriting has strongly increased over the years, but in contrast to other Twitter features, such as retweeting or hashtags, favouriting has not been, to date, the focus of any rigorous scientific investigation.

Our work presents an initial study of favouriting behaviour. In particular, we focus on the motivations people have for favouriting a tweet. We approach this question via a large-scale survey, which queried 606 Twitter users on the frequency with which they exhibit particular behaviours, including how often they make use of the favourite button. Moreover, two free-form questions asked users about the reasons why they use this function and what they hope to achieve when doing so.

Interestingly, only 65% (395 participants) of our respondents reported knowing about the favouriting feature. On the one hand, 26.8% of these participants stated that they never favourite a tweet. On the other hand, 36.1% reported favouriting regularly, and 5% of participants even reported doing so multiple times per day.


The main result of our study is a coding scheme or classification of 25 heterogeneous reasons for using the favouriting feature. The table below shows the complete coding scheme along with frequency information, detailing how often each code appeared in the participants’ answers.


Our findings show that motivations behind favouriting can be grouped into two major use cases:

  • (A) favouriting is used as a response or reaction to the tweet or its metadata, e.g., by liking it [A3]. Another prominent example is the ego favouriter [A4.2], who favourites a tweet when he or she is mentioned in it.
  • (B) favouriting is used for a specific purpose or to fulfill a function, e.g., by bookmarking the tweet [B1] in the favourites list. Another example would be agreeing with the author [B2.1], which can be interpreted as a digital fist bump or nod, as a form of unwritten communication [B2].

All in all, we can see that the favouriting feature is heavily re-purposed, revealing unsupported user needs and interesting behaviour.

For a more detailed explanation of the codes and example statements, see our full paper, More than Liking and Bookmarking? Towards Understanding Twitter Favouriting Behaviour.

Florian Meier, Chair for Information Science, University of Regensburg, Germany
David Elsweiler, Chair for Information Science, University of Regensburg, Germany
Max L. Wilson, Mixed Reality Lab, University of Nottingham, United Kingdom

Social influence in not-so-social media: Linguistic style in online reviews

Language is not only the means through which we express our thoughts and opinions, it also conveys a great deal of social information about ourselves and our relationships to others. Linguistic accommodation is often observed in face-to-face and technology-mediated encounters.

The social identity approach is typically invoked to explain such phenomena: we adjust our language patterns in order to be more in sync with the patterns of others with whom we identify. What happens though, in a social medium that isn’t really all that social? Do we still observe evidence of influence on participants’ linguistic style?

We studied reviewers’ language patterns at TripAdvisor review forums, where there is no direct interaction between participants. We identified several stylistic features that deviate from the medium’s “house style,” in the sense that their use is very rare, for example:

  • Second person voice (only 7% of reviews in our data set incorporate this feature)
  • Emoticons (3%)
  • Markers of British vocabulary (3%)

We examined the hypothesis that reviewers are more likely to incorporate unusual features in their reviews when they are exposed to them in their local context (i.e., the preceding reviews submitted on the same attraction). Our hypothesis was supported for most of the features we examined.

For instance, the figure below shows the probability of a reviewer writing in the second person voice as a function of increasing exposure to this feature. Specifically, the horizontal axis shows the proportion of the 7 immediately preceding reviews manifesting the feature; the vertical axis is the proportion of current reviews incorporating the feature, given the extent of exposure. It is clear that with increasing exposure to the unusual feature, the reviewer is more likely to deviate from the general “house style,” and follow suit with the previous reviews. In fact, beyond a given level of exposure, it becomes almost certain that the current review will also manifest the rare feature.
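The exposure statistic behind such a figure can be sketched as follows, assuming we have each attraction's reviews in chronological order as 0/1 flags marking use of the rare feature (the function name is ours; the 7-review window mirrors the local context described above):

```python
from collections import defaultdict

def adoption_by_exposure(reviews, window=7):
    """reviews: chronological 0/1 flags, one per review, marking whether the
    review uses the rare feature. Returns a map from exposure level (fraction
    of the preceding `window` reviews using the feature) to the observed
    adoption rate at that exposure level."""
    counts = defaultdict(lambda: [0, 0])  # exposure -> [uses, total]
    for i in range(window, len(reviews)):
        exposure = sum(reviews[i - window:i]) / window
        counts[exposure][0] += reviews[i]
        counts[exposure][1] += 1
    return {e: uses / total for e, (uses, total) in counts.items()}

# Toy sequence, purely illustrative:
rates = adoption_by_exposure([0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0])
```

In the real study, plotting `rates` against exposure for a rare feature would produce a curve like the one in the figure.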


Our paper presents experiments on 12 such linguistic features, and offers preliminary evidence that even in the absence of direct, repeated interaction between social media participants, linguistic accommodation can occur. Thus, herding behaviors in language may come about through the process of reading and writing alone.

Audience design offers a possible explanation for our observations. It may be that due to the lack of direct interaction at TripAdvisor, participants form a perception of their audience based primarily on the previously contributed reviews, adjusting their writing style accordingly. This explanation resonates with recent work on the particular properties of social media audiences (e.g., the imagined audience and context collapse.)

However, further work must tease out the possible influence of external factors, such as attraction-specific or seasonal characteristics. The present work establishes a correlation between local context and the use of linguistic features, but not necessarily a clear-cut causal relationship.

Loizos Michael and Jahna Otterbacher, “Write Like I Write: Herding in the Language of Online Reviews,” in Proceedings of the International AAAI Conference on Weblogs and Social Media (ICWSM), 2014.

Information Overload in Social Media and its Impact on Social Contagion

Since Alvin Toffler popularized the term “Information overload” in his bestselling 1970 book Future Shock, it has become ubiquitous in modern society. The advent of social media and online social networking has led to a dramatic increase in the amount of information a user is exposed to, greatly increasing the chances of the user experiencing an information overload. Surveys show that two thirds of Twitter users have felt that they receive too many posts, and over half of Twitter users have felt the need for a tool to filter out the irrelevant posts.

Our goal is to quantitatively characterize the phenomenon of information overload and its impact on information propagation in a social network. To this end, we perform a large-scale quantitative study of information overload experienced by users in Twitter. The key insight that enables our study is that users’ information processing behaviors can be reverse engineered through a careful analysis of the times when they receive a piece of information and when they choose to forward it to other users.

We found several insights that not only reveal the extent to which users in social media are overloaded with information, but also help us in understanding how information overload influences users’ decisions to forward and disseminate information to other users:

  • We find empirical evidence of a limit on the amount of information a Twitter user produces per day; very few Twitter users produce more than ∼40 tweets/day.
  • We find no limit on the information received by Twitter users; many Twitter users follow several hundreds to thousands of other users.
  • We find a threshold rate of incoming information (∼30 tweets/hour), below which the probability that a user forwards any received tweet holds nearly constant, but above which it begins to drop substantially (figure below). We argue that the threshold rate roughly approximates the limit on users’ information-processing capacity, and it allows us to identify overloaded users.
  • We observe that if a user is overloaded, the higher the rate at which she receives information, the longer the time she takes to process and forward the information. Further, overloaded users tend to prioritize tweets from a subset of sources.
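A minimal sketch of how the ∼30 tweets/hour threshold could be used to flag overloaded users (the function names are ours, not from the paper):

```python
THRESHOLD = 30  # tweets/hour: approximate information-processing limit

def in_rate(n_tweets_received, hours_observed):
    """Average rate of incoming information for a user."""
    return n_tweets_received / hours_observed

def is_overloaded(n_tweets_received, hours_observed):
    """Flag a user whose incoming rate exceeds the processing threshold."""
    return in_rate(n_tweets_received, hours_observed) > THRESHOLD

print(is_overloaded(70, 2))  # True: 35 tweets/hour
print(is_overloaded(40, 2))  # False: 20 tweets/hour
```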

For more details, see our full paper Quantifying Information Overload in Social Media and its Impact on Social Contagions.

Manuel Gomez-Rodriguez, MPI for Intelligent Systems and MPI for Software Systems
Krishna Gummadi, MPI for Software Systems
Bernhard Schölkopf, MPI for Intelligent Systems

Methodological Debate: How much to pay Indian and US citizens on MTurk?

This is a broadcast search request (hopefully of interest to many readers of the blog), not the presentation of research results.

When conducting research on Amazon Mechanical Turk (MTurk), you always face the question of how much to pay workers. You want to be fair, to incentivize diligent work, to expedite recruiting, to sample a somewhat representative cross-section of Turkers, etc. For the US, I generally aim at $7.50 per hour, slightly more than the minimum wage in the US (although that is non-binding) and presumably slightly higher than the average wage on MTurk. Now I aim for a cross-cultural study comparing survey responses and experiment behavior of Turkers registered as residing in India with US workers. How much to pay in the US, how much in India? For the US it is easy: $7.50 * (expected duration of the HIT in minutes / 60). And India?

The two obvious alternatives are

  1. Pay the same for Indian workers as US workers: $7.50 per hour. MTurk is a global marketplace in which workers from many nations compete. It’s only fair to pay the same rate for the same work.
  2. Adjust the wage to the national price level: ~$2.50 per hour. A dollar buys more in India than in the US, so paying the same nominal rate leads to higher incentives for Indian workers and might bias sampling, effort, and results. According to The World Bank, the purchasing power parity conversion factor to market exchange ratio for India compared to the US is 0.3, so $7.50 in the US would correspond to $2.25 in India. Based on The Economist’s Big Mac index, one could argue for $2.49 (raw index) to $4.50 (adjusted index). According to Ashenfelter (2012), wages in McDonald’s restaurants in India are 6% of the wage at a McDonald’s restaurant in the US, which could translate to paying $0.45 per hour on MTurk. Given the wide range of estimates, $2.50 might be a reasonable value.
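The arithmetic behind the two alternatives, as a small sketch (the $7.50 base rate and the 0.3 PPP factor come from the discussion above; the function name is mine):

```python
US_HOURLY = 7.50
PPP_FACTOR_INDIA = 0.3  # purchasing power parity conversion factor vs. the US

def hit_payment(hourly_rate, expected_minutes):
    """Payment for a single HIT targeting a given hourly rate."""
    return hourly_rate * expected_minutes / 60

india_equal = US_HOURLY                    # alternative 1: same nominal rate
india_ppp = US_HOURLY * PPP_FACTOR_INDIA   # alternative 2: $2.25/hour

print(hit_payment(US_HOURLY, 10))  # 1.25 for a 10-minute HIT in the US
print(hit_payment(india_ppp, 10))  # 0.375 for the same HIT at the PPP rate
```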

What should be the criteria to decide and which of these two is better?

I appreciate any comments and suggestions and hope that these will be valuable to me and to other readers of Follow the Crowd.

CrisisLex: Efficiently Collecting and Filtering Tweets in Crises

Timely access to useful information during crises is critical for those forced to make life-altering decisions. To stay informed, emergency responders and affected individuals increasingly rely on social media platforms, specifically on Twitter. To obtain relevant information they typically use one of two main strategies to query Twitter:


  • Keyword-based sampling: Track the tweets that contain a set of manually identified keywords or hashtags specific to a crisis, such as #sandy for Hurricane Sandy or #bostonbombings. Yet, keywords are only as responsive as the humans curating them and, indeed, in our data, such searches returned only a fraction of the relevant tweets—only 18% to 45% of the crisis-relevant tweets were retrieved, with an average of ~33%.
  • Geo-based sampling: Track the tweets that are geo-tagged in the area of the disaster. Alas, only a small percentage of the returned tweets are actually about the disaster—only 6% to 26% are crisis-relevant, with an average of ~12.5%.

Efficiently collecting crisis-relevant information from Twitter is challenging due to the laconic language of tweets and the limitations of Twitter’s API for accessing tweets in real time (the streaming API). Twitter can be queried by content, through the use of up to 400 keywords, or by geo-location. Specifically, if both keywords and geo-locations are given, the query is interpreted as a disjunction (logical OR) of the two. This is undesirable, as the public API gives access to only 1% of the data; if the query matches more data than that, it returns a random sample. Thus, as the query becomes broader, beyond some point we start losing data.

To overcome these limitations, we built CrisisLex—a lexicon of terms that frequently appear in tweets posted during a variety of crises. By querying Twitter using CrisisLex, we obtain better trade-offs between how much relevant data we retrieve and how clean that data is. The lexicon contains terms such as:

  • damage
  • affected people
  • people displaced
  • donate blood
  • text redcross
  • stay safe
  • crisis deepens
  • evacuated
  • toll raises

CrisisLex has two main applications:

  • Increase the recall in the sampling of crisis-related messages (particularly at the start of the event), without incurring a significant loss in terms of precision.
  • Automatically learn the terms used to describe a new crisis and adapt the query with them.

Consequently, CrisisLex requires no manual intervention to define or adapt the query. This is particularly useful, as the manual identification of keywords takes time, which, in turn, may result in losing tweets due to latency. In addition, using CrisisLex not only retrieves more comprehensive sets of crisis-relevant tweets, but also helps to preserve the original distribution of message types and message sources.

For more detailed results on how we built and tested CrisisLex, please check our paper: CrisisLex: A Lexicon for Collecting and Filtering Microblogged Communications in Crises. If you want to use CrisisLex to collect tweets, and/or want to build your own lexicon for other domains (e.g., health, politics, sports), please check our code and data, available online in accordance with the terms of service of Twitter’s API.

Alexandra Olteanu, École Polytechnique Fédérale de Lausanne
Carlos Castillo, Qatar Computing Research Institute
Fernando Diaz, Microsoft Research
Sarah Vieweg, Qatar Computing Research Institute

Improving recommendation by directing the crowd’s attention

We are drowning in content. On YouTube alone, over 100 hours of video are uploaded every minute. Which of them are worth watching? Which of the thousands of news stories and discussions on Reddit are worth reading? Which Kickstarter projects are worth funding? To identify quality items, content providers aggregate opinions of many, for example by asking people to recommend interesting items, and prominently feature highly-rated content. In practice, however, peer recommendation often creates “winner-take-all” and “irrational herding” behaviors with inconsistent, biased and unpredictable outcomes in which items of similar quality end up with wildly different ratings.

Researchers from USC Information Sciences Institute and Institute for Molecular Manufacturing demonstrated that it is possible to overcome these limitations to improve the ability of crowds to identify interesting content. Due to human cognitive biases, people pay far more attention to items appearing at the top of a web page than those in lower positions. Hence, the presentation order strongly affects how people allocate attention to the available content. Using Amazon Mechanical Turk, researchers demonstrated that they can manipulate the crowd’s attention through the presentation order of items to improve peer recommendation. Specifically, the common strategy of ordering items by ratings does not accurately estimate their quality, since small early differences in ratings become amplified as people focus attention on the same set of highest-rated items.  This “rich-get-richer” effect occurs even when the ratings are not explicitly shown, but are simply used to order the items.

In contrast, ordering items by the recency of rating, much like a Twitter stream with the most recently retweeted posts at the top of the stream, leads to more robust estimates of their underlying quality and also produces less variable, more predictable outcomes. Ordering items by recency of rating is also a good choice for time-critical domains, where novelty is a factor, since continuously moving items to the top of the list can rapidly bring newer items to the crowd’s attention.
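The two presentation policies can be sketched as follows (the item fields are illustrative; note that Python's sort is stable, so ties keep their input order):

```python
# Each item is (id, rating_count, time_of_most_recent_rating).
items = [
    ("a", 12, 100), ("b", 3, 250), ("c", 7, 180), ("d", 3, 300),
]

# Policy 1: order by rating count (prone to rich-get-richer dynamics).
by_rating = sorted(items, key=lambda it: it[1], reverse=True)

# Policy 2: order by recency of rating (the more robust alternative).
by_recency = sorted(items, key=lambda it: it[2], reverse=True)

print([it[0] for it in by_rating])   # ['a', 'c', 'b', 'd']
print([it[0] for it in by_recency])  # ['d', 'b', 'c', 'a']
```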


By judiciously exposing information about the preferences of others, for example, by changing the presentation order, content providers can better leverage the “wisdom of crowds” to accelerate the discovery of quality content.

Lerman K, Hogg T (2014) Leveraging Position Bias to Improve Peer Recommendation. PLoS ONE 9(6): e98914. doi:10.1371/journal.pone.0098914


Kristina Lerman, USC Information Sciences Institute

Tad Hogg, Institute for Molecular Manufacturing

How Your Digital Footprints Reveal the Events You will Want to Attend

What drives the choice of social media users to attend certain events rather than others? The answer to this question finds vital applications in personalized event recommendations. Yet, it has only been recently that the avalanche of user generated content from location-based social services truly allows the exploration of this aspect at a relevant scale.

User check-in activity heatmap of London in the days before (left) and during (middle) the UEFA Champions League Final. Darker shaded regions denote a higher number of check-ins closer to the observed maximum among all regions during the same day. The size of the location markers in the rightmost figure is proportional to the number of check-ins at the place. Notice the significantly increased activity at the Wembley area in Northwestern London on the 28th of May 2011 when the UEFA football match was held (middle, right).

In this work we take advantage of the location broadcasts of Foursquare users to study the social and behavioral underpinnings of event participation in three metropolitan cities – London, New York and Chicago. The main challenge we address is: to what extent do temporal, spatial, and social factors influence a user’s decision to visit one event over another?

Word clouds of the words used in the names of the places and place types for several events where Foursquare users check in: (a)-(b) London, (c)-(d) New York, (e)-(f) Chicago.

Not surprisingly, we confirm that social factors in their various manifestations are the dominant players when it comes to event preferences.

  • Event popularity, which can be related to forces of social contagion, dominates the factors in London. We find that there are a few massively popular events in cities such as the Royal Wedding in London where more participants are lured to the crowd by forces reminiscent of gregariousness and preferential attachment.
  • An explicit social filtering that checks whether friends are visiting the event tops the results in New York and Chicago. This complementary finding highlights even more the social nature of events and the gravitational aspect of friendship. If your friends are attending an event, with a high likelihood you will be joining them as a part of a social group.
  • The friends’ visited place types, such as bars, theaters or stadiums, and the activities associated with them are also indicative of the users’ event preferences. We model this assumption by computing attraction scores towards events in a socio-spatial graph that connects users, place types and events. The modelling proves especially suitable for recommending niche content, i.e., events which are more appealing to a specific group rather than the general audience.

For more, see our full paper, The Call of the Crowd: Event Participation in Location-based Social Services.
Petko Georgiev, University of Cambridge
Anastasios Noulas, University of Cambridge
Cecilia Mascolo, University of Cambridge

The good, the nerd or the beautiful: who should I choose to work with me?

During our lives, we perform collaborative tasks in a wide and diverse range of activities, such as selecting students to participate in a school project, hiring employees for a company or picking players for a friendly football match.

Given this context, we ask: what factors influence such decisions, i.e., which factors determine whether someone is selected or rejected for a given collaborative task?


Without much thought, one could answer this fundamental question by saying that a person’s skill at the task determines whether she/he will be selected for a collaboration. Although we agree that proficiency definitely plays an important role in the decision, we again ask: is proficiency the only determinant factor? If not, is proficiency even the main factor?

In a very careful and particular experiment conducted in a classroom of undergrad students, we mixed data from an offline questionnaire with Facebook data to reveal a number of interesting and sometimes surprising findings:

  • the most skilled students were not always preferred;

  • a number of social features extracted from Facebook (see the table below), such as the strength of the friendship, the popularity of the individual on Facebook, whether she is extroverted, and her similarity with other students, are more informative than grades for determining the willingness of students to work together.


Our findings show:

  • the importance of building up a wide and diverse personal profile when the aim is to be selected for a given collaborative task;

  • that online social network data can indicate whether or not two individuals would like to work together and, as is well known, social chemistry is desirable for achieving the maximum performance of a team;

  • a potential to leverage several online applications, such as team and collaboration recommendation systems that highlight potentially fruitful collaborations and avoid pairing individuals with potentially conflicting relationships.

Douglas D. Castilho, Universidade Federal de Minas Gerais, Brazil

Pedro O.S. Vaz de Melo, Universidade Federal de Minas Gerais, Brazil

Daniele Quercia, Yahoo! Labs, Barcelona

Fabricio Benevenuto, Universidade Federal de Minas Gerais, Brazil


The Tweets They are a-Changin’: Evolution of Twitter Users and Behavior

Over the years, we have seen significant amounts of research on Twitter, due to the ease of access to large amounts of data. However, most studies typically focus on data from a small period of time, generally ranging from a few weeks to a few months. Given that Twitter has evolved significantly since its founding in 2006, this makes it hard to interpret prior results or to project where Twitter is headed.

Our work aims to quantify the evolution of Twitter itself, focusing on the public Twitter ecosystem. There are two main contributions of our work: First, we collect a dataset of over 37 billion tweets spanning over seven years. Second, we quantify how the users, their behavior, and the site as a whole have evolved. Below, we highlight a few of our results; the paper contains many more results as well as details on the datasets that we use.


  • While Twitter has grown significantly, it has also seen a large number of users leave the platform. Today, we see that almost 33% of the user population is inactive, over 6% have been suspended, and 2% of users have deleted their accounts.


  • We observe Twitter spreading over the globe; the fraction of tweets from the U.S. and Canada has dropped from over 80% to 32% today. Additionally, there has been a massive increase in the diversity of languages used on the platform. The figure above shows this evolution for both user-provided locations and tweet geo-tags.


  • We can quantify the rise of malicious activity on Twitter, including both follower spam (we see a massive increase in follower counts in 2011 and 2012) and trending-topic hashtag spam (we see a spike in tweets with many hashtags in 2009).


  • We can observe users quickly adopting platform enhancements by Twitter. Before Twitter introduced native retweets, only 5% of tweets were retweets; today, it is over 27%.


  • Twitter has shifted from a primarily-mobile system (based on SMS) to a primarily-desktop system (based on the web site) and back to a primarily-mobile system (based on smartphone apps). Today, over 50% of tweets come from mobile devices.

We hope that our findings will help researchers to better understand the Twitter platform and to more clearly interpret prior results. We make all of our analysis available online to the research community (to the extent allowed by Twitter’s Terms of Service).

Yabing Liu, Northeastern University
Chloe Kliman-Silver, Brown University
Alan Mislove, Northeastern University