Report: CMO-BIRS Workshop on Models and Algorithms for Crowds and Networks

The Banff International Research Station (BIRS), along with the Casa Matemática Oaxaca (CMO), generously sponsored a four-day workshop on Models and Algorithms for Crowds and Networks, held in Oaxaca, Mexico, from August 29 to September 1, 2016. It was a stimulating few days of tutorials, conversations, and research meetings in a tranquil environment, free from our routine daily responsibilities. Our goal was to help find common ground and research directions across the multiple subfields of computer science that use crowds, including making connections between crowds and networks.

More than a year ago, Elisa Celis, Panos Ipeirotis, Dan Weld, and I, Yiling Chen, proposed the workshop to BIRS. It was accepted in September 2015. Lydia Chilton later joined us and provided incredible insight and leadership in running the Research-a-Thon at the workshop. Twenty-eight researchers from North America, India, and Europe attended. We mingled, exchanged ideas and perspectives, and worked together closely during the workshop.

The workshop featured nine excellent high-level, tutorial-style talks spanning many areas of computer science related to models, crowds, and networks:

  • Auction theory for crowds,
  • Design, crowds and markets,
  • Random walk and network properties,
  • Real-time crowdsourcing,
  • Decision making at scale: a practical perspective,
  • The collaboration and communication networks within the crowd,
  • Mining large-scale networks,
  • Crowd-powered data management, and
  • Bandits in crowdsourcing.

Videos of several of these talks are now available, so take a look if you get a chance.

Outside of the talks, much of our time was spent in small groups participating in a Research-a-Thon, similar to CrowdCamp, a crowdsourcing hack-a-thon run at several HCI conferences. People teamed up and worked on projects of their choosing over a period of two and a half days. It was my first experience of a Research-a-Thon, and I was totally sold. It worked as follows:

  • Each participant gave a brief pitch of two project ideas.
  • The group did an open brainstorming session and “speed dating” for exchanging project ideas.
  • Six teams were formed and set off to explore their respective problems.
  • At the end of the Research-a-Thon, teams came back and shared their progress.

The groups made productive use of their two and a half days: one formalized a social sampling model and proved initial results on group-level behavior, another built a prototype in which a user can ask the crowd to search for information in their emails while preserving the privacy of the content, and a third had already launched its MTurk experiment. (I wish I could be this productive all the time!) In the next few blog posts, several of the teams will share their Research-a-Thon findings with readers of this blog.

Following up on a CCC workshop on Mathematical Foundations for Social Computing, we also held a visioning discussion about the long-term future of the field. Participants collectively identified the following five directions and problems that they believe are important for the healthy growth of the field:

  • Identifying a quantifiable grand challenge problem. Running a grand challenge can be one of the best ways to push the frontier of research on crowds and networks.
  • Comparisons, benchmarks, and reproducibility. It has been difficult to compare research results, and hence difficult to know whether progress has been made. This has led to a desire for benchmarks for comparing research, as well as formal good-practice guidelines and ideas on how to increase the reproducibility of research in this field.
  • Theory, guarantees, and formal models. Participants recognized the benefits and challenges of developing formal models and theoretical guarantees for systems that involve humans. Some fields, such as economics, have had enormous success despite such challenges; one suggestion was to identify reachable goals toward formalizing models and theoretical approaches.
  • Human-interpretable components. Many visions of joint human-machine systems treat humans as interchangeable building blocks and map computational models onto them. More progress could potentially be made if we changed our perspective and tried to make the components of systems more interpretable to humans.
  • The future of the crowd. The excellent paper The Future of Crowd Work, published in 2013, continues to capture both the concerns about and the promise of crowd work.

The next blog post is from the Crowdsorcery team, discussing their Research-a-Thon project. Stay tuned!