CrowdCamp 2016: Understanding the Human in the Loop

Report on CrowdCamp 2016: The 7th Workshop on Rapidly Iterating Crowd Ideas, held in conjunction with AAAI HCOMP 2016 on November 3, 2016, in Austin, TX.

Organizers: Markus Krause (UC Berkeley), Praveen Paritosh (Google), and Adam Tauman Kalai (Microsoft Research)

Human computation and crowdsourcing as a field investigates aspects of the human in the loop. As a result, we often use metaphors from computer science to describe human phenomena. These phenomena, however, have been studied by other fields such as sociology and psychology for a very long time. Ignoring these fields not only cuts us off from valuable insights but also leads to oversimplified models that we then try to satisfy with artificial intelligence.

We focused this CrowdCamp on methodologically recognizing the human in the loop: paying more attention to human factors in task design and borrowing methodologies from scientific fields that rely on human instruments, such as survey design, psychology, and sociology.

We believe that this is necessary for and will foster: 1) raising the bar for AI research, by facilitating more natural human datasets that capture human intelligence phenomena more richly; 2) raising the bar for human computation methodology for collecting data via human instruments; and 3) improving the quality of life and unleashing the potential of crowd workers by taking human cognitive, behavioral, and social factors into consideration.

This year’s CrowdCamp featured some new concepts. Besides having a theme, we also held a pre-workshop social event. The idea was to get together and discuss ideas in an informal and cheerful setting. We found this very helpful for breaking the ice, forming groups, and preparing ideas for the camp. It helped keep us focused on the tasks without sacrificing social interaction.

We think the pre-workshop social event really helped inspire participants to get to work right away the next day. We are aware of at least one work-in-progress paper submitted within 24 hours of the workshop! We are sure there are even more great results in the individual group reports published on this blog.

We expect to publish all of the datasets we collected within the next week or so, so please check back in a few days to see more results from our workshop. A forthcoming issue of AI Magazine will include an extended version of this report. If you have feedback on the theme of this year’s CrowdCamp, you might find some further points in there to ruminate on. Feel free to share feedback directly or by commenting on this blog post.

Thanks to the many awesome teams that participated in this year’s CrowdCamp, and stay tuned: blog posts from each team describing their particular project will follow this workshop overview post in the coming days.

About the author

Markus Krause

Markus is a visiting scholar at UC Berkeley. In his research, he investigates the intersection of work and learning at scale. He is involved in projects exploring self-directed, self-organized, and personalized online education at scale. He also worked as a professional game designer and art director in the gaming industry. Part of his research investigates learning as a core element of play. His Moocita project aims at providing monetary independence to students in alternative forms of education, such as MOOCs, and connecting them to job opportunities online and offline.