Analyzing naturally crowdsourced instructions at scale

“The proportion of ingredients is important, but the final result is also a matter of how you put them together,” said Alain Ducasse, one of only two chefs in the world with 21 Michelin stars. In fact, for cooking professionals like chefs, cooking journalists, and culinary students, it is important to understand not only the culinary characteristics of the ingredients but also the diverse cooking processes. For example, categorizing different approaches to cooking a dish or identifying usage patterns of particular ingredients and cooking methods are crucial tasks for them.

However, these analysis tasks require extensive browsing and comparison, which is very demanding. Why? Because there are thousands of recipes available online, even for something as seemingly simple as a chocolate chip cookie.

Figure 1: Search results for “chocolate chip cookie” on leading recipe websites.

In essence, these recipes are naturally crowdsourced instructions for a shared goal, like making a chocolate chip cookie. They exist in diverse contexts (no oven, no gluten, from scratch, etc.), with diverse levels of detail and length, different levels of required skill and expertise, and different writing styles.

We devised a computational pipeline that (a) constructs a graph representation capturing the semantic structure of each recipe written in natural language text, using machine-assisted human computation techniques, and (b) compares structural and semantic similarities between every pair of recipes.
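To make this concrete, here is a minimal sketch, assuming a toy tree of cooking actions over ingredients and a simple set-overlap similarity; the names (RecipeNode, recipe_similarity) and the Jaccard measure are illustrative stand-ins, not the annotation pipeline or distance metric used in the actual system.

```python
# A minimal sketch (not RecipeScape's actual pipeline): represent a recipe as
# a small tree of cooking actions over ingredients, then compare two recipes
# with a simple set-overlap similarity. RecipeNode, node_labels, and
# recipe_similarity are illustrative names; the Jaccard measure stands in for
# the richer structural comparison used in practice.
from dataclasses import dataclass, field


@dataclass
class RecipeNode:
    action: str                                    # e.g., "mix", "bake"
    ingredients: tuple = ()                        # ingredients used in this step
    children: list = field(default_factory=list)   # earlier steps feeding into this one


def node_labels(node):
    """Collect (action, ingredient) labels from the whole tree."""
    labels = {(node.action, ing) for ing in node.ingredients} or {(node.action, None)}
    for child in node.children:
        labels |= node_labels(child)
    return labels


def recipe_similarity(a, b):
    """Jaccard overlap of action/ingredient labels between two recipe trees."""
    la, lb = node_labels(a), node_labels(b)
    return len(la & lb) / len(la | lb)


# Two toy chocolate chip cookie recipes: a classic baked one and a no-oven one.
classic = RecipeNode("bake", children=[
    RecipeNode("mix", ("flour", "butter", "sugar", "chocolate chips"))])
no_oven = RecipeNode("chill", children=[
    RecipeNode("mix", ("oats", "butter", "sugar", "chocolate chips"))])

print(round(recipe_similarity(classic, no_oven), 2))  # 0.43
```

The real comparison is richer than simple label overlap, but the overall shape is the same: a structured representation per recipe, followed by pairwise comparison across the whole collection.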

On top of this pipeline, we built an analytics dashboard that invites cooking professionals to analyze a large collection of recipes for a single dish.

Figure 2: RecipeScape is an interface for analyzing cooking processes at scale, with three main visualization components: (a) RecipeMap provides clusters of recipes with respect to their structural similarities, where each point on the map is a clickable recipe; (b) RecipeDeck provides in-depth views and pairwise comparisons of recipes; (c) RecipeStat provides usage patterns of cooking actions and ingredients.

The interface aims to support analysis at three levels: (a) statistical information about individual ingredients and cooking actions, (b) structural comparison of recipes to examine representative and outlier recipes, and (c) clusters of recipes to examine the fundamental similarities and differences among the various approaches.
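As a rough illustration of the clustering view in (c), one could embed the pairwise distance matrix in two dimensions and group the resulting points; the use of scikit-learn's MDS and k-means below is an assumption made for illustration, not a description of RecipeScape's actual implementation.

```python
# A hedged sketch of how a RecipeMap-style cluster view could be derived from
# pairwise recipe distances: embed the distance matrix in 2-D, then cluster.
# The choice of MDS and k-means (via scikit-learn) is an assumption for
# illustration, not RecipeScape's actual implementation.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.manifold import MDS

# Symmetric pairwise distances between five toy recipes
# (in practice, e.g., 1 - similarity from the comparison pipeline).
D = np.array([
    [0.0, 0.2, 0.7, 0.8, 0.3],
    [0.2, 0.0, 0.6, 0.7, 0.4],
    [0.7, 0.6, 0.0, 0.1, 0.8],
    [0.8, 0.7, 0.1, 0.0, 0.9],
    [0.3, 0.4, 0.8, 0.9, 0.0],
])

# 2-D layout: each recipe becomes a point that a map view could make clickable.
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(D)

# Group recipes into clusters of broadly similar cooking approaches.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coords)
print(labels)  # e.g., [0 0 1 1 0]
```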

Our user study with 4 cooking professionals and 7 culinary students suggests that RecipeScape broadens browsing and analysis capabilities. For example, users reported that RecipeScape enables exploration of both common and exotic recipes, comparison of substitute ingredients and cooking actions, discovery of fundamentally different approaches to cooking a dish, and mental simulation of diverse cooking processes.

Our approach is not bound to cooking; it is applicable to other how-to domains such as software tutorials, makeup instructions, furniture assembly, and more.

This work was presented at CHI 2018 in Montreal as “RecipeScape: An Interactive Tool for Analyzing Cooking Instructions at Scale.” For more details, please visit our project website: https://recipescape.kixlab.org/


ConceptScape: Collaborative Concept Mapping for Video Learning

While video has become a widely adopted medium for online education, existing interface and interaction designs provide limited support for content navigation and learning. To support concept-driven navigation and comprehension of lecture videos, we present ConceptScape, a system that uses interactive concept maps to enable concept-oriented learning with lecture videos. Initial results from our evaluation of a prototype show that watching a lecture video with an interactive concept map can support comprehension during learning, prompt more reflection afterward, and provide a shortcut for referring back to specific sections.

But how do we generate interactive concept maps for numerous online lecture videos at scale? We designed a crowdsourcing workflow to capture multiple workers’ common understanding of a lecture video and represent workers’ understandings as an interactive concept map for future learners. The main challenge we are tackling here is to elicit workers’ individual reflections while guiding them to reach consensus on components of a concept map.

ConceptScape’s crowdsourcing workflow includes three stages with eight detailed steps. The first stage, Concept and Timestamp Generation, includes three steps: finding concepts, pruning concepts, and adjusting timestamps. The second stage, Concept Linking, includes three steps: linking concepts, supplementing links, and pruning links. The last stage, Link Labeling, includes two steps: nominating labels and voting.

Our crowdsourcing workflow consists of three main stages, each reflecting one of the three key cognitive activities in concept map construction: listing concepts, linking concepts, and explaining relationships. Stages are further divided into steps with different instructions in order to guide workers to focus on specific activities in the concept mapping process. Overall, our key design choices are:

  • Each stage is designed to yield different types of output, and within a stage, multiple steps are added for quality control.
  • Each stage has a unique interface and instruction designed to collect specific components of the concept map.
  • In each step, workers contribute in parallel (for efficiency) while our aggregation algorithm maintains sequential step-transitions (for quality control).
  • A worker is guided to work on a specific micro concept mapping activity in each step (e.g., pruning duplicate concepts), but may choose to work on other concept mapping activities as they see fit (e.g., adding more concepts or changing a timestamp).
  • Allowing flexible work across multiple concept mapping activities lets us collect extra contributions that reflect a wider range of worker perspectives; on the other hand, later steps adopt a more restrictive method for aggregating these extra contributions, since we intend the concept map to converge (a simplified aggregation sketch follows this list).
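To illustrate the flavor of such step-wise aggregation, here is a hypothetical sketch that keeps a candidate item only when enough workers in a step endorsed it; the majority threshold, function names, and example concepts are illustrative assumptions rather than ConceptScape's exact algorithm.

```python
# A hypothetical sketch of step-wise aggregation (not ConceptScape's exact
# algorithm): keep a candidate item -- concept, link, or label -- only if
# enough workers in the step endorsed it. The 0.5 threshold and the example
# concepts are illustrative assumptions.
from collections import Counter


def aggregate(worker_submissions, min_support=0.5):
    """worker_submissions: one set per worker, containing the items that
    worker kept or nominated in this step."""
    counts = Counter(item for sub in worker_submissions for item in sub)
    n_workers = len(worker_submissions)
    return {item for item, c in counts.items() if c / n_workers >= min_support}


# Output of a 'finding concepts' step: three workers' concept lists.
found = [
    {"recursion", "base case", "stack frame"},
    {"recursion", "base case", "call stack"},
    {"recursion", "call stack"},
]

# Items with majority support move on to the next step ('pruning concepts').
print(sorted(aggregate(found)))  # ['base case', 'call stack', 'recursion']
```

A more permissive threshold in early steps and a stricter one in later steps matches the design intent above: widen the pool of contributions first, then converge.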

To evaluate our approach of crowdsourcing concept maps, we recruited participants from Amazon’s Mechanical Turk to generate concept maps for three lecture videos and compared our results to expert-generated concept maps and ones generated by individual novices. We evaluated:

  • The holistic quality of concept maps: Third-party evaluators, blinded to experimental conditions, rated the overall quality of each concept map on a 1-10 scale.
  • The component quality of concept maps: Evaluators scored three components (concepts, links, and link phrases) separately, and we summed the three scores into a total score.

Our results show that ConceptScape generated concept maps of quality comparable to expert-generated concept maps, in terms of both holistic and component-level evaluation. ConceptScape also generated concept maps with higher component-level quality than individual novices did.

To see whether task flexibility adds value, we further looked into the amount of extra contributions from workers. We found that workers indeed contributed more than they were assigned to do.

Beyond crowdsourcing interactive concept maps for education, our crowdsourcing workflow design may also inform efforts to crowdsource open-ended work that requires higher-order thinking, such as tasks demanding cognitive analysis and creativity.

For more, see our full paper, ConceptScape, or our video.
Ching Liu, NTHU
Juho Kim, KAIST
Hao-chuan Wang, UC Davis