While video has become a widely adopted medium for online education, existing interface and interaction designs provide limited support for content navigation and learning. To support concept-driven navigation and comprehension of lecture videos, we present ConceptScape, a system that uses interactive concept maps to enable concept-oriented learning with lecture videos. Initial results from our evaluation of a prototype show that watching a lecture video with an interactive concept map can support comprehension during learning, prompt more reflection afterward, and provide a shortcut for referring back to specific sections.
But how do we generate interactive concept maps for numerous online lecture videos at scale? We designed a crowdsourcing workflow that captures multiple workers’ common understanding of a lecture video and represents that understanding as an interactive concept map for future learners. The main challenge we tackle here is eliciting workers’ individual reflections while guiding them to reach consensus on the components of a concept map.
Our crowdsourcing workflow consists of three main stages, each of which reflects one of the three key cognitive activities in concept map construction: listing concepts, linking concepts, and explaining relationships. Stages are further divided into steps with different instructions in order to guide workers to focus on specific activities in the concept mapping process. Overall, our key design choices are:
- Each stage is designed to yield different types of output, and within a stage, multiple steps are added for quality control.
- Each stage has a unique interface and instructions designed to collect specific components of the concept map.
- In each step, workers contribute in parallel (for efficiency) while our aggregation algorithm maintains sequential step-transitions (for quality control).
- A worker is guided to work on a specific micro concept mapping activity in a step (e.g., pruning duplicate concepts), but may choose to work on other concept mapping activities as they see fit (e.g., adding more concepts or changing the timestamp).
- By allowing flexible work across multiple concept mapping activities, we collect extra contributions that reflect a wider range of worker perspectives; in later steps, however, we adopt a more restrictive aggregation method for handling these extra contributions, since we intend the concept map to converge.
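The parallel-contribution, sequential-aggregation pattern above can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the function name, the vote-threshold rule, and the example concepts are all our assumptions.

```python
from collections import Counter

def aggregate_step(submissions, keep_threshold):
    """Aggregate parallel worker submissions for one step.

    submissions: a list of concept sets, one per worker.
    keep_threshold: fraction of workers who must include an item
        for it to survive into the next step.
    """
    votes = Counter(item for sub in submissions for item in set(sub))
    cutoff = keep_threshold * len(submissions)
    return {item for item, count in votes.items() if count >= cutoff}

# Early steps use a permissive threshold to gather wide input...
early = aggregate_step([{"recursion", "base case"},
                        {"recursion", "stack"},
                        {"recursion", "base case"}],
                       keep_threshold=0.5)

# ...while later steps raise the threshold, so extra contributions
# need broad agreement and the concept map converges.
late = aggregate_step([early | {"tail call"}, early, early],
                      keep_threshold=0.9)
```

The key design point sketched here is that workers within a step never block one another, while the step boundary acts as a quality gate that filters what flows into the next step.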
To evaluate our approach of crowdsourcing concept maps, we recruited participants from Amazon’s Mechanical Turk to generate concept maps for three lecture videos and compared our results to expert-generated concept maps and ones generated by individual novices. We evaluated:
- The holistic quality of concept maps: Third-party evaluators, blinded to experimental conditions, rated each concept map's overall quality on a 1-10 scale.
- The component quality of concept maps: Evaluators scored three components (concepts, links, and link phrases) separately, and we summed the three scores into a total score.
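The component-level measure described above amounts to a simple sum of per-component ratings. As a sketch (the dictionary keys, score values, and scale here are illustrative, not actual study data):

```python
def component_score(scores):
    """Sum per-component ratings (concepts, links, link phrases)
    into a single component-quality total for one concept map."""
    return scores["concepts"] + scores["links"] + scores["link_phrases"]

# Hypothetical ratings for one concept map from one evaluator.
total = component_score({"concepts": 8, "links": 7, "link_phrases": 6})
```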
Our results show that ConceptScape generated concept maps with quality comparable to expert-generated concept maps, in terms of both holistic and component evaluation. ConceptScape also generated concept maps with higher component-level quality than those generated by individual novices.
To see whether task flexibility adds value, we further examined the amount of extra contribution from workers. We found that workers indeed contributed more than they were assigned to do.
Beyond crowdsourcing interactive concept maps for education, our workflow design may also help inform those who aim to crowdsource open-ended work that requires higher-order thinking, such as tasks that demand cognitive analysis and creativity.
For more, see our full paper, ConceptScape, or our video.
Ching Liu, NTHU
Juho Kim, KAIST
Hao-chuan Wang, UC Davis