Understanding the Production and Consumption of Clickbaits in Twitter

With the growing shift towards news consumption through social media sites like Twitter, most traditional as well as new-age media houses promote their news stories by tweeting about them. The competition for user attention on such platforms has led many media houses to craft catchy, sensational tweets to attract more users – a practice known as clickbaiting. Examples of clickbaits include “17 Reasons You Should Never Ever Use Makeup”, “These Dads Quite Frankly Just Don’t Care What You Think”, or “10 reasons why Christopher Hayden was the worst ‘Gilmore Girls’ character”.

On one hand, the success of such clickbaits in attracting visitors to news websites has helped several digital media companies mushroom. On the other hand, there are concerns regarding the news value these articles offer, prompting demands from many quarters for a blanket ban. We believe that all associated angles, especially the clickbait readers, need to be considered before enforcing any drastic ban.

In this paper, we analyze the readership of clickbaits on Twitter. We collect around 12 million tweets over eight months covering both clickbait and non-clickbait (or traditional) tweets, and then investigate the following research questions:

  • How are clickbait tweets different from non-clickbait tweets?
  • How do clickbait production and consumption differ from non-clickbaits?
  • Who are the consumers of clickbait and non-clickbait tweets?
  • How do the clickbait and non-clickbait consumers differ as a group?

The presence of different entities in both clickbait and non-clickbait tweets.

Our investigation reveals several interesting insights on the production of clickbaits. For example, clickbait tweets include more entities such as images, hashtags, and user mentions, which help capture consumers’ attention. Additionally, we find that a higher percentage of clickbait tweets convey positive sentiments compared to non-clickbait tweets. As a result, clickbait tweets tend to have a wider and deeper reach in their consumer base than non-clickbait tweets.

We also make multiple interesting observations regarding the consumers of clickbaits. For example, clickbait tweets are consumed more by women than men, and by younger people than the consumers of non-clickbaits. Additionally, clickbait consumers engage more with one another. On the other hand, non-clickbait consumers are more reputed in the community and have a relatively larger follower base than clickbait consumers.

Overall, we make two major contributions in this paper: (i) to our knowledge, this is the first attempt to understand the consumers of clickbaits, and (ii) while doing so, we also make the first effort to contextualize the rise of clickbaits with the tabloidization of news. We believe that this paper can foster further research that goes beyond the negative aspects of clickbaits alone, and help bring in a more holistic view of the online news spectrum.

For more, see our full paper, Tabloids in the Era of Social Media? Understanding the Production and Consumption of Clickbaits in Twitter, at CSCW 2018.

Abhijnan Chakraborty, Indian Institute of Technology Kharagpur, India
Rajdeep Sarkar, Indian Institute of Technology Kharagpur, India
Ayushi Mrigen, Indian Institute of Technology Kharagpur, India
Niloy Ganguly, Indian Institute of Technology Kharagpur, India

Let’s Agree to Disagree: Fixing Agreement Measures for Crowdsourcing

In the context of micro-task crowdsourcing, each task is usually performed by several workers. This allows researchers to leverage measures of agreement among workers on the same task to estimate the reliability of the collected data and to better understand the answering behavior of participants.

While many measures of agreement between annotators have been proposed, they are known to suffer from many problems and abnormalities. In this work, we identify the main limits of the existing agreement measures in the crowdsourcing context, both by means of toy examples as well as with real-world crowdsourcing data, and propose a novel agreement measure based on probabilistic parameter estimation which overcomes such limits. We validate our new agreement measure and show its flexibility as compared to the existing agreement measures.


The majority of agreement measures are borrowed from data reliability theory, where the reliability of a set of grouped measurements is assessed by comparing the inter-group and intra-group variability, and where the judgments are typically made by a fixed set of assessors. In the crowdsourcing context, these measures suffer from several problems when they are used to estimate agreement rather than data reliability:

  1. The variability of judgments is typically higher when the judgments concentrate around the center of the scale. This problem is intrinsic to finite-scale judgments and can lead to overestimating disagreement over items where the truth concentrates around the scale boundaries (the toy simulation after this list illustrates the effect).
  2. The values around which judgments concentrate (if any) can differ from item to item. This can lead to overestimating the expected disagreement and thus increase the chance that the data is considered random.
  3. For some items a ground truth (e.g., ‘gold questions’ in crowdsourcing) might be present, that is, a value around which judgments are expected to concentrate. This information is typically not used by classic agreement measures.
  4. The global variability-based correction by chance leads to many idiosyncrasies in the existing measures, making them hard to use in a crowdsourcing setting.
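
As a toy illustration of the first problem, the following simulation (ours, not the paper’s) adds the same Gaussian noise to a true value at the center and at the boundary of a 1–5 scale and compares the resulting spread:

```python
# Toy simulation: on a bounded 1-5 scale, the same noise produces less spread
# when the true value sits at the boundary, because out-of-scale judgments
# get clipped back onto the scale.
import numpy as np

rng = np.random.default_rng(0)

def observed_spread(true_value, n=10_000, scale=(1, 5), noise_sd=1.0):
    judgments = np.clip(true_value + rng.normal(0, noise_sd, n), *scale)
    return judgments.std()

print(observed_spread(3))  # truth at the center   -> larger spread
print(observed_spread(5))  # truth at the boundary -> smaller spread
```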

Our goal in this paper is to address the aforementioned issues, and to build a framework more suitable to estimate worker agreement over a group of tasks in a crowdsourcing context.


The intuition behind our measure, Φ, is connected with the definition of agreement: we consider agreement to be the amount of concentration around a data value. Conversely, if the data do not concentrate around a value, we have disagreement (negative agreement in our measure), which can be more or less strong depending on how polarized the different opinions are. In more detail, our approach can be described as fitting a distribution to the histogram of the judgments and then measuring the dispersion of that distribution.

It is important to note that the fitted distribution has to be general enough to capture the main behaviors that might occur: flat (random judgments), bell-shaped (agreement), J-shaped (agreement around a value on the boundary of the scale), and U-shaped (disagreement), as shown in the following figure.

Agreement model examples

At the same time, the chosen distribution should have a minimal number of parameters, to avoid overfitting. For this reason, we use a Beta distribution to perform the fit: Φ is a transformed parameter of the Beta distribution fitted to the histogram of the collected answers. This parameter is related to the standard deviation of the fitted distribution, with the difference that here we account for the finiteness of the rating scale, and thus adjust for the tendency toward lower dispersion when the data concentrate around a value at the boundary of the rating scale. For example, if we imagine a scenario where assessors add random Gaussian noise to the ground truth when making a judgment, we can immediately see that the dispersion will be lowest when the ground truth is at the boundary of the scale, because Gaussian noise that would produce a judgment outside the boundary gets clipped.
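
As a rough illustration of this idea (not the paper’s exact Φ estimator; the rescaling, function name, and normalization below are ours), one can fit a Beta distribution to rescaled judgments and map its dispersion to a score that is positive for concentrated data and negative for polarized data:

```python
# Sketch: Beta-fit-based agreement for judgments on a 1-5 scale.
import numpy as np
from scipy import stats

def rough_agreement(judgments, scale_min=1, scale_max=5):
    # Rescale ordinal judgments into the open interval (0, 1) for the Beta fit.
    x = (np.asarray(judgments, dtype=float) - scale_min) / (scale_max - scale_min)
    x = np.clip(x, 1e-3, 1 - 1e-3)
    a, b, _, _ = stats.beta.fit(x, floc=0, fscale=1)  # fit the two shape parameters
    sd = stats.beta.std(a, b)                         # dispersion of the fitted Beta
    # Normalize so that a flat Beta(1, 1) maps to ~0, concentrated data to
    # positive values, and U-shaped (polarized) data to negative values.
    return 1 - sd / np.sqrt(1 / 12)

print(rough_agreement([4, 4, 5, 4, 4]))     # concentrated -> positive
print(rough_agreement([1, 1, 5, 5, 1, 5]))  # polarized    -> negative
```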

The strength of our approach becomes apparent when it is applied to a group of items to be judged: in a relevance judgment task, each item i is allowed to have a different average relevance value, while the agreement among workers is defined as the common Φ that best explains the judgment data.

This solves the problems that arise in other agreement measures when correcting by chance using the dispersion of the whole dataset as a normalizing factor.
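
A minimal sketch of this group-level idea, under the simplifying assumption that the shared agreement can be read off a common Beta concentration parameter fitted jointly with per-item means (this is not the paper’s actual inference procedure):

```python
# Sketch: one shared concentration (agreement) must explain all items jointly,
# while each item keeps its own mean.
import numpy as np
from scipy import optimize, stats

def joint_agreement(items):
    """items: list of 1-D arrays of judgments already rescaled into (0, 1)."""
    def neg_loglik(params):
        kappa = np.exp(params[0])              # shared concentration (> 0)
        mus = 1 / (1 + np.exp(-params[1:]))    # one mean per item, in (0, 1)
        ll = 0.0
        for mu, x in zip(mus, items):
            a, b = mu * kappa, (1 - mu) * kappa
            ll += stats.beta.logpdf(x, a, b).sum()
        return -ll
    x0 = np.zeros(1 + len(items))              # start at kappa = 1, means = 0.5
    res = optimize.minimize(neg_loglik, x0, method="Nelder-Mead")
    return np.exp(res.x[0])                    # larger value = stronger agreement

items = [np.array([0.70, 0.75, 0.80]),         # concentrated around 0.75
         np.array([0.20, 0.25, 0.30])]         # concentrated around 0.25
print(joint_agreement(items))
```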


In the following figure we show a representation of the inference results for the judgments of 17 documents. We generated a small synthetic dataset where the first document has an outlier on the right boundary, and the other 16 documents have clear central agreement (documents 2-5 are replicated four times to obtain 16 documents with higher agreement). In the figure it can be seen that the model is forced to find the single agreement level (dispersion of the Beta distribution) that collectively explains all the data: while document 1 alone would have been fitted with a high-disagreement (U-shaped) Beta, the most probable Beta for the whole dataset is one in which the first document simply has an outlier. This reflects the way we perceive agreement as humans, especially with a small set of data samples, and allows a robust estimation of agreement for groups of documents.



You can test our tool (as shown in the snapshot below) and access the source code at this link.

Online tool

For more information, see our full paper, Let’s Agree to Disagree: Fixing Agreement Measures for Crowdsourcing.

Alessandro Checco, Information School, University of Sheffield


Report: Second GroupSight Workshop on Human Computation for Image and Video Analysis

What would be possible if we could accelerate the analysis of images and videos, especially at scale? This question is generating widespread interest across research communities as diverse as computer vision, human computer interaction, computer graphics, and multimedia.

The second Workshop on Human Computation for Image and Video Analysis (GroupSight) took place in Quebec City, Canada on October 24, 2017, as part of HCOMP 2017. The goal of the workshop was to promote greater interaction between this diversity of researchers and practitioners who examine how to mix human and computer efforts to convert visual data into discoveries and innovations that benefit society at large.

This was the second edition of the GroupSight workshop to be held at HCOMP. It was also the first time the workshop and conference were co-located with UIST. A website and blog post on the first edition of GroupSight are also available.

The workshop featured two keynote speakers in HCI doing research on crowdsourced image analysis. Meredith Ringel Morris (Microsoft Research) presented work on combining human and machine intelligence to describe images to people with visual impairments (slides). Walter Lasecki (University of Michigan) discussed projects using real-time crowdsourcing to rapidly and scalably generate training data for computer vision systems.

Participants also presented papers along three emergent themes:

Leveraging the visual capabilities of crowd workers:

  • Abdullah Alshaibani and colleagues at Purdue University presented InFocus, a system enabling untrusted workers to redact potentially sensitive content from imagery. (Best Paper Award)
  • Kyung Je Jo and colleagues at KAIST presented Exprgram (paper, video). This paper introduced a crowd workflow that supports language learning while annotating and searching videos. (Best Paper Runner-Up Award)
  • GroundTruth (paper, video), a system by Rachel Kohler and colleagues at Virginia Tech, combined expert investigators and novice crowds to identify the precise geographic location where images and videos were created.

Kurt Luther hands the best paper award to Alex Quinn.

Creating synergies between crowdsourced human visual analysis and computer vision:

  • Steven Gutstein and colleagues from the U.S. Army Research Laboratory presented a system that integrated a brain-computer interface with computer vision techniques to support rapid triage of images.
  • Divya Ramesh and colleagues from CloudSight presented an approach for real-time captioning of images by combining crowdsourcing and computer vision.

Improving methods for aggregating results from crowdsourced image analysis:

  • Jean Song and colleagues at the University of Michigan presented research showing that tool diversity can improve aggregate crowd performance on image segmentation tasks.
  • Anuparna Banerjee and colleagues at UT Austin presented an analysis of ways that crowd workers disagree in visual question answering tasks.

The workshop also had break-out groups where participants used a bottom-up approach to identify topical clusters of common research interests and open problems. These clusters included real-time crowdsourcing, worker abilities, applications (to computer vision and in general), and crowdsourcing ethics.

A group of researchers talking and seated around a poster board covered in sticky notes.

For more, including keynote slides and papers, check out the workshop website.

Danna Gurari, UT Austin
Kurt Luther, Virginia Tech
Genevieve Patterson, Brown University and Microsoft Research New England
Steve Branson, Caltech
James Hays, Georgia Tech
Pietro Perona, Caltech
Serge Belongie, Cornell Tech


Crowdsourcing the Location of Photos and Videos

How can crowdsourcing help debunk fake news and prevent the spread of misinformation? In this paper, we explore how crowds can help expert investigators verify the claims around visual evidence they encounter during their work.

A key step in image verification is geolocation, the process of identifying the precise geographic location where a photo or video was created. Geotags or other metadata can be forged or missing, so expert investigators will often try to manually locate the image using visual clues, such as road signs, business names, logos, distinctive architecture or landmarks, vehicles, and terrain and vegetation.

However, sometimes there are not enough clues to make a definitive geolocation. In these cases, the expert will often draw an aerial diagram, such as the one shown below, and then try to find a match by analyzing miles of satellite imagery.

An aerial diagram of a ground-level photo, and the corresponding satellite imagery of that location.

Source: Bellingcat

This can be a very tedious and overwhelming task – essentially finding a needle in a haystack. We proposed that crowdsourcing might help, because crowds have good visual recognition skills and can scale up, and satellite image analysis can be highly parallelized. However, novice crowds would have trouble translating the ground-level photo or video into an aerial diagram, a process that experts told us requires lots of practice.

Our approach to solving this problem was right in front of us: what if crowds also use the expert’s aerial diagram? The expert was going to make the diagram anyway, so it’s no extra work for them, but it would allow novice crowds to bridge the gap between ground-level photo and satellite imagery.

To evaluate this approach, we conducted two experiments. The first experiment looked at how the level of detail in the aerial diagram affected the crowd’s geolocation performance. We found that in only ten minutes, crowds could consistently narrow down the search area by 40-60%, while missing the correct location only 2-8% of the time, on average.
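
As a hypothetical illustration of how such a reduction could be computed (the aggregation rule and names below are ours, not necessarily the paper’s), one could keep only the satellite grid cells that enough workers flag as possible matches:

```python
# Sketch: each worker marks grid cells of the satellite map as "possibly
# matching"; cells kept by fewer than min_votes workers are ruled out,
# shrinking the remaining search area.
from collections import Counter

def remaining_area(worker_selections, total_cells, min_votes=2):
    votes = Counter(cell for cells in worker_selections for cell in cells)
    kept = {cell for cell, v in votes.items() if v >= min_votes}
    return len(kept) / total_cells

workers = [{1, 2, 3}, {2, 3, 4}, {2, 3}]   # cell ids each worker considered possible
print(f"{remaining_area(workers, total_cells=10):.0%} of the area remains")
```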


In our second experiment, we looked at whether to show crowds the ground-level photo, the aerial diagram, or both. The results confirmed our intuition: the aerial diagram was best. When we gave crowds just the ground-level photo, they missed the correct location 22% of the time – not bad, but probably not good enough to be useful, either. On the other hand, when we gave crowds the aerial diagram, they missed the correct location only 2% of the time – a game-changer.

Bar chart showing the diagram condition performed significantly better than the ground photo condition.

For next steps, we are building a system called GroundTruth (video) that brings together experts and crowds to support image geolocation. We’re also interested in ways to synthesize our crowdsourcing results with recent advances in image geolocation from the computer vision research community.

For more, see our full paper, Supporting Image Geolocation with Diagramming and Crowdsourcing, which received the Notable Paper Award at HCOMP 2017.

Rachel Kohler, Virginia Tech
John Purviance, Virginia Tech
Kurt Luther, Virginia Tech

Call for Participation: GroupSight 2017

The Second Workshop on Human Computation for Image and Video Analysis (GroupSight) will be held on October 24, 2017 at AAAI HCOMP 2017 in Québec City, Canada. The workshop promises an exciting mix of people and papers at the intersection of HCI, crowdsourcing, and computer vision.

The aim of this workshop is to promote greater interaction between the diversity of researchers and practitioners who examine how to mix human and computer efforts to convert visual data into discoveries and innovations that benefit society at large. It will foster in-depth discussion of technical and application issues for how to engage humans with computers to optimize cost/quality trade-offs. It will also serve as an introduction to researchers and students curious about this important, emerging field at the intersection of crowdsourced human computation and image/video analysis.

Topics of Interest

  • Crowdsourcing image and video annotations (e.g., labeling methods, quality control, etc.)
  • Humans in the loop for visual tasks (e.g., recognition, segmentation, tracking, counting, etc.)
  • Richer modalities of communication between humans and visual information (e.g., language, 3D pose, attributes, etc.)
  • Semi-automated computer vision algorithms
  • Active visual learning
  • Studies of crowdsourced image/video analysis in the wild

Submission Details

Submissions are requested in the following two categories: Original Work (not published elsewhere) and Demo (describing new systems, architectures, interaction techniques, etc.). Papers should be submitted as 4-page extended abstracts (including references) using the provided author kit. Demos should also include a URL to a video (max 6 min). Multiple submissions are not allowed. Reviewing will be double-blind.
Previously published work from a recent conference or journal can also be considered, but the authors should submit an unrevised copy of their published work; reviewing for such submissions will be single-blind. Email submissions to

Important Dates

August 23, 2017 (extended from August 14): Deadline for paper submission (5:59 pm EDT)
August 25, 2017: Notification of decision
October 24, 2017: Workshop (full-day)


ReTool: Interactive Microtask and Workflow Design through Demonstration

Recently, there has been an increasing number of crowdsourcing microtasks that require freeform interactions directly on the content (e.g., drawing bounding boxes over specific objects in an image, or marking specific time points on a video clip). However, existing crowdsourcing platforms, such as Amazon Mechanical Turk (MTurk) and CrowdFlower (CF), do not provide direct support for designing interactive microtasks. To design interactive microtasks, especially ones with workflows, requesters have to use programming-based approaches such as Turkit and the AMT SDKs. The need for programming skills, however, sets a significant barrier for many requesters.


To lower the barrier to designing and deploying interactive microtasks with workflows, we developed ReTool, a web-based tool that simplifies the process by applying the “Programming by Demonstration” (PbD) concept. In our context, PbD refers to the mechanism by which requesters design interactive microtasks with workflows by giving an example of how the tasks can be completed.
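
As a toy sketch of this idea (the data model and names below are illustrative, not ReTool’s actual implementation), a recorded demonstration can be viewed as a sequence of interaction events that get grouped into microtasks forming a workflow:

```python
# Sketch: group consecutive demonstrated events of the same action type into
# one microtask; the ordered microtasks form the generated workflow.
from dataclasses import dataclass
from typing import List

@dataclass
class InteractionEvent:
    action: str   # e.g. "drag_box", "click", "type_text"
    target: str   # the content region the action applied to

@dataclass
class Microtask:
    instruction: str
    events: List[InteractionEvent]

def demonstration_to_workflow(events: List[InteractionEvent]) -> List[Microtask]:
    workflow, current = [], []
    for ev in events:
        if current and ev.action != current[-1].action:
            workflow.append(Microtask(f"Perform '{current[-1].action}' as demonstrated", current))
            current = []
        current.append(ev)
    if current:
        workflow.append(Microtask(f"Perform '{current[-1].action}' as demonstrated", current))
    return workflow

demo = [InteractionEvent("drag_box", "image"), InteractionEvent("drag_box", "image"),
        InteractionEvent("type_text", "label field")]
print([m.instruction for m in demonstration_to_workflow(demo)])
```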

Working with ReTool, a requester can design and publish microtasks in four main steps:

  • Project Creation: The requester creates a project and uploads a piece of sample content to be crowdsourced.
  • Microtask and Workflow Generation: Depending on the type (text or image) of the sample content, a content-specific workspace is generated. The requester then performs a sequence of interactions (e.g. tapping-and-dragging, clicking, etc.) on the content within the workspace. The interactions are recorded and analyzed to generate interactive microtasks with workflows.
  • Previewing Microtask Interface & Workflow: The requester can preview microtasks and workflows, edit instructions and other properties (e.g. worker number), add verification tasks and advanced workflows (conditional and looping workflow) at this step.
  • Microtask Publication: The requester uploads all content to be crowdsourced and receives a URL for accessing the available microtasks. The link can be published to crowdsourcing marketplaces or social network platforms.

We conducted a user study to find out how potential requesters with varying programming skills use ReTool. We compared ReTool with a lower-bound baseline, the MTurk online design tool, as an alternative approach. We recruited 14 participants from different university faculties, taught them what crowdsourcing is and how to design microtasks using both tools, and then asked them to complete three design tasks. The results show that ReTool is able to help not only programmers, but also non-programmers and new crowdsourcers, design complex microtasks and workflows in a fairly short time.

For more details, please see our full paper ReTool: Interactive Microtask and Workflow Design through Demonstration published at CHI 2017.

Chen Chen, National University of Singapore
Xiaojun Meng, National University of Singapore
Shengdong Zhao, National University of Singapore
Morten Fjeld, Chalmers University of Technology

Respeak: Voice-based Crowd-powered Speech Transcription System

Recent years have seen the rise of crowdsourcing marketplaces like Amazon Mechanical Turk and CrowdFlower that provide people with additional earning opportunities. However, low-income, low-literate people in resource-constrained settings are often unable to use these platforms because they face a complex array of socioeconomic barriers, literacy constraints and infrastructural challenges. For example, 97% of households in India do not have access to an Internet-connected computer, 47% of the population does not have access to a bank account, and around 72% are not literate in English.

To provide additional earning opportunities to low-income people with limited language and digital literacy skills, we designed, built, and evaluated Respeak – a voice-based, crowd-powered speech transcription system that combines the benefits of crowdsourcing with automatic speech recognition (ASR) to transcribe audio files in local languages like Hindi and in localized accents of well-represented languages like English. Respeak allows people to use their basic spoken language skills – rather than typing skills – to transcribe audio files. Respeak employs a multi-step approach involving a sequence of segmentation and merging steps:

  • Segmentation: The Respeak engine segments an audio file into utterances that are each three to six seconds long.
  • Distribution to Crowd Workers: Each audio segment is sent to multiple Respeak smartphone application users who listen to the segment and re-speak the same words into the application in a quiet environment.
  • Transcription using ASR: The application uses Google’s built-in Android speech recognition API to generate an instantaneous transcript for the segment, albeit with some errors. The user then submits this transcript to the Respeak engine.
  • First-stage Merging: For each segment, the Respeak engine combines the transcripts obtained from different users into one best-estimate transcript using multiple string alignment and majority voting (a simplified sketch of this merging step appears below). If errors are randomly distributed, aligning transcripts generated by multiple people reduces the word error rate (WER). Each submitted transcript earns a Respeak user a mobile talktime reward that depends on the similarity between the submitted transcript and the best-estimate transcript generated by Respeak. Once a user’s cumulative earnings reach 10 INR, a mobile talktime transfer of the same value is processed for them.
  • Second-stage Merging: Finally, the engine concatenates the best-estimate transcripts of all segments to yield the final transcript.
Respeak System Overview
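
As a simplified stand-in for the first-stage merge (Respeak uses multiple string alignment; the sketch below just majority-votes over token positions after padding, which only approximates proper alignment):

```python
# Sketch: merge several re-spoken transcripts of the same segment by taking
# the most common token at each position.
from collections import Counter
from itertools import zip_longest

def merge_transcripts(transcripts):
    token_lists = [t.lower().split() for t in transcripts]
    merged = []
    for position in zip_longest(*token_lists, fillvalue=None):
        tokens = [tok for tok in position if tok is not None]
        word, _ = Counter(tokens).most_common(1)[0]   # majority vote per position
        merged.append(word)
    return " ".join(merged)

print(merge_transcripts([
    "the cat sat on the mat",
    "the cat sat on a mat",
    "the cat sad on the mat",
]))  # -> "the cat sat on the mat"
```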

We conducted three cognitive experiments with 24 students in India to evaluate:

  • How audio segment length affects content retention and cognitive load experienced by a Respeak user
  • The impact on content retention and cognitive load when segments are presented in a sequential vs. random order
  • Whether speaking or typing proves to be a more efficient and usable output medium for Respeak users.

The experiments revealed that audio files should be partitioned by detecting natural pauses, yielding segments less than six seconds long. These segments should be presented sequentially to ensure higher retention and lower cognitive load on users. Lastly, speaking outperformed typing not only on speed but also on WER, suggesting that users should complete micro-transcription tasks by speaking rather than typing.

We then deployed Respeak in India for one month with 25 low-income college students. The Respeak engine segmented 21 widely varying audio files in Hindi and Indian English into 756 short segments. Collectively, Respeak users performed 5464 microtasks to transcribe 55 minutes of audio content, and earned USD 46. Respeak produced transcriptions with an average WER of 10%. The cost of speech transcription was USD 0.83 per minute. In addition to providing airtime to users, Respeak also improved their vocabulary and pronunciation skills. The expected payout for an hour of their time was 76 INR (USD 1.16) – one-fourth of the average daily wage rate in India. Since voice is a natural and accessible medium of interaction, Respeak has a strong potential to be an inclusive and accessible platform. We are conducting more deployments with low-literate people and blind people to examine the effectiveness of Respeak.

For more details, please read our full paper published at CHI 2017 here.

Aditya Vashistha, University of Washington
Pooja Sethi, University of Washington
Richard Anderson, University of Washington

Communicating Context to the Crowd


Crowdsourcing has traditionally consisted of short, independent microtasks that require no background. The advantage of this strategy is that work can be decoupled and assigned to independent workers. But this strategy struggles to support increasingly complex tasks, such as writing or programming, that are not independent of their context.

For instance, imagine that you ask a crowd worker to write a biography for a speaker you’ve invited to your workshop. After the work is completed you realize that the biography is written in an informal, personal tone. This is not technically wrong, it’s just not what you had in mind. You realize that you could have added a line to your task description asking for a formal/academic tone. However, there are countless nuances to a writing task that can’t all be predicted beforehand. This is what we are interested in: the context of a task, meaning the collection of conditions and tacit information surrounding it (e.g., the fact that the biography is needed for an academic workshop).



OUR APPROACH IS TO ITERATE: do some work, communicate with the requester, and edit to fix errors. How can we support communication between the requester and crowd workers to maximize benefits while minimizing costs? If achieved, this goal would create the conditions for crowd work that is more complex and integrated than currently possible.

The main takeaway is to support this communication through structured microtasks. We have designed five different mechanisms for structured communication:


We compare these methods in two studies, the first measuring the benefit of each mechanism, and the second measuring the costs to the requester (e.g. cognitive demand). We found that these mechanisms are most effective when writing is in the early phases. For text that is already high quality, the mechanisms become less effective and can even be counter-productive.


We also found that the mechanisms had varying benefits depending on the quality of the initial text. Early on, when content quality is poor, the requester needs to communicate major issues. Therefore, identifying the “main problem” was most effective at improving the writing. Later, for average-quality content, the different mechanisms have relatively similar added value.

Finally, we found that the cost of a mechanism for the requester is not always correlated with the value that it adds. For instance, for average quality paragraphs, commenting/editing was very costly but did not provide more value than simply highlighting.


For more, see our full paper, Communicating Context to the Crowd for Complex Writing Tasks.
Niloufar Salehi, Stanford University
Jaime Teevan, Microsoft Research
Shamsi Iqbal, Microsoft Research
Ece Kamar, Microsoft Research

Who makes a topic trending on Twitter?

Users on social media sites like Twitter are increasingly relying on crowdsourced recommendations called Trending Topics to find important events and breaking news stories. Topics (mostly keywords, e.g., hashtags) are recommended as trending when they exhibit a sharp spike in popularity, i.e., their usage by the crowds suddenly jumps at a particular time.

While prior works have attempted to classify and predict Twitter trending topics, in this work, we ask a different question — who are the users who make different topics worthy of being recommended as trending?

Specifically, we analyse the demographics of the crowds promoting different trending topics on Twitter. By promoters of a topic, we refer to the users who posted on the topic before it became trending, thereby contributing to its selection as a trend.

We gathered extensive data from Twitter from July to September, 2016, including millions of users posting on thousands of topics, both before and after the topics became trending. We inferred three demographic attributes for these Twitter users — their gender, race (Asian / Black / White), and age — from their profile photos.

Looking at the demographics of the promoters reveals interesting patterns. For instance, here are the gender and racial demographics of the promoters of some of the Twitter trends on 3rd May 2017:

  • #wikileaks: 24% women, and 76% men
  • #wednesdayWisdom: 52% women, and 48% men
  • #comey: 9% Asian, 12% Black, and 79% White
  • #BlackWomenAtWork: 15% Asian, 52% Black, and 33% White

It is evident that different trends are promoted by widely different demographic groups.
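
As a hypothetical sketch of how such demographic skew could be quantified (the reference numbers and the use of total variation distance below are illustrative, not taken from the paper):

```python
# Sketch: measure how far a trend's promoter demographics deviate from a
# reference population using total variation distance.
def divergence(promoters, reference):
    keys = set(promoters) | set(reference)
    return 0.5 * sum(abs(promoters.get(k, 0.0) - reference.get(k, 0.0)) for k in keys)

reference_population = {"women": 0.50, "men": 0.50}  # placeholder baseline, not from the paper
wikileaks_promoters = {"women": 0.24, "men": 0.76}   # promoter shares reported above
print(divergence(wikileaks_promoters, reference_population))  # 0.26 -> noticeably skewed
```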


Our analysis led to the following insights:

  • A large fraction of trending topics are promoted by crowds whose demographics are significantly different from Twitter’s overall user population.
  • We find clear evidence of under-representation of certain demographic groups among the promoters of trending topics, with middle-aged black women being the most under-represented group.
  • Once a topic becomes trending, it is adopted (i.e., posted) by users whose demographics are less divergent from the overall Twitter population, compared to the users who were promoting the topic before it became trending.
  • Topics promoted predominantly by a single demographic group tend to be of niche interest to that particular group.
  • During events of wider interest (e.g., national elections, police shootings), the topics promoted by different demographic groups tend to reflect their diverse perspectives, which could help understand the different facets of public opinion.

Try out our Twitter app to check demographics of the crowds promoting various trends.

For details, see our full paper, Who Makes Trends? Understanding Demographic Biases in Crowdsourced Recommendations, at ICWSM 2017.

Abhijnan Chakraborty, IIT Kharagpur, India and MPI-SWS, Germany
Johnnatan Messias, Federal University of Minas Gerais, Brazil
Fabricio Benevenuto, Federal University of Minas Gerais, Brazil
Saptarshi Ghosh, IIT Kharagpur, India
Niloy Ganguly, IIT Kharagpur, India
Krishna P. Gummadi, MPI-SWS, Germany

Don’t Bother Me. I’m Socializing!

IT DOES BOTHER US when we see our friends checking their smartphones while having a conversation with us. Although people want to focus on a conversation, it is hard to ignore a series of notification alarms coming from their smartphones. It is reported that smartphone users receive an average of tens to hundreds of push notifications a day [1,2]. Despite their usefulness in delivering information immediately, untimely smartphone notifications are considered a source of distraction and annoyance during social interactions.

(Left) Notifications interrupt an ongoing social interaction. (Right) Notifications are deferred to a breakpoint, in-between two activities, so that people are less interrupted by notifications.


TO ADDRESS THIS PROBLEM, we have proposed a novel notification management scheme in which the smartphone defers notifications until an opportune moment during social interactions. A breakpoint [3] is a term originating from psychology that describes a unit of time between two adjacent actions. The intuition is that there exist breakpoints at which notifications do not interrupt a social interaction, or do so only minimally.

A screenshot of the video survey. Participants are asked to respond whether this moment is appropriate to receive a notification.

TO DISCOVER SUCH BREAKPOINTS, we devised a video survey in which participants watch a typical social interaction scenario and respond whether prompted moments in the video are appropriate moments to receive smartphone notifications. Participants identified the following four types of breakpoints as appropriate in a social interaction: (1) a long silence, (2) a user leaving the table, (3) others using smartphones, and (4) a user left alone.

Types of social context detected by SCAN.

BASED ON THE INSIGHTS FROM THE VIDEO SURVEY, we designed and implemented a Social Context-Aware smartphone Notification system, SCAN, that defers smartphone notifications until a breakpoint. SCAN is a mobile application that detects social context using only built-in sensors. It also works collaboratively with the rest of the group members’ smartphones to sense collocated members, conversation, and others’ smartphone use. SCAN then classifies a breakpoint based on the social context and decides whether to deliver or defer notifications.
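
As an illustrative sketch of this decision logic (the context fields and threshold below are our assumptions, not SCAN’s actual implementation):

```python
# Sketch: deliver a notification only at one of the four breakpoint types
# identified in the video survey; otherwise defer it until a breakpoint occurs.
from dataclasses import dataclass

@dataclass
class SocialContext:
    silence_seconds: float    # length of the current conversational silence
    user_at_table: bool       # is the user physically present with the group?
    others_on_phones: bool    # are other group members using their smartphones?
    user_alone: bool          # has the user been left alone at the table?

def is_breakpoint(ctx: SocialContext, long_silence_s: float = 10.0) -> bool:
    return (ctx.silence_seconds >= long_silence_s
            or not ctx.user_at_table
            or ctx.others_on_phones
            or ctx.user_alone)

def decide(ctx: SocialContext) -> str:
    return "deliver" if is_breakpoint(ctx) else "defer"

print(decide(SocialContext(silence_seconds=12, user_at_table=True,
                           others_on_phones=False, user_alone=False)))  # deliver
```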

SCAN HAS BEEN EVALUATED on ten groups of friends in a controlled setting. SCAN detects four target breakpoint types with high accuracy (precision= 92.0%, recall= 82.5%). Most participants appreciated the value of deferred notifications and found the selected breakpoints appropriate. Overall, we demonstrated that breakpoint-based smartphone notification management is a promising approach to reducing interruptions during social interactions.

WE ARE CURRENTLY EXTENDING SCAN to apply it to various types of social interactions. We also aim to add personalized notification management and to address technical challenges such as system robustness and energy efficiency. Our ultimate goal is to release SCAN as an Android application in Google Play Store and help users to be less distracted by smartphone notifications during social interactions.

You can check out our CSCW 2017 paper to read about this work in more detail.  

“Don’t Bother Me. I’m Socializing!: A Breakpoint-Based Smartphone Notification System”. Proceedings of CSCW 2017. Chunjong Park, Junsung Lim, Juho Kim, Sung-Ju Lee, and Dongman Lee (KAIST)

[1]”An In-situ Study of Mobile Phone Notifications”. Proceedings of MobileHCI 2014. Martin Pielot, Karen Church, and Rodrigo de Oliveira.
[2] “Hooked on Smartphones: An Exploratory Study on Smartphone Overuse Among College Students”. Proceedings of CHI 2014. Uichin Lee, Joonwon Lee, Minsam Ko, Changhun Lee, Yuhwan Kim, Subin Yang, Koji Yatani, Gahgene Gweon, Kyong-Mee Chung, and Junehwa Song.
[3] “The perceptual organization of ongoing behavior”. Journal of Experimental Social Psychology 12, 5 (1976), 436–450. Darren Newtson and Gretchen Engquist.