Poorly maintained sidewalks pose considerable accessibility challenges for people with mobility impairments; however, there are currently few mechanisms for determining the accessible areas of a city a priori (i.e., before a person leaves their home). This is a significant problem: according to the most recent U.S. Census data, 30.6 million individuals have physical disabilities that affect their ambulatory activities. In our recent work, we investigated whether crowd workers from Amazon Mechanical Turk (turkers) could accurately find, label, and assess the physical accessibility of sidewalks in Google Street View (GSV) imagery.
Our overarching research goal is to build and investigate new, scalable mechanisms that combine crowdsourcing, HCI, and computer vision to determine the accessibility of the physical world. We plan to use data gathered from these approaches to build a suite of accessibility-aware mapping tools that inform governments, policy makers, and pedestrians alike about the inaccessible parts of their cities. Just as modern mapping services provide route recommendations based on traffic conditions or even an area’s historical crime rates, we aim to provide similar “smart routing” services for mobility-impaired pedestrians, tailored to their specific abilities.
Although some tools and mechanisms exist for reporting street-level issues, most are reactive rather than proactive. For example, SeeClickFix.com allows concerned citizens to report potholes, street lamp outages, and other municipal problems via a website or mobile application. Similarly, many cities in the US offer 311 non-emergency municipal services to capture and track reported issues. Our approach differs in that we actively build a knowledge base of physical-world accessibility rather than relying on reported problems. We note, however, that our approach is complementary and can work in concert with these existing tools.
For our preliminary study, we showed turkers a manually curated set of 229 static images from GSV and asked them to perform three tasks: (i) identify the location of the sidewalk accessibility problem in the GSV image; (ii) categorize the type of the problem; and (iii) evaluate how severely the problem may obstruct a person’s path. For a detailed explanation of the outlined tasks, please see the instructional video we provided above.
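To make the three subtasks concrete, the hypothetical snippet below sketches what a single turker label might record. The field names, problem categories, and severity scale here are illustrative assumptions, not our actual data schema.

```python
from dataclasses import dataclass

@dataclass
class SidewalkLabel:
    """One turker's answer for a single GSV image (illustrative only)."""
    image_id: str        # which of the curated GSV images was shown
    x: int               # pixel location of the reported problem
    y: int
    problem_type: str    # e.g., "Object in Path", "Surface Problem" (example categories)
    severity: int        # e.g., 1 (barely an obstruction) to 5 (impassable)
    worker_id: str       # anonymized turker identifier

label = SidewalkLabel(
    image_id="gsv_000123",
    x=412, y=287,
    problem_type="Object in Path",
    severity=4,
    worker_id="worker_017",
)
```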
Our study demonstrated that turkers can identify sidewalk accessibility problems in static GSV imagery with 81% accuracy without any quality controls and 93% accuracy with the addition of simple quality control schemes. As readers of this blog are likely aware, worker quality can vary significantly. In Figure 2a above, for example, turkers provided highly consistent labels marking cars as obstructing the path. In Figure 2b, on the other hand, many turkers mislabeled objects such as trees, signs, and poles as obstacles even though they are not directly in the pedestrian pathway. We tested multiple quality control methods to filter out such low-quality work. We are currently exploring methods to provide active feedback to our turkers through injected “ground truth” images.
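One simple quality control scheme of the kind mentioned above is majority voting across the turkers who labeled the same image. The sketch below is a minimal illustration of that idea rather than the exact filters used in our study, and the agreement threshold is an assumption.

```python
from collections import Counter

def majority_vote(problem_labels, min_agreement=0.5):
    """Return the most common label if enough turkers agree, else None.

    problem_labels: problem types reported by all turkers for one image.
    min_agreement: fraction of turkers that must agree (assumed threshold).
    """
    counts = Counter(problem_labels)
    top_label, votes = counts.most_common(1)[0]
    return top_label if votes / len(problem_labels) > min_agreement else None

# Five turkers label the same image; four agree, so the label is kept.
print(majority_vote(["Object in Path"] * 4 + ["Surface Problem"]))
# -> Object in Path
```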
The work we describe above is the first study in my ongoing Ph.D. thesis project and lays the foundation for future work, including:
- Collecting accessibility data beyond sidewalks, including streets, building fronts, and bus stop environments
- Incorporating computer vision algorithms to automatically locate sidewalks and sidewalk accessibility problems
- Building a volunteer-based web application where people can complete accessibility assessment tasks and help make their neighborhoods more accessible
- Building accessibility-aware mapping tools such as an accessibility score visualization and accessibility-aware routing algorithms (see the routing sketch below)
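As a rough illustration of what accessibility-aware routing could look like, the hypothetical sketch below penalizes sidewalk segments with low crowdsourced accessibility scores so that shortest-path search prefers accessible routes. The graph, scores, and penalty function are all assumptions for illustration, and networkx is used only as a convenient shortest-path library.

```python
import networkx as nx

def accessibility_weight(length_m, accessibility_score, penalty=5.0):
    """accessibility_score in [0, 1]; 1.0 means fully accessible.
    Less accessible segments cost proportionally more (assumed penalty)."""
    return length_m * (1.0 + penalty * (1.0 - accessibility_score))

G = nx.Graph()
# (node_a, node_b, segment length in meters, crowdsourced accessibility score)
segments = [("A", "B", 100, 1.0), ("B", "D", 80, 0.2),
            ("A", "C", 120, 0.9), ("C", "D", 110, 0.95)]
for u, v, length, score in segments:
    G.add_edge(u, v, weight=accessibility_weight(length, score))

# The nominally shorter path A-B-D is avoided because B-D has a low score.
print(nx.shortest_path(G, "A", "D", weight="weight"))
# -> ['A', 'C', 'D']
```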
For more, see our full paper, “Combining Crowdsourcing and Google Street View to Identify Street-level Accessibility Problems.”