Navigating through streets and within buildings might seem like a trivial activity, but it is often a challenge for people with visual impairments. Over the last few years, innovation in sensors, devices, and smartphone apps has attempted to improve universal access and make navigation easier. However, the technology is not there yet.
Current approaches still raise safety concerns: unexpected hazards from construction or vehicle placement can injure a user who would have benefited from real-time notification or help. For example, a visually impaired person typically uses a white cane as a primary mobility tool, which helps track obstacles along the path being taken. But some hazards, such as heavy hanging objects or objects protruding from a wall (artistic displays, for instance), cannot be detected with a white cane and can cause severe head injury if not accounted for. Recognizing these situations is beyond the limitations of sensors, yet easy for humans to do.
In this project, we designed an approach that helps address some of these problems by adding humans to the loop, as both sensors and actors, to assist with accessibility questions and problems. For example, in the figure below, if a user (green) has to go to a coffee shop (A or B), she can quickly query the route and use location-based services such as Twitter, or other crowdsourcing approaches, where the crowd consists of people (orange) in the neighborhood who can notify her of a problem if one exists. This can inform both the user's decision-making process and the navigation system's path recommendation.
The approach can be implemented using the workflow architecture shown below:
In this approach, an end user makes a request by setting accessibility preferences with respect to time, cost, location, and more. The request is then broadcast to people or volunteers in the neighborhood, who can respond and provide the system with updated information about the situation at the click of a button. To build this system, we envisioned using either Twitter or a custom app.
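The request-and-broadcast loop above can be sketched as a minimal data model. All names here (`AccessibilityRequest`, `Volunteer`, `broadcast`, the 1 km default radius) are our own illustrative assumptions, not an actual implementation:

```python
import math
from dataclasses import dataclass, field

@dataclass
class AccessibilityRequest:
    """A user's question, plus preferences that scope the broadcast."""
    question: str
    lat: float
    lon: float
    radius_km: float = 1.0                      # how far to broadcast (assumed default)
    responses: list = field(default_factory=list)

@dataclass
class Volunteer:
    name: str
    lat: float
    lon: float

def distance_km(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance between two points, in km."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def broadcast(request, volunteers):
    """Return the volunteers close enough to receive the request."""
    return [v for v in volunteers
            if distance_km(request.lat, request.lon, v.lat, v.lon) <= request.radius_km]
```

For example, a request placed in downtown Pittsburgh would only reach volunteers within the chosen radius; a real system would deliver it as a push notification or tweet rather than a function return value.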
Twitter Approach: As a very popular social medium, Twitter has attracted many users who could help others in their area without even receiving a direct request from a volunteer or another user. To assess feasibility, we first needed to know whether the user base in a given area is large enough and whether there are sufficient tweets from that region. We took Pittsburgh as our point of interest and calculated the average frequency of tweets. As shown below, there is on average at least one tweet every 12 seconds, a promising outcome.
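The feasibility estimate boils down to the mean interval between consecutive geotagged tweets from a region. The timestamps below are synthetic stand-ins (not our actual Pittsburgh data), used only to show the calculation:

```python
from datetime import datetime

# Synthetic timestamps standing in for geotagged tweets from one region.
timestamps = [
    datetime(2015, 3, 1, 12, 0, 0),
    datetime(2015, 3, 1, 12, 0, 9),
    datetime(2015, 3, 1, 12, 0, 22),
    datetime(2015, 3, 1, 12, 0, 30),
    datetime(2015, 3, 1, 12, 0, 48),
]

def mean_interval_seconds(times):
    """Average gap between consecutive tweets, in seconds."""
    times = sorted(times)
    gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
    return sum(gaps) / len(gaps)

print(mean_interval_seconds(timestamps))  # 12.0 for this synthetic sample
```

In practice the timestamps would come from Twitter's search or streaming API filtered to a geographic bounding box, but the interval arithmetic is the same.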
Custom Application: As part of our brainstorming and prototyping process, we also developed a homescreen app by extending the concept of Twitch Crowdsourcing. It lets users provide an answer simply by unlocking their phone. One particular use case is shown below: if a visually impaired user (VU) has a question about a nearby curb ramp or the number of steps ahead, the request becomes visible to people at the location of the VU's interest, who can make a binary decision simply by checking their phones. This keeps the entire experience seamless, with minimal cognitive load.
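A simple way to turn those one-tap yes/no answers into a usable signal is majority voting over the responses collected for a question. This is a sketch under our own assumptions (including the minimum-response threshold), not the Twitch Crowdsourcing implementation:

```python
def majority_answer(responses, min_responses=3):
    """Aggregate yes/no taps; return None until enough volunteers have answered."""
    if len(responses) < min_responses:
        return None                        # not enough signal yet
    yes = sum(1 for r in responses if r)
    return yes > len(responses) / 2        # True means "yes" wins the vote

# e.g. a hypothetical question: "Is there a curb ramp at this corner?"
print(majority_answer([True, True, False]))  # True
print(majority_answer([True]))               # None (waiting for more answers)
```

Requiring several agreeing answers before reporting back trades a little latency for robustness against a single mistaken tap.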
We believe that by using situated crowdsourcing, we can overcome the limitations of current sensor technology and real-world deployment, and better empower people with visual disabilities to navigate through buildings or cities more independently.
Rajan Vaish, Keith Wyngarden, Jingshu Chen, Brandon Cheung, and Michael S. Bernstein. 2014. Twitch crowdsourcing: crowd contributions in short bursts of time. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '14). ACM, New York, NY, USA, 3645-3654. http://dl.acm.org/citation.cfm?id=2556288.2556996