Inclusive Mobile App Design at the MTA
On designing for blind and low-vision users riding the rails.
With the installation of over 50,000 digital screens across the Metropolitan Transportation Authority’s (MTA) entire network, the MTA’s teams are creating new ways to communicate with riders in a targeted and timely manner. These screens are a front-line communication tool to showcase all sorts of content, which is exciting! However, that content wasn’t accessible to riders who are blind or low-vision.
The MTA reached out to Postlight to design and build a toolkit that could integrate with the MTA’s existing mobile app ecosystem available across iOS and Android. When approaching the design, we made it a priority to center the experience and perspectives of our core user base: blind and visually impaired passengers on the MTA.
Above all else, an inclusive design process requires communication. We began with a round of user interviews. We were fortunate to have a stakeholder on this project who is blind and who was able to connect us with other blind and low-vision advocates for interviews and testing. After many interviews with folks with accessibility needs, we were confident that the features we had initially scoped for this project were going to have a meaningful impact for riders. Here are a few key steps on our journey to the final product.
Create text alternatives for design artifacts
So many of our deliverables during the design of a mobile application or website aren’t accessible to stakeholders who are visually impaired. Flow charts built in tools like Whimsical and mockups designed in Figma or Sketch have something in common: They have no logical read order for screen readers to follow, so information within them can be difficult or impossible to understand.
We discovered a lightweight workaround for this hurdle: exporting artifacts into text-based outlines. Text documents have an easy-to-follow reading order, so even though we don’t use screen readers ourselves, it was simple to confirm that the information appeared in a logical sequence.
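For example, a hypothetical fragment of such an outline for a screen listing nearby stops might read:

Nearby Stops screen
1. Heading: Nearby Stops
2. Search field for filtering by stop name or route
3. List of stops, nearest first; each entry includes:
   a. Stop name
   b. Routes served
   c. Next arrival times
4. Activating a stop opens its detail screen

Read top to bottom, the outline carries the same hierarchy a sighted reviewer would take in from a mockup at a glance.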
Build support for native accessibility tools
During our interview phase, we learned that a majority of our testers owned iPhones, so iOS was the obvious choice. By building on iOS, we had access to the best native accessibility tools available, specifically VoiceOver, part of Apple’s suite of vision accessibility features. VoiceOver is a gesture-based screen reader built into iOS that allows users to interact with an application without seeing the UI. By using various gestures, a user can navigate through an interface, hear text read aloud, select or activate interface components, and more.
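To give a sense of what supporting VoiceOver looks like in code, here is a minimal UIKit sketch of the kind of annotations the screen reader relies on; the view and the wording of its labels are hypothetical illustrations, not code from the MTA toolkit.

import UIKit

// A minimal sketch of VoiceOver annotation in UIKit.
// The view and label text below are hypothetical examples.
final class StopStatusView: UIView {
    let alertButton = UIButton(type: .system)

    func configureAccessibility() {
        // Make sure VoiceOver can land on the button at all.
        alertButton.isAccessibilityElement = true

        // What VoiceOver speaks when the element gains focus.
        alertButton.accessibilityLabel = "Service alerts for this stop"

        // Additional context, read after a short pause.
        alertButton.accessibilityHint = "Double tap to hear current alerts."

        // Announce the element as a button.
        alertButton.accessibilityTraits = .button
    }
}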
The best way to understand an app’s VoiceOver annotation needs as a developer is to turn VoiceOver on and attempt to use the app to its full extent, taking notes along the way. We dedicated two weeks to doing this, and took several pages of notes: This button isn’t accessible; it’s unclear what this data means; this table makes me swipe too many times to get where I want to go. We then worked through these notes, adding the VoiceOver annotations and adjustments each one called for. Finally, we conducted usability tests with blind and low-vision users, who have deep experience using VoiceOver and were the true test of our implementation work. (If you’re curious about how this works and own an iPhone, you can turn VoiceOver on and try it for yourself!)
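For instance, one common fix for the “too many swipes” problem is to collapse a table cell’s separate labels into a single VoiceOver element. The sketch below is a hypothetical illustration of that pattern, not the shipped code.

import UIKit

// Hypothetical sketch: combine a cell's labels into one VoiceOver element
// so each row costs one swipe instead of three.
final class StopCell: UITableViewCell {
    let nameLabel = UILabel()
    let routesLabel = UILabel()
    let arrivalLabel = UILabel()

    func configureAccessibility() {
        // Treat the whole cell as a single element...
        isAccessibilityElement = true

        // ...and read its contents as one combined phrase.
        accessibilityLabel = [nameLabel.text, routesLabel.text, arrivalLabel.text]
            .compactMap { $0 }
            .joined(separator: ", ")
    }
}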
Expectations matter
In usability testing, we discovered that another crucial aspect of creating a useful app for users with accessibility needs is how quickly it responds with the data they’ve requested. Users navigating with VoiceOver often turn its speaking rate up far beyond the pace at which someone might listen to an audiobook, for example. This lets them move through content as fast as (if not faster than) users who can see an app’s visual interface, and that speed becomes apparent whenever the application has to call a server or other endpoint to deliver meaningful content.
Our demo app included a listing of nearby stops and stations for MTA service. In testing, we discovered that some users didn’t get the stops they expected, or didn’t get them quickly enough for the information to still be useful. We looked at the API requests for these testers and found that the issue wasn’t with the endpoints themselves but with the resources available to serve the testing group’s requests. Once we scaled those resources up, results loaded faster and the experience improved.
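Latency also matters on the client side: when results arrive asynchronously, VoiceOver users have no visual spinner to watch, so it helps to tell them when the content has landed. Here is a hypothetical sketch of that pattern using iOS’s standard UIAccessibility notifications; the view controller and fetch call are illustrative assumptions, not the toolkit’s actual API.

import UIKit

// Hypothetical sketch: announce asynchronous loading to VoiceOver users.
final class NearbyStopsViewController: UITableViewController {
    func loadStops() {
        // Let VoiceOver users know a request is in flight.
        UIAccessibility.post(notification: .announcement, argument: "Loading nearby stops")

        fetchNearbyStops { [weak self] _ in
            DispatchQueue.main.async {
                self?.tableView.reloadData()
                // Signal that the layout changed and move focus to the results.
                UIAccessibility.post(notification: .layoutChanged, argument: self?.tableView)
            }
        }
    }

    // Placeholder standing in for the real network request.
    func fetchNearbyStops(completion: @escaping ([String]) -> Void) {
        completion([])
    }
}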
Take a look at our work, and let us know what you think!
Kevin Barrett (he/him) is a Partner and Director of Engineering at Postlight. Reach out at hello@postlight.com or follow him on Twitter @kevboh.
Story published on Jun 29, 2022.