My wife and I recently sat down to watch something on Netflix after we had tucked the kids into bed. A friend had recommended Bird Box, and Netflix had been promoting the movie pretty heavily. While we were watching, my wife discovered the #BirdBoxChallenge meme, which made sense to me: you see a movie where someone navigates her day blindfolded and you want to know what that's like. It wasn't until I was in the office Monday morning that an idea was spawned. Talking to colleagues, we joked about having an office-wide Bird Box Challenge to increase empathy for our users. While the thought was interesting, I didn't think the lost productivity of our developers writing code with a screen reader would be acceptable.
It was feasible, however, for me to run an experiment of one and experience it for myself. I wanted to test two scenarios: first, what the users of an ICF Next site experience; second, how difficult it would be to develop blindfolded. I took the challenge hoping to encourage others to try it on their own.
With the recent popularity of Bird Box, it's a perfect time to get more familiar with our visually impaired website users. While Netflix has warned you not to do anything stupid for the #BirdBoxChallenge, it's perfectly safe to sit at your desk and try out tools you may be unfamiliar with (blindfold not included).
I've worked on multiple projects to make sure my clients' websites comply with accessibility requirements, but I've always approached it from a rules-and-regulations perspective. Accessibility is a spectrum, and each company must set the standards it will follow. When implementing, I've typically followed and recommended what we would actually do on the site to conform to those rules. I had never tried to use my computer without being able to see the screen.
VoiceOver
So, since I'm a Mac user, the first thing I did was turn on VoiceOver, which I found in System Preferences.
I checked the Enable VoiceOver checkbox.
I then got the Welcome to VoiceOver prompt, which offers a tutorial on how to use VoiceOver.
Once I had gotten familiar with how to navigate with VoiceOver (this was an iterative process), I decided to take a test drive on the blog you are reading.
A New Language
The first thing I noticed was that you need at least a rough understanding of the HTML structure of the page. In Chrome I started in the outer application and then dug into the web content, using only the keyboard to navigate. There is an art to knowing when to move to a sibling object, dig deeper, or recognize a dead end and go back up to a parent. If I went too deep, I got stuck: I couldn't go left, right, or down, and up was the only option. I would work my way up until I could reach a sibling by going left or right.
Blog Dates
The blog dates were an interesting piece of content to navigate. Each was read as a group, and you had to go down a level to actually hear the date. Then you were trapped and had to go back up to continue navigating the page to Categories. Looking at the HTML, it was a time element with no nested levels, so I'm still not sure why the screen reader saw it that way. VoiceOver should at least handle time elements consistently, so that once you've encountered one you know what to do.
Older Posts
The Older Posts link at the bottom had the opposite problem: the only way to reach it was to go down and then over to a sibling. It's like finding your way through a maze blindfolded. You just keep feeling around until you find an opening to new content.
Navigating the Page
There were also two different ways to navigate: sometimes you needed the VoiceOver keyboard commands and sometimes the browser's. That's a lot to keep in your head when you are new to the tool, but I'm sure it becomes more natural with practice.
I also noticed that when navigating the major sections of the page (header, main, complementary, footer), you have to understand what each of those means. "Complementary" is an adjective without a noun to define what it is; it should probably be "complementary navigation," "complementary content," or something less ambiguous. This comes from how VoiceOver reads the aside HTML element, but you must already know how "complementary" is commonly used to make sense of it.
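One way to give the landmark a clearer name is an `aria-label` on the `aside`. As a minimal sketch (the markup below is invented for illustration, not taken from the actual site), a small audit using Python's built-in `html.parser` can flag `aside` landmarks that have no accessible name:

```python
from html.parser import HTMLParser

class LandmarkChecker(HTMLParser):
    """Counts <aside> landmarks that have no accessible name."""
    def __init__(self):
        super().__init__()
        self.unnamed_asides = 0

    def handle_starttag(self, tag, attrs):
        if tag == "aside":
            names = dict(attrs)
            # VoiceOver announces a bare <aside> as just "complementary";
            # aria-label or aria-labelledby gives it a human-readable name.
            if "aria-label" not in names and "aria-labelledby" not in names:
                self.unnamed_asides += 1

checker = LandmarkChecker()
checker.feed('<aside><ul><li>Categories</li></ul></aside>'
             '<aside aria-label="Related posts">...</aside>')
print(checker.unnamed_asides)  # only the first aside lacks a name, so: 1
```

The first `aside` would be read as the bare adjective "complementary"; the second would be announced with its label, which is all the fix takes.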
Image Alt Text
Alt text or title text on images is low-hanging fruit, but it's also difficult because it's an ongoing issue. So of course I browsed the site with VoiceOver just to see whether the featured article images were descriptive. I found one that was read as just "115," which gave no indication of what was in the image (you may not find it yourself, as hopefully it has since been fixed). For the most part, defaulting to the filename was descriptive enough, and VoiceOver knew that a dash in the filename indicated a separate word.
It's difficult because every author has to know that including text with an image is important. The more authors you have, the more error-prone your content entry becomes.
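This kind of problem is easy to catch automatically. As a rough sketch (the sample markup and the "purely numeric" heuristic are mine, not from the site's build), a scanner using Python's stdlib `html.parser` can flag images whose alt text is missing or is just a number like the "115" above:

```python
from html.parser import HTMLParser

class AltTextAudit(HTMLParser):
    """Collects the src of <img> tags whose alt text is non-descriptive."""
    def __init__(self):
        super().__init__()
        self.problems = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        a = dict(attrs)
        alt = a.get("alt")
        # alt="" is a legitimate way to mark a decorative image; what hurts
        # screen-reader users is a missing alt or a bare number like "115".
        if alt is None or alt.strip().isdigit():
            self.problems.append(a.get("src", "<no src>"))

audit = AltTextAudit()
audit.feed('<img src="hero.jpg" alt="115">'
           '<img src="team-photo.jpg" alt="The team at the 2018 offsite">')
print(audit.problems)  # → ['hero.jpg']
```

Running something like this over rendered pages in a CI step would catch a new author's mistake before a screen-reader user ever hears it.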
Desktop Application – Chrome
The original intent of activating VoiceOver was to see just how difficult it would be to develop while visually impaired. So next I opened the Chrome DevTools. It was completely unhelpful. The Elements tab was announced only as a table, and there was no way to navigate it. I think you have to use a combination of Chrome keyboard shortcuts and your mouse for elements to be read.
I’m sure I could get better at using it, but it would be a slow learning process.
There were small things too. The image below shows how it wouldn't read the dropdown options for me. I'm guessing this has something to do with the Chrome extension providing the option, but it's not clear whether the fault lies with Chrome or the extension.
Desktop Application – IntelliJ
I decided to try an IDE and see what kind of support is offered. After trying to navigate a project and the code in IntelliJ, I gave up after about 15 minutes. It was impossible.
Navigating the code with the reader was impossible. There are obviously shortcut keys the application understands that make it usable, but the reader doesn't help. Notice below that the UI announces "1 of 0 columns," telling me it's not accessible. Also notice how the Tip of the Day dialog is on top while the window behind it has focus. I could use the application without ever interacting with the Tip of the Day. On the bright side, it won't interfere with my using the editor, since I can't see that it's covering the code; it might be confusing, though, if I'm trying to click around.
I did read that JetBrains is improving IntelliJ's accessibility support, so this is just a snapshot in time for the version I'm running. They have closed a few issues (https://youtrack.jetbrains.com/issue/IDEA-111425), but if you look at the linked issues, accessibility is far from solved.
Using IntelliJ made me ask, "So which IDE is most usable?" I came across several notable posts:
- Microsoft: Rethinking IDE Accessibility: https://www.microsoft.com/en-us/research/blog/codetalk-rethinking-ide-accessibility/
- The Tools of a Blind Programmer: https://www.parhamdoustdar.com/2016/04/03/tools-of-blind-programmer/
- A Vision of Coding, Without Opening your Eyes: https://medium.freecodecamp.org/looking-back-to-what-started-it-all-731ef5424aec
- Quora question on programming: https://www.quora.com/How-does-a-visually-impaired-computer-programmer-program
So everyone's approach is different, but they have all had to adapt to the technology available to them. In most cases they have to solve their own problems and build a system that works for them.
Other Screen Readers
Obviously VoiceOver isn't the only screen reader available, and it's not the only tool you should test with. There is existing research on the topic: JAWS is the leader, but there are contenders such as NVDA. If you would like to dig a little deeper, WebAIM runs a survey on users' behavior and preferences for screen readers: https://webaim.org/projects/screenreadersurvey7/.
Conclusion
This experiment was eye-opening: I saw how truly inaccessible web pages and applications can be. I had to learn a whole new language for navigating my laptop. I understood the terminology, but I work in technology; even then, some of the terms were ambiguous and gave me no indication of where things would be in the layout.
When titles or descriptions are missing from visual elements, it reminds me of how machine learning (ML) models interpret images and how adversarial examples create false positives. You see an image of a kitten, but someone removed the information that tells the ML model what it is, and the model identifies it as a gorilla. A human can clearly see it's a kitten, but the features the ML model expected were missing. The same can be said when your website doesn't have an accessible version of its content: computers just keep on computing, but for users it leads to frustration and misinformation.
I know vision is only one accessibility concern for a website or application, but I encourage you to experiment with your own. As my colleague said in "Plan Ahead! Accessibility, Analytics, and SEO," there are cost savings associated with thinking about accessibility from the start. When you see what your users see and hear, you will get a better understanding of how they feel.
And if you find yourself in a post-apocalyptic thriller forced to be blindfolded, you will be able to navigate your way through Google Maps as you are traveling down the river. #preppers #birdboxchallenge #zombieapocalypse
Robb, thanks for being part of the experiment and passing on the results. As you mentioned, it's always good to keep in mind, and there's no better way than to experience it yourself. Years ago I worked with a developer who happened to be completely blind. She had a much more sophisticated and dedicated screen reader, but still would express frustration at times with doing development for visual interfaces. Given the increased complexity of visual UX, it seems like it would be a never-ending challenge.