We need to talk about Accessibility on Chatbots

What happens when a blind person wants to use your chatbot?

Caio Calado
UX Collective

--

Bot Handshake — Illustration by Joe Roberto for OPSIN LAB

This idea started after I did some research on UX for autonomous cars, or self-driving cars. I interviewed four people, one of them blind. I was really surprised to learn that she can fully take care of herself and get around using her phone and her guide dog.

She uses her phone and her dog as interfaces to do things that (unfortunately) she is not able to do on her own.

After the interviews, I started my UX research and then came another surprise: the UX work on self-driving cars that I found revolves around basically two aspects:

  • Visual design: “How can we let people know what the car sees?” There are tons of (interesting) visual design concepts that let people understand and see what the car sees while it drives itself.
  • Affordances: “How can we make people interact with the car?” I have seen nice buttons, panels, and cues that help people interact with the cars.

With those two main aspects in mind, I started questioning myself:

What about blind people? How will they use self-driving cars?

Of course we have Siri and Google Assistant built into some of these cars, but very few people are talking about conversational interfaces for cars at this point. Instead, people are talking about nice UX concepts for them, even though blind people are one of the main inspirations for self-driving cars.

Systems are rarely designed with the needs of people with disabilities in mind. In fact, it’s fairly common that users who are blind or physically disabled are unable to use some applications. We all know some companies have dedicated entire teams to building the right accessibility standards into their products or services, but nobody is really talking about accessibility for chatbots; I couldn’t find anything on Google.

If a chatbot is software that you can talk to and interact with through messages (text or voice), then I was curious about the same questions I had about self-driving cars:

How would a blind person use a chatbot? How would he or she interact with it?

Since I work as a UX and chatbot designer, I wanted to know if those questions would apply to chatbots as well.

I decided to run an experiment last week, and it turned into a big surprise.

The experiment

During my interview with the blind user, I started to learn more about Apple’s approach to helping blind people use its products. Apple’s devices have a feature called VoiceOver: basically a voice, much like Siri’s, that reads what is on screen and helps you interact with anything that is accessible on your iPhone, iPad, or Mac.

To use it, go to Settings > General > Accessibility > VoiceOver, where you can turn the feature on or off. Note: there are also lots of independent accessibility programs and apps.

Then, I decided to use Allo’s Google Assistant, random bots on Telegram and Slack, Poncho The Weather Cat, and the TechCrunch Chatbot on Facebook Messenger to figure out how they would respond to Apple’s VoiceOver.

Here is what I found:

  • Issue: list items are hard to use and interact with. VoiceOver reads some confusing text.
  • Issue: VoiceOver can’t recognize buttons, and they aren’t really clickable.
  • Issue: VoiceOver wasn’t able to read some of the text messages.
  • Issue: VoiceOver was not able to recognize UI components (quick replies and the persistent menu) or images.
  • Issue: VoiceOver is not able to recognize UI components, and buttons aren’t clickable.

I wasn’t really expecting that. It was a big surprise to find out that Apple’s VoiceOver wasn’t able to recognize chatbot UI components across different platforms.

For example:

  • Quick replies were seen as just a “text field”. VoiceOver was not able to understand either the value of the text or what kind of component a quick reply is (a button?).
  • Generic templates were simply ignored. VoiceOver just says “seen at xx:yy”.
  • Images were just seen as “images”. For the user, it is impossible to know what the image really contains.
  • Text messages were ignored on Telegram. VoiceOver wasn’t able to read anything.
  • You can’t use buttons. VoiceOver wasn’t able to understand buttons in some apps, and for some reason it blocked the interaction.

It didn’t take that long to realize that chatbots aren’t (yet) accessible on every platform — Messenger, Slack, Telegram and Allo all present similar issues.
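For context, VoiceOver can only announce what an app explicitly exposes through the operating system’s accessibility APIs. The sketch below is my own illustration using UIKit, not code from Messenger, Telegram, Slack, or Allo; it shows roughly what a messaging app would need to set on a custom quick-reply chip or image for VoiceOver to announce it properly. When chatbot components are rendered without attributes like these, the screen reader has nothing to say.

```swift
import UIKit

// A minimal sketch (assumed, not taken from any of the apps tested):
// how a custom view tells VoiceOver what it is and what it does.
let quickReplyChip = UIView()
quickReplyChip.isAccessibilityElement = true
quickReplyChip.accessibilityLabel = "Today's weather"            // what VoiceOver reads aloud
quickReplyChip.accessibilityHint = "Sends this answer to the bot"
quickReplyChip.accessibilityTraits = .button                     // announced and activated as a button

// Images need a description too, just like ALT text on the web.
let forecastImage = UIImageView()
forecastImage.isAccessibilityElement = true
forecastImage.accessibilityLabel = "Weather card showing rain for tomorrow"
```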

Alright. How do we fix that?

Maybe it would be great if the platforms used some sort of tag, or anything else that enables text-to-speech (TTS) for screen readers, or gave hints about the actions available (something that Google’s Allo and Facebook itself, though not Messenger, already do pretty well). They could even let developers attach specific tags to components, images, and GIFs, just like the ALT attribute for images in HTML.
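To make that more concrete, here is a rough sketch of what a platform-level hint could look like. The payload below follows the shape of Messenger’s quick replies, but the accessibility_label field is purely hypothetical: it does not exist in the current Send API and only illustrates the kind of fallback text a screen reader could announce, the way ALT does for images.

```swift
// Sketch of a Messenger-style quick-reply message.
// "accessibility_label" is a hypothetical field, NOT part of the current Send API;
// it only illustrates the kind of hint a TTS engine or screen reader could use.
let message: [String: Any] = [
    "text": "What do you want to know?",
    "quick_replies": [
        [
            "content_type": "text",
            "title": "Today's weather",
            "payload": "WEATHER_TODAY",
            "accessibility_label": "Button: ask for today's weather"   // hypothetical
        ],
        [
            "content_type": "text",
            "title": "Top stories",
            "payload": "TOP_STORIES",
            "accessibility_label": "Button: read the top stories"      // hypothetical
        ]
    ]
]
```

The exact name and shape would be up to each platform; the point is that quick replies, templates, and images need some textual description that screen readers can fall back on.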

But above all, we should listen to and understand disabled people in order to build something that makes their experience better and more accessible. I believe we can start from there.

Whose fault is this?

After the experiment, I started thinking about who is responsible for this:

  • Developers? Since developers can use ALT tags for images on websites, which provide alternative information about an image, should they use a specific tag or something similar here?
  • Designers? Since every interface is a conversation and “chatbots” is the next big buzzword in design, should the designers solve this?
  • VoiceOver and similar services/apps/programs? Since they are the ones saying out loud what is on the screen, should they say something else, or say nothing at all until they become smarter?
  • Platforms? Since we build our chatbots based on existing platform documentation, should the platforms create a specific tag or parameter that helps TTS (text-to-speech) programs understand the UI components?

Honestly, I am not here to point any fingers, but I am here to start a conversation about accessibility on chatbots.

We have the opportunity to create an interface that can help disabled people (blind, deaf, everyone) in every medium (computers, phones, etc.). We as designers, developers, business people, and everybody else involved in the chatbot ecosystem should take this into consideration. We should not focus only on creating something new, but on creating something that is new and accessible to everyone.

As a chatbot designer, I have noticed that tons of people who are able to see and read don’t know how to use bots or even what they are, simply because the new UI components come across as just text. I usually use text (“Tap on this”), emojis (“Slide these cards ⬅️️➡️️”), and everything else I can to introduce bots and their UI components to the people who are going to use them.

If we want chatbots to be used by billions of people around the world, we need to make them accessible for everyone.

Since VoiceOver is not able to see and understand chatbot UI components, it cannot do what a blind person needs from it. As a UX designer, I need to design to solve people’s needs and pains, not just to meet users’ goals.

Disclaimer: since I talked a lot about existing platforms, it’s fair to say that some of them have dedicated teams building accessibility into their products — especially Facebook and Google. I’m sure all the main players in our industry are working on improving the experience for blind users.

As David Marcus said recently:

“It’s always about you! The reason we come to work every morning is to continue building a better and better product for you.”

I believe all of this is going to be solved soon, maybe even by the time you read this post.
I don’t know how, but if anything… I am here to help as well.

Thanks for your time ❤

If you liked what you just read, please recommend it. And please leave a comment with your thoughts. Thanks to Rodrigo Siqueira, Ricardo Blumer Grobel, and Sérgio Passos for your comments and thoughts.

--


Conversational Experience Designer Consultant and Advocate // Community Manager @ Bots Brasil