The disability and accessibility data desert is growing every day
If you want to design for everyone, you need data that includes everyone. That is how you avoid both physical and ethical risks.

Reading about the Paralympian Aramitsu Kitazono being hit by a Toyota driverless vehicle at the Tokyo Paralympics, and about the lack of intervention by staff who seemed to be under the impression that AIs are infallible, got me thinking about data and UX in general.
I am not the first and will certainly not be the last to point out that there is a data desert when it comes to people who are not standard issue.
I am purposefully avoiding the term “disabled”, because no-one is disabled until a designer forgets that not everyone’s the same. Remember that inaccessibility is designed-in before it has to be designed out.
But I worry that this accident will be the first of many, and that there will be deaths, before people start to realise that this is all down to human and not technical error.
Any AI is only as good as the data used to train it. If the person who specified the training data did not ensure that the specification included the preferences and behaviours of people who do not have average hearing or vision, who move while seated, or who do not process their surroundings in a neurotypical way, then we are training systems to injure or kill through design and technical negligence.
I recently read an article entitled “Microsoft’s AI for Accessibility Is Addressing The Data Desert”.
And TBH I was hugely disappointed as the content had little to do with the title. The Microsoft Accessibility AI is a wonderful piece of technology but the data desert has little or nothing to do with it. The desert is not in the development of assistive technologies but in the training datasets and UX research methodologies that are part of every product or service roadmap. Building an assistive technology AI framework isn’t even scratching the surface of the problem, even if it does give rise to some wonderful possibilities in assistive tech.
To be inclusive, training data harvesting has to be designed in a way that considers the breadth of humanity, if it is going to work for the breadth of humanity. If Toyota’s team did not train their AI to take into consideration the behaviours of people who might not be able to see or hear an oncoming vehicle, then that AI cannot be expected to predict the behaviour of people with vision or hearing impairments, and it will make bad decisions. In this case it just ploughed into the pedestrian.
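To make that concrete, here is a minimal sketch in Python of what a machine-checkable inclusion specification for a training set could look like. Every behaviour category, field name and threshold below is hypothetical; the point is that coverage of non-average pedestrians becomes an explicit, auditable requirement before any model is trained.

```python
from collections import Counter

# Minimum share of pedestrian training samples per behaviour category.
# The categories and thresholds below are purely illustrative.
REQUIRED_COVERAGE = {
    "wheelchair_user": 0.05,
    "white_cane_user": 0.05,
    "guide_dog_user": 0.02,
}

def audit_coverage(samples):
    """Return the behaviour categories whose share of the training
    data falls below the specified minimum."""
    counts = Counter(s["behaviour_tag"] for s in samples)
    total = len(samples)
    return [tag for tag, minimum in REQUIRED_COVERAGE.items()
            if counts.get(tag, 0) / total < minimum]

# Toy dataset: 100 labelled samples, none showing cane or guide dog users.
samples = ([{"behaviour_tag": "typical_gait"}] * 90
           + [{"behaviour_tag": "wheelchair_user"}] * 10)
print(audit_coverage(samples))  # -> ['white_cane_user', 'guide_dog_user']
```

A check like this cannot prove a dataset is inclusive, but it does make the gaps visible to the team before they are baked into a model.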
I’m not saying that this is the cause of what happened, and it would be good to hear from Toyota about how their AI training data is generated, but when you read about an incident like this, and have seen accessibility left out of as many core technology projects over the years as I have, you start from the presumption that the due diligence, competence and unconscious bias of the design team have to be brought into question.
Accessibility-related data isn’t easy to come by. I have touched on this in previous articles, and I will say again that qualitative data and anecdotes are useful for pinpointing areas of exploration, but they will never give you insight into how well a design or technical approach is performing.
As AIs become part of the equation, the lack of data that considers people who fall outside of what is average means the inclusion gap will grow ever wider, and this will not just be about exclusion but also about personal safety.
Don’t be fooled, either, by companies with AI-based products that claim to be able to predict or repair accessibility: the ones I am currently looking at are all benchmarked on standards and guidelines rather than human needs. Anything UX- and AI-related has to incorporate statistically significant human feedback on whether something worked or not. Without data on outcomes, a system cannot predict inclusive UX outcomes.
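As an illustration of what outcome data looks like, here is a small Python sketch that summarises task-success rates per user segment with rough confidence intervals. The segments and numbers are invented; the point is that the benchmark is whether something actually worked for real people, not whether it conforms to a guideline.

```python
import math

def success_rate_with_ci(successes, trials, z=1.96):
    """Observed task-success rate with a ~95% normal-approximation CI."""
    p = successes / trials
    half_width = z * math.sqrt(p * (1 - p) / trials)
    return p, (p - half_width, p + half_width)

# Invented outcome data: (tasks that worked, tasks attempted) per segment.
outcomes = {
    "no_reported_barrier": (182, 200),
    "screen_reader_user": (61, 100),
    "switch_access_user": (15, 40),
}

for segment, (ok, n) in outcomes.items():
    rate, (low, high) = success_rate_with_ci(ok, n)
    print(f"{segment}: {rate:.0%} success (95% CI {low:.0%}-{high:.0%}, n={n})")
```

A guideline checker could pass all three segments while numbers like these show two of them failing in practice; only the outcome data tells you that.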
This all pivots on the approaches taken to design and research. If you are considering using an AI as the basis of a product, service, or system that is human facing, then make sure the training data has a feedback loop from human users that helps the machine predict what good looks like, and for whom; that “whom” should include the widest identifiable mix of people who experience barriers.
For UX, think about the obstacles people face and the preferences of the people who experience them. Build tracking into your design research framework and create a quantitative feedback loop of user data that will enable you to continuously improve your product.
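Here is a sketch of what one record in such a feedback loop might contain, again with hypothetical field names: enough context to segment outcomes by the barriers people actually experience.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class UXOutcomeEvent:
    """One record in the quantitative feedback loop: what was attempted,
    whether it actually worked, and the barriers the user reported."""
    task: str              # e.g. "checkout"
    succeeded: bool
    barriers: list         # self-reported, e.g. ["low_vision"]
    assistive_tech: list   # e.g. ["screen_reader"]
    timestamp: str = ""

    def to_json(self):
        if not self.timestamp:
            self.timestamp = datetime.now(timezone.utc).isoformat()
        return json.dumps(asdict(self))

# In practice an event like this would flow into your analytics pipeline.
event = UXOutcomeEvent(task="checkout", succeeded=False,
                       barriers=["low_vision"], assistive_tech=["screen_reader"])
print(event.to_json())
```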
The reality is that sooner or later someone will be killed because of a lack of consideration by the designers or engineers who build AI training sets, and then it will be interesting to see how the industry responds to what originated as designed-in human error rooted in ableist unconscious bias.
#a11y #accessibility #AI #ux #design #technology
Image courtesy of https://www.flickr.com/people/cedwardbrice/