AI is the UX (is it, though?)

Have you heard this phrase bandied about? I have. And I have thoughts.

Chris Noessel
UX Collective

--

What does it mean? At first hearing, “AI is the UX” is a bit of poetry. AI is a big, vague, and shifting subject. UX is a big, vague, and shifting subject. So saying that one is the other feels weighty, like an important thing that we probably ought to understand. But seriously, what does it mean?

Bringing it down to more concrete terms about what it means in practice requires changing only one word.

Read: AI DOES the UX

And in doing so, the argument goes, AI will help businesses realize three major benefits.

  1. Software will no longer be prêt-à-porter, but rather perfectly tailored in real-time to the individual and the task via the AI.
  2. Users will not need to poll multiple systems with competing experiences to cobble together answers to business questions. They can just engage in dialog with AI. (Though no one has said so explicitly, it’s worth noting that with this setup the AI will gain mindshare as the partner and reduce the other apps to APIs.)
  3. Users will be able to conduct new queries and even functions of the business instantly, with no delay for the design and development of software to enable it.

It’s possible I’m missing something. But for now, let’s discuss each.

1. AI will create perfectly-tailored UX in real-time?

The vision is that, once a business is described semantically and a user’s needs (goals, authorizations, and constraints) in that context are clear, the AI can provide the things (content, system state, recommendations, and controls) that the user needs and asks for, in real-time. It is designed and developed (as such) on-the-fly by the AI. And when the AI gets things wrong, the user can tweak it per her preferences. (Antti Oulasvirta of Aalto University has been working in this space for a while. See his CV and my sketchnote of his talk in 2016.)
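
To make that vision concrete, here is a minimal sketch of the shapes such a system would juggle. Everything here is a hypothetical illustration of the paragraph above, not any real API.

```typescript
// Hypothetical sketch only: the inputs and outputs implied by the AIUX vision.

// The business, described semantically.
interface BusinessModel {
  entities: Record<string, unknown>; // e.g., orders, customers, inventory
  actions: string[];                 // e.g., "approve refund", "reorder stock"
}

// The user's needs in that context.
interface UserContext {
  goals: string[];          // what they are trying to achieve
  authorizations: string[]; // what they may see and do
  constraints: string[];    // device, locale, accessibility needs
}

// The things the user needs and asks for, assembled on the fly.
interface GeneratedUX {
  content: unknown[];        // the data in question
  systemState: unknown;      // where things stand right now
  recommendations: string[]; // suggested next actions
  controls: string[];        // the affordances to act
}

// The whole "AI does the UX" claim, reduced to one signature.
declare function generateUX(
  business: BusinessModel,
  user: UserContext,
  request: string
): GeneratedUX;
```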

No longer, the idea goes, will users have to settle for pretty-close, or something that was designed for someone kind of like them. Personas were always a satisficing solution. *Pfft* Designing merely for aggregates. Indeed. (Monocle pop!)

That’s a bit of UX heresy. Is it useful heresy? Certainly, personas are a beleaguered concept. I can’t count the number of times in my 20+ year design career that I’ve come across someone saying something like, “[This thing that isn’t personas but I’m calling personas] is really terrible. Introducing [this alternate thing that’s really personas done well, but I’m calling it another thing]!” (But I did tweet it.)

The only genuinely novel and promising approaches I’ve seen are…

But I digress, and am happy to take personas back to first principles and see if AIUX is a genuine usurper.

So. Why do we even have personas?

Why do we build and design for these aggregates instead of, you know, people? One reason is scalability. It takes time and money to design, and there aren’t that many good designers in the world. (An old boss of mine liked to say there were only 200 really good interaction designers in the world. Even if he was off by an order of magnitude, that’s still a paucity, considering technology is coating everything these days.) So, have one designer design for a broad segment of users, and you can address that problem of scarcity. But once built, AI promises near-infinite scalability and the aggregate knowledge of all designers, so we can dismiss this justification in the long view.

The bigger reason we design for personas is that one size of software does not fit all users. Even when their jobs are the same, one user may be brand new and need a lot of explanations and hand-holding, and another may be a veteran interested only in efficiency and control. These differences become more pronounced as roles, organizations, domains, and even cultures diverge.

There are secondary benefits to personas. Personas fit the way people think. They get us into an intentional stance, which is the right one for design practice. But the benefits of any particular strategy are not a de facto argument against change. There might be more benefits with another strategy. So let’s keep going.

Do infinite sizes fit all?

Does this mean that every difference between users warrants a change in the UX? No. A hazel-eyed user may not need any special accommodations compared to a brown-eyed user. So there is some level of abstraction at which a change in UX makes sense, and some level of detail for which change would not make sense. What’s the cutoff? I’ve long been persuaded that the right place is at the level of user goals.

For example, one of the newbie’s goals is to learn the domain, and that should be accommodated. One of the pro’s goals is to maximize efficiency, and handholding explanations would only get in their way. Optimizing systems for users to achieve their goals makes for satisfying experiences and valuable customer loyalty, so it has been part of best practices for a long time. (It does not address and may exacerbate tragedy-of-the-commons scenarios, but that is a topic for another time.)

So we don’t need infinite sizes of software when goal sets are relatively few and we design for them.
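
As a sketch of what designing at the level of goals, rather than individuals, might look like in code, here are the newbie and pro from above reduced to hypothetical goal profiles. The names are illustrative only.

```typescript
// Hypothetical sketch: adapt the UX per goal set, not per individual.
// A handful of goal profiles covers the population.
type GoalProfile = "learning" | "efficiency";

interface UXConfig {
  showExplanations: boolean; // hand-holding for the newcomer
  enableShortcuts: boolean;  // speed and control for the veteran
}

// A few sizes, not infinite sizes: one configuration per goal profile.
const configFor: Record<GoalProfile, UXConfig> = {
  learning:   { showExplanations: true,  enableShortcuts: false },
  efficiency: { showExplanations: false, enableShortcuts: true  },
};
```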

Is there harm in individual customization?

So OK, there isn’t a great reason to adjust software experiences for the user’s eye color. But is there any harm in customizing past goals, to individual preferences? Probably not for the individual user. And in fact, preference settings have been a part of software for a long time. Those provide controlled customization. Why then can’t we move the extra step to total customization? The answer requires that we back the camera up just a bit and look at the communities in which this software plays a role. Individuals may be the proximate users, but they are part of teams, and organizations, and professions.

Norms make meaning

In systems like money and language, the value comes from the fact that you and I both agree that the tokens have meaning. This is a dollar. We both agree that it has value. If we don’t agree, there is no market. The word “cat” means that furry thing here, and not the small pile of popcorn kernels over there. If we don’t agree, communication collapses.

Having software be mostly the same from user to user is one of those norms. It allows the expert to help the newbie. It allows for group training. It allows someone to sit down at an interface (whose prior user has just won the lottery, shouted “screw you, suckers!” and bolted off to Bora Bora) and take over the abandoned work. It makes it easier for people to share screencaps, get meaningful IT help, and discuss techniques. Software norms grease the skids for communities of practice that use a given piece of software.

So, at what level should a given UX stabilize across a population?

All of this is to say that while AIUX enables customization down to the individual, it may not be the right level of standardization. If I had to name a level where it makes sense, I’d say it is strongly at the level of the organization. That is, the accountant discussing sales backlog should be able to easily peer over the shoulder of the supply chain manager discussing the same thing and both of them immediately know where to look and what things mean.

Less strongly, I’d hope that AIUX stabilizes across communities of practice, to give employees more career mobility and to enable practices to advance more quickly.

UI is AI

I also want to take a moment to dispel the notion that UI is some necessary evil, there only because we haven’t had the right AIUX yet. UX/Interaction designers work to understand their users, consider lots of alternate options, and notably, reject lots and lots of bad ideas to get to a design that balances the wicked tradeoffs in ways that optimize for a desired set of effects. This is an invisible and often underrated function of design. The UI acts as a tool-for-doing but also as an attention focus, a method of understanding, an augmentation to memory and perception, and a place for collaboration.

It’s in this sense that UI is artificial intelligence. If it’s well designed, it keeps users focused, draws their attention to important changes, doesn’t bother them with pointless information, helps them take the next best action, and when confident, even takes it for them.

It’s artificial because they didn’t have to build the expertise and design or even think about the thing themselves. While a far cry from the agency-and-goal capabilities of semantically-aware AI, we should keep this in mind as part of the value that a human-designed UI provides.

Of course when we commonly say “AI,” we don’t mean things like UI or architecture, but it is useful to understand that each is a kind of embodied intelligence that we stand to lose if we don’t understand and defend its value.

Can an AI get there? With whose help?

So UI per se is valuable. Can AI produce this value? At a low level, I believe it can. It can likely do the matching of outputs and inputs to tasks. But it is at the higher level of working inside of complex (and ofttimes implicit) pattern languages while navigating wicked tradeoffs that I don’t yet expect narrow AI to handle well. Humans have to be in the loop for those issues. Which raises a next question: Who is the right sort of human to put in that loop?

We could partner the AI with a design steward, but that creates a bottleneck. So maybe a design steward would handle the really tough problems and users would handle simpler problems themselves. Which raises its own question.

Do users want to be designers?

Developers and designers reading this might be tempted to answer this question emphatically: Yes! They would be happy to design their own software experiences. But I don’t believe this is true across the broad population. A small percentage of users will enjoy this task and do it well. They should probably be designers or developers. A slightly larger percentage will believe it’s a thing they can do, but, wearing Dunning-Kruger blinders, will produce terrible UX that they don’t know enough to realize should be improved. The bulk would find it a distraction from their actual work.

Plus, offloading that task to an entire population of practitioners would slow down their productivity. We can (and should) enable sharing markets where users who do put in a bit of time to create effective modules for themselves can spread that goodness out to their org and maybe the larger community of practice.

Design takes skill, study, and years of practice to master. So where the AI gets it wrong, we can’t count on users to do it themselves and do it universally well.

Developers can imagine the horror if I were to suggest that what people really need to do is code their own software. They could program it the way they want it! Of course, most businesses would grind to a halt under such a directive.

So let’s bypass this as a gate, and presume that an AI could be trained to get close to expert-human-designed UI such that the scalability tradeoff is worth it, and that designers or product managers handle the remainder of wicked-problem decisions.

Continually-changing UI is not usable

A last note on this topic. A constantly-changing UX is a bad UX. As content moves around, we shift a burden from the software (providing affordances) to the users (remembering how to do things). The more things change on screen, the more information users have to register. It thwarts users’ ability to rely on spatial memory to manage their workflows, and that’s a very powerful thing to toss out with the bathwater.

Compare and contrast this image of Iron Man’s JARVIS from the original Iron Man movie. It looks exciting until you realize how much the system is expecting him to track and make sense of.

I’ve actually used this image to argue that it’s only good as a distracting, placebo interface.

Continually-changing UI is not, per se, a virtue, and we should take great care where we choose to sit between that and static UI.

(I know. Everything above was just 1 of 3? But that was the big one. The next two are much shorter…)

2. One-stop interface?

From a user-centered point of view, a one-stop interface to multiple services is quite compelling, and I can see three reasons why.

  1. People are having to poll many different data streams (and their disparate interfaces) to get their work done. The polling is a pain, and it risks missing information in systems the user is not currently in. Combining information from multiple systems promises a more holistic view. That’s great. The challenge is that each specialized system is good at its specialization, and for AIUX to compete against them for fitness-to-task requires replicating (at least) or besting their experiences. This is not a small task and would be a tough thing to declare beforehand. More likely, a given UI will become the main place for information and action, with users jumping to other systems occasionally for specialist tasks.
  2. A one-stop interface gives the user instant access to the total information of their organization. This gives any user the opportunity to get context for understanding problems and working through scenarios of solutions. It speeds up information gathering and context awareness for decision making. A valuable line of inquiry would be to ask how executives have used information-poor environments to their benefit (salary disparities or other inequities) and which of those benefits this information access would threaten; but I don’t feel confident I could create a complete list myself.
  3. Access to a whole digital-twin model of an org could inform a sense of purpose in a user’s work. Dan Pink names purpose as one of three pillars of work satisfaction (the other two are mastery and autonomy). That promises the kind of long-term user loyalty that client organizations and IBM would love. I can even see an argument that the interaction design should frame the EBA as reinforcing a user’s sense of purpose.

3. Enables on-demand business knowledge?

One of the main benefits I hear touted about AIUX is that it allows users to ask anything of the model. Each new category of question doesn’t need to be coded fresh. I want to poke at this as a notion of user value vs. developer value.

Are there questions that businesses want answered but are held back from asking? Businesses aren’t dumb. They largely know what they need to know and have existing (if inefficient) ways to get that information now, or they wouldn’t be viable businesses. So for the day-to-day running of a business, I can’t imagine that new questions will be as valuable as having fast, contextualizing answers to known questions.

What AIUX will help with is addressing unknown unknowns, and dealing with sudden shifts in the business landscape. It makes business more nimble, as the system frees users to pose new questions and get answers when such things are needed.

Now certainly there is a developer value to the digital twin of a business. It enables “instant sprints,” as the system takes questions and answers them directly. There is no design and development backlog. No technical debt! Though cost savings are not as sexy as new offerings, I expect this will be a major benefit to organizations adopting AIUX.

There is one last benefit I can see emerging to this on-demand business knowledge.

On-demand anything is built on the notion that the system waits on inputs, on “demands.” But if a user already knows to ask a question, and that question is answerable by the system, why wait? In other writing (and another context), I described this as the “stoic guru” model vs. the “active academy” model. The stoic guru knows all but waits to be asked. (I ridiculed this as “lifeguarding by chatbot” once.) Compare that to the active academy, which has foundational questions in which it is interested, and which continually monitors available data and tests scenarios to see if new, better answers can be given.

The digital twin enables an active academy model. Agents can be scripted to help the business monitor its place in the world for best practices, management tactics, compliance, and strategy. What a model that would be! This is several years down the pike, but it is perhaps the biggest promise I see in the technology.
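
A minimal sketch of the difference, assuming a hypothetical monitoring agent over the digital twin. The names are illustrative, not a real system.

```typescript
// Hypothetical sketch: stoic guru vs. active academy.

interface DigitalTwin {
  query(question: string): string;      // derive the current best answer
  onNewData(handler: () => void): void; // fires when the twin updates
}

// Stoic guru: knows all, but waits to be asked.
function stoicGuru(twin: DigitalTwin, question: string): string {
  return twin.query(question);
}

// Active academy: holds foundational questions and continually re-tests
// them as data arrives, surfacing new, better answers unprompted.
function activeAcademy(
  twin: DigitalTwin,
  questions: string[],
  notify: (question: string, answer: string) => void
): void {
  const lastAnswers = new Map<string, string>();
  twin.onNewData(() => {
    for (const q of questions) {
      const answer = twin.query(q);
      if (lastAnswers.get(q) !== answer) {
        lastAnswers.set(q, answer); // the best answer changed...
        notify(q, answer);          // ...so proactively surface it
      }
    }
  });
}
```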

So what are you saying, here?

Mostly this short essay was to work through the meaning of this buzzword concept, and in so doing, to suggest a change in the ways that we talk about it, and the benefits we tout.

I have doubts that idiosyncratic, on-the-fly experiences are a good thing. Even if software makes them possible, they may be interesting mostly in the establishment of interfaces to a particular organization (and in dealing with sea-changes to the business context) but not in the day-to-day experience. “AI is the UX,” then, is not the best phrase. I don’t have a new one to offer, because I want to make sure I understand the thing in this a priori way, and hear what others think, before spending calories on a better descriptor.

I do see major opportunities in the creation of a business’s digital twin: to save upfront development costs, and to make organizations more nimble in seeing and reacting to change. I would expect it to grant individual users a fuller sense of purpose and to help them make better-contextualized, better-informed decisions. And lastly, to enable the creation of tools that move business management into an agentive age. That’s very exciting, so I hope I understand it right.

--

Chris is a 20+ year UX veteran, author, and public speaker. He delights in finding truffles in oubliettes. Tip me in coffee at ko-fi.com/chris_noessel.