What are machines learning?
And how might that change how we design products?

“Computers are able to see, hear and learn. Welcome to the future.” — Dave Waters
Machines are learning to do exciting things: they have beaten us at chess, Jeopardy! and Go. Machines can figure out what a cat is all by themselves. But these major breakthroughs are just that — a showcase of what is feasible.
But machines are also learning less-exciting things: they are learning to get a grasp of our language, to serve us relevant information, to create content. This might not get quite as much attention, but I believe that these fundamental abilities will change our world way more profoundly than game-beating algorithms, at least in the near future.
This clearly isn’t the first article about machine learning and design. Many explain nicely what machine learning is. Few mention what machines are actually learning. I’ll try to do just that — and I’ll dare to make some (likely very wrong) predictions about how this might impact the way we design products.
(1) Machines are learning to understand us more naturally.
Machines are starting to understand our vocal chatter well enough to book us into our favourite restaurant. Machines are able to comprehend our unstructured writing well enough to power chatbots (when they do). And machines now recognise what’s in our pictures better than we do (though they sometimes get it terribly wrong).
This has the potential to completely change how we interact with our machines — talking and writing freely to them is not the stuff of sci-fi anymore. Let’s face it: smart speakers are here to stay.
This essentially allows us designers to design a way of interacting that can handle all our human ambiguity. Ambiguity here means that I might say something entirely different from you when we’re both trying to buy a train ticket. Still, a human understands us both. Machines will too, likely very soon.
A thought experiment: with that, do we still need forms? They exist so that we inconsistent humans enter the required information in a machine-understandable format; with a few more years of advancing natural language processing, that might simply no longer be needed. But then — how do users know what information is needed from them?
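To make the thought experiment concrete: instead of a rigid form, free text could fill the machine-readable fields directly. A real system would use a trained NLU model; this toy sketch uses naive pattern matching, and the two slots of a train-ticket request are invented for illustration.

```python
# Toy slot-filling: extract form fields from free-form text.
# A real system would use an NLU model, not a regular expression.
import re

def parse_ticket_request(text):
    """Extract origin/destination slots from a free-form ticket request."""
    m = re.search(r"from (\w+) to (\w+)", text, re.IGNORECASE)
    if not m:
        return None  # fall back to asking the user (or to a form!)
    return {"origin": m.group(1), "destination": m.group(2)}

# Two users phrase the same intent very differently;
# both end up filling the same machine-readable 'form'.
print(parse_ticket_request("I need a ticket from Zurich to Bern"))
print(parse_ticket_request("Can you book me something from Geneva to Basel?"))
```

The interesting design question is the `None` branch: when the machine fails to understand, how do we gracefully tell the user what information is still missing?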
How do we design these new modes of human-computer interaction? A chatbot doesn’t give you that perfect experience with its thoughtful visuals; it just has to work. It has to understand what exactly a user is trying to achieve and react with an appropriate reply, in all conceivable contexts. The user experience largely depends on the algorithm. Design moves away from the visual.
The case is even clearer with smart speakers: the interface involves no visual component.
For me, that raises many questions, such as —
- How do we prototype a (non-visual) experience?
- How do we test it? Might this be the time to bring back the Wizard of Oz?
- The experience is largely dictated by the algorithm and we product people clearly don’t own that. What’s the process behind algorithm development? How do we become part of that? What can we add to the mix?
- AR and VR are possibly the most recent examples of having to figure out a new modality. Can we get a head start by copying their successes and avoiding their failures?
What answers do we expect from our machines? If we can express ourselves naturally and expect machines to understand, then it is fair to assume that we expect those machines to reply in a fairly natural way, too. Google absolutely nailed it with Duplex (just listen to the ‘okay’ and ‘thank you very much, thank you, bye bye’). Alexa likes to laugh randomly, which… wasn’t received quite so well. But what makes algorithmic behaviour appear natural? What are the nuances in a voice interaction that make it feel just right? What factors make a chatbot feel delightful, whilst still serving its basic purpose?
And why is this important?
This comes down to trust. We need to trust that the algorithm truly understands us and what we are trying to achieve (like maybe don’t offer porn to our kids). Oh, and we also need to trust the companies behind them to respect our privacy. We don’t seem to show trust easily — a lack of trust is the main barrier to smart speaker adoption. But there is a huge (and largely academic) body of work that tries to answer exactly that question: what factors make it more likely that users trust a service? (Surprising fact: the most important factor for user trust is that the machine just works 🤯)
In brief: Much of design work in the future won’t be visual, so we should figure out how to approach a world in which the quality of an algorithm dictates the quality of our users’ experiences.
(2) Machines are learning to find the best content.
Matching a user (or session) to a customer segment is a type of pattern matching — an exercise a machine excels at. Throw historical data into the mix (so an algorithm can tell what content has worked for each audience) and let it define what to show to users: voilà, you just created hyper-personalisation (and figured out how Netflix chooses what you should binge-watch next and how Spotify creates playlists just for you).
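The two ingredients above (segment matching plus historical performance) can be sketched in a few lines. The segment names, behavioural features and click-through numbers below are entirely made up for illustration.

```python
# Hypothetical segment-based content targeting: assign a user to the
# nearest behavioural segment, then serve the content variant that has
# historically performed best for that segment.
import math

# Each centroid summarises a segment:
# (avg. session length in minutes, avg. items viewed per session).
SEGMENT_CENTROIDS = {
    "binge_watcher": (95.0, 3.0),
    "browser": (12.0, 24.0),
}

# Historical click-through rate of each content variant, per segment.
CTR_BY_SEGMENT = {
    "binge_watcher": {"series_banner": 0.31, "editorial_list": 0.12},
    "browser": {"series_banner": 0.08, "editorial_list": 0.22},
}

def nearest_segment(user_features):
    """Return the segment whose centroid is closest to the user."""
    return min(
        SEGMENT_CENTROIDS,
        key=lambda s: math.dist(user_features, SEGMENT_CENTROIDS[s]),
    )

def pick_content(user_features):
    """Pick the historically best-performing variant for this user."""
    variants = CTR_BY_SEGMENT[nearest_segment(user_features)]
    return max(variants, key=variants.get)

print(pick_content((110.0, 2.0)))  # a long-session user
print(pick_content((10.0, 30.0)))  # a quick-browsing user
```

Real systems are of course far more sophisticated (collaborative filtering, bandits, deep models), but the shape of the decision is the same: pattern-match the user, then look up what worked before.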
Is that actually happening? Yup: a report from 2017 shows that about a third of companies rely on machine learning approaches or ‘algorithmic targeting’ — and marketing suites like Adobe Target are fairly shouty about their AI capabilities.
So: Let’s assume an algorithm is trained to understand what a user wants to do and is able to pull in the content pieces that worked best for this scenario before. That means we never quite know what mix of content is presented. How do we design for that?
We designers take pride in being able to take a collection of content and shape it into that one composition that works just perfectly. Now, that doesn’t work in this scenario. Neither does it work for many of today’s platforms — I’d assume that no two page impressions on Facebook ever show precisely the same content. And they serve a few hundred billion impressions per month (or so I’ve heard, from no reliable source).
I think we might need to rely on our design systems, big time. We’ll have to move away from designing the content as such, moving towards creating aesthetically-pleasing compositions of content types. We’ll have to find the right categorisation scheme for the content types an algorithm might serve. We’ll have to find components that are optimal for each of those, in each context. We might need to establish a taxonomy so that the presentation can be tweaked based on the context a piece of content appears in. That’s a very different way of designing.
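One way to picture this way of designing: the algorithm hands us a ranked list of content items, and our design system contributes the rules that map each content *type* (in its context) to a component. All type, context and component names below are invented.

```python
# Designing compositions of content *types* rather than content itself:
# a rule table from (content type, slot) to a design-system component.
COMPONENT_RULES = {
    ("video", "hero"): "FullBleedPlayer",
    ("video", "feed"): "ThumbnailCard",
    ("article", "hero"): "FeatureTeaser",
    ("article", "feed"): "CompactTeaser",
}

def compose(algorithm_output, context="feed"):
    """Turn an algorithm's ranked content list into a renderable layout."""
    layout = []
    for position, item in enumerate(algorithm_output):
        # The top-ranked item gets the 'hero' treatment, the rest the feed one.
        slot = "hero" if position == 0 else context
        component = COMPONENT_RULES[(item["type"], slot)]
        layout.append((component, item["id"]))
    return layout

ranked = [{"type": "video", "id": "v42"}, {"type": "article", "id": "a7"}]
print(compose(ranked))
```

The designer’s craft moves into the rule table and the components themselves: whatever mix the algorithm serves, the composition should still hold together.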
So, machines will decide what we get to see. Is that a good thing? Not always; when machines decide, they are overly confident about it. Even if they get it wrong. There’s no “I think you might like it”, but rather “this is definitely it!”. Josh Clark has given a brilliant talk about how this could (and should) be handled in design.
In brief: Algorithms will be able to decide what content to show. Working on design systems and creating compositions and rules for content types (rather than the content itself) will become the norm.
(3) Machines are learning to find the best presentation.
So: machines are learning to understand what we need and serve appropriate content. Why stop here? What would it take for an algorithm to define or change the whole presentation based on what it deems best?

We established before: personalisation is a thing and has been for quite some time. And it’s not just about what information is shown to you — but also how. Netflix, for example, produces a number of artworks for each title. An algorithm interprets why you’re most likely to be interested in a title (based on all you’ve watched before) and selects the appropriate artwork. You’re interested in Good Will Hunting because you’re more the romantic type? That cute kissing couple will do. You’re more into Robin Williams? Have him, then.
There’s also a fascinating case study by Jon Gold who tried to teach typography to algorithms (for the startup The Grid which seems temporarily dead). He essentially followed these steps:
1⃣️ Teach a machine the vocabulary. We analyse designs based on many factors, so machines need to learn how to see these. For typography, Jon chose x-height, contrast and font width, and melded all that into a similarity score. Now the machine is able to compare typefaces systematically.
2⃣️ Teach a machine the design rules. This might sound like blasphemy to a few of you, but large parts of our craft are (just) about following rules. Heck, there are even approaches like programmatic design. Jon started to create a list of rules that the algorithm could read… but then thought that it might be more economical to just have the algorithm look at gazillions of examples and try to unveil the rules itself.
3⃣️ Let algorithms observe our work. Let them learn from the best. 🖖 Jon hasn’t quite specified what this “digitally looking over designers’ shoulders” could entail, though.
4⃣️ Run the algorithm. 🏃
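Step 1⃣️ above can be sketched as a similarity score over a few measurable typeface features. The feature values and face names below are invented, and Jon’s actual metric isn’t public; this just shows the shape of the idea.

```python
# A toy typeface 'vocabulary': each face is a vector of normalised (0..1)
# features, and similarity is derived from the distance between vectors.
import math

# Features: (x-height, stroke contrast, width) -- illustrative numbers.
TYPEFACES = {
    "Helvetica-ish": (0.72, 0.05, 0.55),
    "Garamond-ish": (0.58, 0.60, 0.50),
    "Bodoni-ish": (0.60, 0.90, 0.48),
}

def similarity(a, b):
    """1 / (1 + Euclidean distance): identical faces score 1.0."""
    return 1.0 / (1.0 + math.dist(TYPEFACES[a], TYPEFACES[b]))

def most_similar(name):
    """The face closest to `name` in feature space."""
    others = [t for t in TYPEFACES if t != name]
    return max(others, key=lambda t: similarity(name, t))

# High-contrast serifs cluster together, away from the grotesque.
print(most_similar("Bodoni-ish"))
```

Once faces live in a feature space like this, steps 2⃣️ and 3⃣️ amount to learning which regions of that space our best work tends to combine.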
Unfortunately, his project never saw the light of day — but it offers a handy blueprint for how we could make algorithms learn design decisions. This falls within a bigger narrative of terms like algorithm-driven design / algorithmic design or mutative design. Examples are out there, too: Vox uses an algorithm to score and define the layout of its front page. Firedrop has developed an AI web designer and called him Sacha. Wix is hoping it can do the same, with a system it calls Advanced Design Intelligence; The Grid had tried something similar. Tailor Brands is testing whether AI-driven logo creation should be a thing; they’re also not the only ones.
So… in the section before I talked about how important design systems will be — and now I’m saying that an algorithm might figure out the presentation all by itself. How does that go together? Err, yeah, that’s a tricky one. I don’t have a clear answer here, although I may allow myself to speculate just a little: how about a design system that contains multiple styles — and an algorithm that decides when to use which (following the same principles outlined for personalisation, just extended to styling)? 🤷‍♂️
In brief: First attempts have been made to teach machines how to present things. I am fairly quiet about what this could mean for us in future… because I don’t know. 🙃 Maybe our roles will shift to training our algorithmic overlords? Or maybe we could use algorithms to make us better in our craft — more about that in (5).
(4) Machines are learning to create stuff.
A lot of the internet is fake. Half of its traffic comes from bots. I have yet to figure out how much of its content is automatically generated (help me answer that here) — but it’s a known problem that algorithms spit out some seriously weird kids’ videos on YouTube.
Machines aren’t just creating videos that are stitched together — they also create works of art (should we call it that?):

Machines write articles:
Australian political parties declared donations worth $16.7m in the 2017–18 financial year, according to the latest figures from the Australian Electoral Commission. This amount is lower than usual, with donations averaging $25.2m a year over the past 11 years. The Labor party also declared $33.2m in “other receipts”, which includes money received from investments, but also includes money from party fundraisers where people pay for event tickets in lieu of donations.
(This was published by The Guardian, but written by a system called ReporterMate)
We can use machines to create very convincing (wrong) videos, the deep fakes:
Machines create pictures of people who don’t exist:

Machines come up with irresistible t-shirt designs:

One question bothers me: is there a single use case where automatically-generated content actually benefits the user? It makes sense that companies want to automate the generation of their content — producing it is not cheap. But that shouldn’t come at the cost of a worse experience for our users. How would we guide automatic content creation so it is appropriate for the people who have to consume it? How do we communicate to our users that a piece of content was generated by an algorithm? Should we?
In brief: Machines are learning to create content. To me, it is largely unclear if a user might actually benefit from automatically-generated content.
(5) Machines are learning to support designers.
So far I’ve mostly talked about things that will impact what we design. These things will likely happen anyway, regardless of whether we designers want it to happen. But I’ve also mentioned that machines are learning a trick or two from the designers’ toolbox — and I think we should pay attention.
Machines are learning to generate alternatives.

Design is a process of exploring options and — often slowly and painfully — converging on an optimal approach. Finding alternatives isn’t trivial, especially after pouring all our efforts into an option that turned out to not be so optimal after all.
Can we use machines to help us with this? It’s not unthinkable: Autodesk offers a feature called generative design that generates millions of alternatives for a problem space that a designer specifies. I haven’t come across anything similar in the field of human-computer interaction yet; only a conference paper from 2010 arguing that generative interface design should be entirely feasible.
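In the spirit of generative design, the designer’s job becomes specifying the problem space and its constraints, and the machine enumerates the alternatives. The parameters and the constraint below are invented purely to show the pattern.

```python
# A hypothetical generative-interface-design sketch: the designer defines
# a parameter space plus constraints; the machine enumerates alternatives.
from itertools import product

# The designer's problem space (all values illustrative).
columns = [1, 2, 3]
base_font_px = [14, 16, 18]
density = ["compact", "comfortable"]

def viable(cols, font, dens):
    """Designer-specified constraint pruning the space: e.g. a compact
    layout with large type gets too cramped."""
    return not (dens == "compact" and font == 18)

alternatives = [
    {"columns": c, "font_px": f, "density": d}
    for c, f, d in product(columns, base_font_px, density)
    if viable(c, f, d)
]
print(len(alternatives), "alternatives to explore")
```

Real generative design then scores and evolves these candidates against goals; even this brute-force version shows how quickly a small parameter space yields more options than we would sketch by hand.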
I’ve also outlined the approach for training machines to present content; I’m a bit out of my depth here, but couldn’t that approach be repurposed to generate design alternatives we would never have conceived of in the first place?

Machines are learning to convert scribbles into designs. Have you ever been bored by having to convert paper scribbles into high-fidelity designs (which are then translated into code)? Airbnb’s got you covered — if you follow a design system, that is.
Machines are learning to find the best colour palette for you. Colormind applies deep learning to help you with that.
Machines are learning what font sizes to use. There’s Huula Typesetter.
Or do you need help finding the right stock image?
Machines are probably learning a lot more right now. Adobe Sensei is the umbrella term for anything AI that gets applied to Adobe’s products. They haven’t released notable approaches for interface design yet, but I’d assume that there’s more to come.
I am sure there are more noteworthy approaches out there. So it’s fairly safe to say: if there are things in your workflow that are repetitive and require little to no thinking, they’ll probably be automated in the near future. Luckily.
Will design be automated? The singularity is a big deal; a lot of attention has been devoted to the threat that artificial intelligence might supersede and enslave us (so better start worshipping it now). Less attention is paid to how we can use machine intelligence to augment and extend our own, although that is the far more likely scenario.
What will the augmented designer look like? I believe that the things machines are learning have tremendous potential to streamline our craft and free up our time for the important stuff — but it seems we’re still taking our first steps in this area. Maurice Conti has given a brilliant talk about this.
In brief: Machines are learning many things that could make our work faster and better. And that’s a good thing. We should focus more work on what life as an augmented designer looks like.
(6) Machines are learning to represent our users and their needs.
Machines are great at finding clusters of similar users in data. We UX’ers (and user researchers) have a way of doing the same: we just prefer to call them personas. There are many ways to define what a persona is or should be, but I think we can all agree that it is some form of representation of a particular group in your user base. Or, in other words, it is a cluster of users that share certain characteristics. A persona is a pattern in your data. 👋, machine learning.
This hints at the possibility of automatic personas. There are some (segmentation) approaches that attempt to derive a persona structure from data you already collect, though I have not yet seen a fully working example. I’d love to see one — it would mean that personas were always up to date, certainly more truthful (hand-made personas are rarely grounded in data and often biased) and, with that, hopefully more useful.
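A toy version of such automatic personas: cluster users on a couple of behavioural features with a bare-bones k-means, then summarise each cluster as a persona. The data and feature names are invented; a real pipeline would use far richer features and a proper library.

```python
# Bare-bones k-means over two invented behavioural features, as a sketch
# of deriving personas directly from usage data.
import math
from statistics import mean

# (sessions per week, avg. session minutes) per user -- illustrative data.
users = [(1, 60), (2, 55), (1, 70),    # occasional, long sessions
         (14, 5), (12, 8), (15, 4)]    # frequent, short sessions

def kmeans(points, k=2, iters=10):
    centroids = list(points[:k])  # naive init: first k points
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[nearest].append(p)
        # Recompute each centroid as the mean of its cluster.
        new_centroids = []
        for i, cluster in enumerate(clusters):
            if cluster:
                new_centroids.append(tuple(mean(d) for d in zip(*cluster)))
            else:
                new_centroids.append(centroids[i])
        centroids = new_centroids
    return centroids, clusters

centroids, clusters = kmeans(users)
for centroid, members in zip(centroids, clusters):
    print(f"persona: ~{centroid[0]:.0f} sessions/week, "
          f"~{centroid[1]:.0f} min per session ({len(members)} users)")
```

Each centroid is the skeleton of a persona; the (human) work of naming it and giving it a story would still be ours — but the numbers underneath would finally be grounded in data.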
In brief: it is conceivable that personas could be derived directly from data — and I’d love to see it!
To sum this all up: machines are learning to understand us in novel ways, represent our individual needs, find the most appropriate information (or even create it) and serve it in an optimal presentation. Oh, and they might also enable us to become better and faster in our craft. So…
What do we have to learn?
Can we learn to understand precisely how the machines learn?
Advancements in machine learning are part exciting, part mysterious — we input a lot of data, define a model, tweak countless parameters and then it sometimes develops incredible abilities (it often doesn’t). But there’s a catch: we have pretty much no idea what it is actually doing. And there are many things that each algorithm could be doing: neural networks, a commonly-used class of algorithms, can in theory approximate any continuous mathematical function (apologies for the random, simultaneously boring and, I think, mind-blowing fact 🤐).
So how do we go about it? There are efforts like interpretable machine learning or explainable artificial intelligence (funded by the likes of the US Department of Defense) and countless studies that attempt to understand how algorithms work. To give an example: a group of researchers from Amsterdam showed how differently algorithms use pixels to decide what’s in a picture.
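One popular way to peek into a black box is occlusion sensitivity: cover part of the input and see how much the prediction drops. The toy ‘classifier’ below just sums a patch of a tiny image, standing in for a real model; the point is the probing technique, not the model.

```python
# Occlusion sensitivity on a toy model: zero out each pixel in turn and
# record how much the classifier's score drops. Big drop = important pixel.
def classify(image):
    """Toy score: how strongly the 'model' responds to the bright patch
    in the top-left 2x2 corner of this 4x4 'image'."""
    return sum(image[r][c] for r in range(2) for c in range(2))

def occlusion_map(image):
    """Per-pixel score drop when that pixel is zeroed out."""
    base = classify(image)
    drops = []
    for r in range(4):
        row = []
        for c in range(4):
            occluded = [list(rw) for rw in image]
            occluded[r][c] = 0
            row.append(base - classify(occluded))
        drops.append(row)
    return drops

img = [[9, 9, 0, 0],
       [9, 9, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 1, 0]]
heat = occlusion_map(img)
print(heat[0][0], heat[3][2])  # the corner pixel matters; the stray one doesn't
```

Heat maps like this are one of the few explanation formats we could realistically put in front of users — which makes them a design problem, too.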

This is an area where we as designers are not in the driving seat. But whenever you are tasked to design something that is touched by a self-learning algorithm, ask yourself (and your developers) a few questions: How can we explain — in the simplest way possible — what is going on? Is there a way to know what the algorithm does? If not, can we at least explain how it learnt what it does? And most importantly: does a user deserve to know this? (There are some interesting approaches for displaying what algorithms do.)
(I think this is also important more generally — if we can’t understand what our algorithms do, how likely is it that we’ll understand the brain once we get even better at simulating it? That’s something the European Union has bet just shy of €1 billion on — but that argument belongs elsewhere. 🤓)
Can we learn to focus on real user needs?
The development of groundbreaking new algorithms is almost always driven by pure technological feasibility and the promise that technological advancements are enough to become mad wealthy. Case in point: being fairly good at Atari 2600 games was once worth $500,000,000.
A real user need rarely plays a role in the advancement of machine learning algorithms. Unless you’re Google: they came up with the process they call human-centred machine learning (certainly worth the 13-minute read). They found that a human-centred design approach allowed them to address a real need, guide what needs to be developed and deliver a trustworthy solution with their Clips camera (it’s still Google, so most reviewers found the camera creepy, but that’s another story).
Could a human-centred approach help AI startups to differentiate in future?
Could it also help to channel the current AI hype into useful product development?
I’d believe so. We should sit down with our data scientists, analytics departments or data-savvy developers. Ask them to come a little way towards you: what use cases are they working on? Try to understand their work, and figure out together whether there is potential for added value for users.
But more importantly: go a little way towards them. Try to understand machine learning just as you have a grasp of web technologies. Get an understanding of what’s feasible or, at the very least, conceivable. There are plenty of good materials to start with.
Why am I writing about this?
I’m a 🇨🇭-based UX guy and a neuroscientist by training, and I have made machines learn in (slightly) new ways. Now I’m keen to explore how this CV puzzle of mine might fit together.
I’m yet to dive deeper into each of these aspects — give me a shout if you’d fancy joining that journey. ✊ Thanks for your time!