Designing algorithm-friendly interfaces

Maximillian Piras
Published in UX Collective · 7 min read · Nov 18, 2020
[Illustration by Maximillian Piras: a smartphone with an algorithm spilling out of it]

A designer must be intricately familiar with her materials. In the past this meant understanding the nuanced properties of woods, metals, printing presses, & eventually pixels. Today’s digital designers must work with a much more intangible material: an algorithm.

Algorithms were once comparatively simple sets of rules an application followed to accomplish tasks, such as displaying posts by people you follow. Now they’ve evolved with artificial intelligence into infinitely complex fractal processes often beyond human comprehension. They power most of our daily experiences, yet the majority of design literature on this new norm focuses on whether these robots will replace us. Instead, let’s discuss how designers can better assist their engineering counterparts by reframing design decisions to amplify algorithmic performance.

User-centered design is no longer enough: the interfaces of the future must be easy for people to use & easy for algorithms to analyze.

[Illustration by Maximillian Piras: an algorithm with a smiling face in the middle of it]

The needs of algorithms

Algorithms are responsible for most content surfaced in our digital products: posts populating social feeds, shopping suggestions in digital carts, & phrase recommendations in email drafts. They succeed by showing us what we want, when we want it — just like a helpful assistant or store clerk. Self-proclaimed ‘humanist technologist’ John Maeda explains their goal in his latest book by likening it to the Japanese custom of ‘omotenashi’: anticipating what the customer wants without asking.

However, algorithms are not a solo act. They must be harmoniously paired with intelligently crafted interfaces in order to succeed.

Purpose & process

Most algorithms focus on automatically detecting patterns in data & subsequently making relevant recommendations. This is achieved by pairing a specific dataset with analysis dimensions to create what is referred to as a model. The model is then trained by continuously feeding in more data over time, which in theory improves its accuracy. The output is often used to personalize a product: customizing each user’s experience.

“More personalization in the user experience usually means more relevance for users, which leads to better conversion rates.” Fabricio Teixeira, UX Collective

This explains why data is the new gold. But the originality of most companies’ value propositions means there is rarely a robust public dataset readily available to efficiently train their models.
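To make the dataset → model → continuous-training loop concrete, here’s a deliberately tiny sketch in Python. It is not a real recommender; the class, weights, & update values are illustrative assumptions that only mirror the process described above:

```python
from collections import defaultdict

class ToyRecommender:
    """Toy model: one learned weight per (user interest, item tag) pair."""

    def __init__(self):
        # The "model": a dataset crossed with analysis dimensions.
        self.weights = defaultdict(float)

    def train(self, interactions):
        """Feed in more data over time; each call nudges the model further."""
        for user_tags, item_tag, engaged in interactions:
            for tag in user_tags:
                # Positive engagement pushes the weight up; its absence pushes it down.
                self.weights[(tag, item_tag)] += 0.1 if engaged else -0.05

    def score(self, user_tags, item_tag):
        """Personalization: rank an item against this particular user's interests."""
        return sum(self.weights[(tag, item_tag)] for tag in user_tags)

model = ToyRecommender()
model.train([({"design", "ai"}, "ux", True), ({"design"}, "crypto", False)])
print(model.score({"design", "ai"}, "ux"))  # higher score -> surface this content
```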

[Diagram: an algorithmic feedback loop (source: Maximillian Piras). Engagement → signals → data → model training → personalization → filtering → personalization → repeat]

Feedback loops & signals

To train a novel model, many companies must act like an ouroboros, turning their product into a data-collection mechanism that simultaneously uses the results to improve itself. Within this feedback loop, relevant user interactions are tracked as data signals: anything from button taps & gestures to an absence of action altogether.
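Concretely, each of those interactions can be captured as a structured event the model later learns from. A minimal sketch, assuming a hypothetical event schema (the field names are mine, not from the article):

```python
import time

def make_signal(user_id, item_id, kind, value=1.0):
    """Package one user interaction as a data signal for the feedback loop."""
    return {
        "user": user_id,
        "item": item_id,
        "kind": kind,    # e.g. "tap", "swipe", "no_action"
        "value": value,  # strength of the signal
        "ts": time.time(),
    }

signals = [
    make_signal("u1", "post42", "tap"),
    # The absence of action is itself a signal, logged explicitly:
    make_signal("u1", "post43", "no_action", value=0.0),
]
```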

“The fact that you linger on a particular image longer than the rest can imply you have an interest in it. Or the fact that you have started typing something and then turned around and left the field incomplete indicates hesitation.” John Maeda

A well-designed interaction is intuitive but also separates signal from noise.
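The subtler signals Maeda describes can be derived from that same event stream. Here’s a hedged sketch of turning dwell time into an implicit interest score while discarding noise; every threshold below is an illustrative assumption:

```python
def dwell_signal(visible_ms, min_ms=500, cap_ms=10_000):
    """Convert how long an item stayed on screen into an interest score.

    Very short dwells are treated as noise (the user likely just scrolled
    past), so they yield no signal at all.
    """
    if visible_ms < min_ms:
        return None                          # noise, not signal
    return min(visible_ms, cap_ms) / cap_ms  # 0..1 implied interest

print(dwell_signal(120))   # None: too quick to mean anything
print(dwell_signal(4000))  # 0.4: the user lingered
```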

[Illustration by Maximillian Piras: a smartphone app with sequential screens labeled as positive or negative signals]

Algorithm-friendly design

The term ‘algorithm-friendly design’ was coined by Eugene Wei, a product leader formerly at Amazon, Hulu, & Oculus, to describe interfaces that efficiently help train a model:

“If the algorithm is going to be one of the key functions of your app, how do you design an app that allows the algorithm to see what it needs to see?”

This explains the myriad interactions that exist solely to gauge user sentiment, such as Reddit’s downvoting or Tinder’s card swiping — they’re useless in isolation but very valuable to algorithms.
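In code, such sentiment-gauging interactions reduce to a simple mapping from UI events to training labels. The interactions below are the real product examples; the weights are assumptions:

```python
# Map explicit UI interactions to labels a model can train on.
SENTIMENT_LABELS = {
    "upvote": 1.0,        # Reddit-style approval
    "downvote": -1.0,     # Reddit-style disapproval
    "swipe_right": 1.0,   # Tinder-style "like"
    "swipe_left": -1.0,   # Tinder-style "pass"
}

def label_for(interaction):
    # Unknown events carry no sentiment either way.
    return SENTIMENT_LABELS.get(interaction, 0.0)
```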

TikTok’s innovative interface

As artificial intelligence undergoes breakneck advances in accordance with Huang’s law, more elegant design solutions are emerging to evolve the paradigm of providing algorithmic visibility. Today’s most mythical algorithm, TikTok’s, utilized its interface to quickly unlock troves of user data for highly competitive content recommendations. Counterintuitively, it did so by employing one of design’s deadly sins: adding friction.

[Animation: comparing the algorithmic analysis efficacy of TikTok’s feed with Instagram’s; TikTok provides cleaner signals (source: Maximillian Piras)]

The design decision to show only one fullscreen video at a time cleanly localizes all signals on how content is received. Compare this to the medley of distractions around content in Instagram’s feed & it’s easy to see the difference in ability to collect good data — which explains Instagram Reels.

In most feeds we can swipe with varying degrees of intensity, allowing us to instantaneously skip past tons of content without telling the algorithm why. This convolutes the analysis:

  • Was this content scrolled past too quickly to register?
  • Was the preview only partially in frame?
  • Was there distracting content above or below?

Constraining the scroll interaction makes it a highly effective interpreter of user sentiment. The real beauty of this solution is its invisible downvote button: a swipe can be cleanly counted as a negative signal when paired with an absence of positive engagement.
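Here’s a sketch of how that invisible downvote might be computed, assuming a fullscreen, one-video-at-a-time feed; the watch-ratio threshold & field names are illustrative, not TikTok’s actual logic:

```python
def classify_view(watch_ms, video_ms, liked, shared, min_ratio=0.25):
    """Turn a single fullscreen video view into a clean training signal."""
    if liked or shared:
        return "positive"
    if watch_ms / video_ms < min_ratio:
        # A quick swipe away with no positive engagement reads as a downvote.
        return "negative"
    return "neutral"

print(classify_view(1200, 15000, liked=False, shared=False))   # negative
print(classify_view(15000, 15000, liked=True, shared=False))   # positive
```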

Friction removes friction

Although this design decision adds friction initially, over time the opposite becomes true. Improved personalization eventually reduces the number of recurring actions required, thanks to the compounding interest of good data. In this light the traditional approach actually seems much more cumbersome, as Wei exemplifies with Twitter:

“If the algorithm were smarter about what interested you, it should take care of muting topics or blocking people on your behalf, without you having to do that work yourself.”

A well-designed onboarding flow could easily minimize the perception of upfront friction until the personalization threshold kicks in.

[Illustration by Maximillian Piras: an algorithm and a human seeing together]

The algorithmic observer effect

As documentaries like The Social Dilemma trend, many people are increasingly suspicious of how apps misuse data & manipulate behavior. Awareness of the algorithmic gaze is altering user engagement: some people may hesitate to click certain buttons for fear their signals will be misused, while others may take superfluous actions to confuse nosy algorithms.

If users do not trust a product, then a product cannot trust its data.

How to introduce an algorithm

When Cliff Kuang, the former Director of Product Innovation at Fast Company, interviewed the Microsoft team responsible for building AI into PowerPoint, they shared a key realization:

“Unless the human felt some kind of connection to the machine, they’d never give it a chance to work well after it made even one mistake.”

This insight came from comparing fully autonomous virtual assistants with others that took initial direction before providing independent suggestions. It turns out that users trust algorithmic experiences they help train, which makes sense: our evaluations are often subjective, & initial suggestions have little user-preference data to draw on.

Letting people steer initial decisions satisfies our emotional needs while giving a model enough time to train itself.
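One way to implement that handoff is to blend the user’s declared preferences with the model’s learned scores, shifting trust toward the model as interaction data accumulates. A sketch under those assumptions (the handoff rate is invented for illustration):

```python
def blended_score(declared, learned, n_interactions, handoff=200):
    """Let the user steer early on, then gradually hand off to the model."""
    trust = min(n_interactions / handoff, 1.0)  # 0.0 = all user, 1.0 = all model
    return (1 - trust) * declared + trust * learned

print(blended_score(declared=0.9, learned=0.2, n_interactions=10))   # mostly the user
print(blended_score(declared=0.9, learned=0.2, n_interactions=400))  # mostly the model
```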

[Diagram: an algorithm transparently displaying its settings to a user, allowing them to make adjustments; transparent algorithms lead to user collaboration (source: Maximillian Piras)]

Transparency as a strategy

On the a16z Podcast, Wei highlights TikTok’s decision to make their algorithmic weighting public by adding view counts to hashtags & utilizing content challenges. This incentivizes creators hoping for outsized views to align their efforts with what the service is amplifying. This behavior was once called gaming an algorithm, but the success of this strategy should reverse that negative connotation. If users willingly fill gaps in datasets when their goals are aligned, we should call it collaboration.

Twitter’s CEO is already considering something similar:

“Enabling people to choose algorithms created by third parties to rank and filter their content is an incredibly energizing idea that’s in reach.” Jack Dorsey

If black box algorithms give us filter bubbles (see Blue Feed, Red Feed) perhaps transparent algorithms can burst them.

In conclusion, algorithms still need humans

Spotify’s Chief R&D Officer, Gustav Söderström, spoke with Lex Fridman about setting user expectations for song recommendations. When people are in discovery mode (feeling adventurous enough for questionable suggestions) Spotify leads with machine learning. But in contexts with little margin for error, it still relies on human curators, because they outperform algorithms:

“A human is incredibly smart compared to our algorithms. They can take culture into account & so forth. The problem is that they can’t make 200 million decisions per hour for every user that logs in.”

To scale these efforts, they’ve developed a symbiotic relationship called ‘algotorial’ where an algorithm follows a human’s lead—sound familiar? It’s a nice reminder of humanity’s indispensability, as we designers realize that helping algorithms succeed is now part of our job — that is, until they come to take it away from us ;)
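As a rough sketch of that ‘algotorial’ pattern (not Spotify’s actual implementation; names & scores are illustrative): a human curates the candidate pool, & an algorithm orders it for each user.

```python
def algotorial_playlist(curated_pool, user_affinity, k=20):
    """Human-led, algorithm-scaled: editors pick the pool, the model ranks it."""
    return sorted(curated_pool,
                  key=lambda track: user_affinity.get(track, 0.0),
                  reverse=True)[:k]

pool = ["trackA", "trackB", "trackC"]         # chosen by a human curator
taste = {"trackB": 0.9, "trackA": 0.4}        # predicted per-user affinities
print(algotorial_playlist(pool, taste, k=2))  # ['trackB', 'trackA']
```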

