The ethics of persuasive UX design

Natalie Svoboda · Published in UX Collective · 12 min read · Jun 11, 2020

Newsfeeds that scroll to no end, emails with urgent and non-urgent demands intermixed, notifications that confidently chime anytime and anywhere: these are just a few of the tiresome modern-day user experiences that result from a carefully crafted system of design and business decisions. Unsurprisingly, many people are trying to escape this intrusive technology, either by setting aside their devices temporarily for a ‘Digital Detox Retreat,’ or by replacing them with more minimal ones, like ‘The Light Phone.’

User Experience (UX) designers’ first challenge in the early ’90s was to make the new digital world as useful (it serves a purpose) and usable (it is easy to interact with) as possible. Soon thereafter, businesses set up a foundation for monetary value: quantifying user engagement by measuring screen time, clicks, and shares — the more attention the user dedicates to the experience, the better it is for the business.

As a result, UX designers were tasked with moving past useful and useable into a larger mission: to keep users hooked to their screens, treating attention as a commodity.

UX designers like myself, therefore, use methods that nudge the user to behave in a way that maximizes the attention given to the digital product. Oftentimes these techniques benefit the user (e.g. increasing the likelihood that a user will learn a foreign language), but other times they serve the business instead (e.g. making it harder to unsubscribe, stop scrolling, or close the page), resulting in a society of burnt-out, overstimulated workaholics. As the long-term negative effects of these manipulative experiences come to light, humane technology leaders like Tristan Harris are looking for ways to break free. I believe that to move in a more sustainable direction, we must first understand the underlying mechanics of persuasive UX.

In this article, I review existing works by designers and psychologists to understand persuasive UX design techniques and the behavioral traits they target, organized into three parts:

  1. An analysis of persuasive UX design models
  2. Explanations of exploited cognitive pitfalls
  3. Ideas for the future of persuasive UX design

An iceberg diagram; “Conscious - Thinking” above water and “Subconscious - Feeling: attitudes, motivations, beliefs” below.
Adapted from Freud’s iceberg theory

Models that Guide Persuasive UX Design

Human persuaders (coaches, sales reps, social advocates, trainers, etc.) know that achieving a real change in behavior requires not only a change of thought but also of the underlying feelings. Through various psychological techniques, they seek to modify subconscious attitudes, motivations, and beliefs to achieve the desired behavior.

As the capabilities of computers grew in the ’90s, B.J. Fogg studied the use of technology in persuasion. He identified several advantages that technology has over human persuaders, noting that computers can:

1. Be more persistent than human beings
2. Offer greater anonymity
3. Manage huge volumes of data
4. Use many modalities to influence
5. Scale easily
6. Go where humans cannot go or may not be welcome
— Persuasive Technology, 2003.

It was clear to Fogg that the future of behavior change would lie in the use of technology, and he predicted that we would be able to tap into the attitudes, motivations, and beliefs of more people more quickly, efficiently, and powerfully than ever before. It was during this time that Fogg founded the Stanford Persuasive Tech Lab to study and promote technology-driven behavior change in a new discipline he called persuasive technology.

“I define persuasive technology as any interactive computing system designed to change people’s attitudes or behaviors.” — B.J. Fogg

As the understanding of technology’s capacity for behavior change grew, so did the methods for achieving it; there are now many models for creating a persuasive user experience. Some UX designers use them purposefully, but in my experience, many use them unknowingly, observing the designs of their successful predecessors and copying the patterns. One such model comes from Fogg himself.

Fogg’s Behavior Model

Fogg introduced a model that identifies the conditions under which user engagement successfully results in the desired action. It has since been extensively studied and used as a framework by UX designers. His model states that a behavior occurs when a trigger arrives while the user has sufficient motivation and ability for the task at hand (B = MAT). In the following graph, the curve denotes the minimum product of motivation and ability, such that triggers above the curve succeed, while those below fail.

B.J. Fogg, “Persuasive Technology” (left) // Duolingo, mobile application (right)

In this example (Duolingo):
Motivation level = high (e.g. the user started an exercise)
Ability level = low (e.g. learning a new skill)

This is well illustrated by the UX design of Duolingo, the language-learning app. If the user attempts to quit mid-lesson, a teary-eyed owl avatar appears as an empathy-driven trigger to continue the session. In agreement with Fogg’s model, even though learning a language is hard, the prompt can still succeed if the user’s motivation is high enough, engaging them to act on their goals.

A different example is Pinterest, the image curation app. Fogg’s model predicts that if the user is not particularly motivated to do something, then it must be easy to do for the trigger to succeed in changing behavior. While Pinterest is rarely a high-priority experience, it is very easy and enjoyable to use, resulting in high traffic and engagement.

B.J. Fogg, “Persuasive Technology” (left) // Pinterest, mobile application (right)

In this example (Pinterest):
Motivation level = low (there is not much urgency)
Ability level = high (it is a mindless activity)

As seen above, a lack of motivation can be compensated for by making the experience easier, and vice versa, though at least some of each seems necessary. The short sketch below makes this trade-off concrete.
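
This is only an illustration: the 0-to-1 scales and the threshold value are assumptions of mine, not values Fogg prescribes.

```typescript
// A sketch of Fogg's B = MAT threshold; the scales and cutoff are
// illustrative assumptions, not part of the model itself.
function triggerSucceeds(
  motivation: number, // how much the user wants the outcome (0 to 1)
  ability: number,    // how easy the task is for this user (0 to 1)
  threshold = 0.25    // the "activation" curve from Fogg's graph
): boolean {
  // A trigger fired at this moment succeeds only if the product of
  // motivation and ability clears the curve.
  return motivation * ability >= threshold;
}

triggerSucceeds(0.9, 0.3); // Duolingo-style: hard task, motivated user → true
triggerSucceeds(0.3, 0.9); // Pinterest-style: low urgency, effortless task → true
triggerSucceeds(0.2, 0.3); // indifferent user, hard task → false
```

When motivation is very low, however, even the easiest task may fail to engage the user. This is where the next model comes in.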

Nir Eyal’s Hook Model

Another model for persuasive technology is the Hook Model. As described above, the user might need a bit of extra motivation to return to the digital experience. Nir Eyal devised a self-perpetuating system of variable rewards and investments that ‘hooks’ the user into returning again and again. Variable rewards, the same method used by slot machines, create a lure as a person seeks out the next pleasurable hit of dopamine. The fact that the rewards are variable (irregularly spaced out) increases the user’s engagement time. In turn, the user feels more connected to a product after contributing their time and resources to it. This investment subsequently creates enough motivation for the next trigger to be successful.

To continue with the Pinterest example, the user is rewarded through the visual stimulation of pictures as well as the social connection with their network of friends. Pinning photos to their account is an act of investment that they would have to give up if they stopped using the platform. As soon as they leave, they are primed for a trigger (via an email or in-app notification), having built up enough motivation to return.

The Hook Model diagram: action > reward > investment > trigger in a figure eight shape.
Nir Eyal. “Hooked: How to Build Habit-Forming Products”
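
As a rough illustration only, one pass around the loop might be sketched as follows; the probabilities and numbers are hypothetical and are not taken from Eyal:

```typescript
// Hypothetical sketch of one pass around the Hook loop; Eyal describes
// the cycle qualitatively, so the values here are illustrative only.
interface UserState {
  investment: number; // pins, posts, and follows accumulated on the platform
  motivation: number; // willingness to respond to the next trigger (0 to 1)
}

function hookCycle(user: UserState): UserState {
  // 1. Trigger: an email or in-app notification brings the user back.
  // 2. Action: the user opens the app and scrolls.
  // 3. Variable reward: a pleasurable payoff arrives unpredictably,
  //    the same intermittent schedule slot machines rely on.
  const reward = Math.random() < 0.4 ? Math.random() : 0;
  // 4. Investment: pinning or posting deepens the user's stake...
  const investment = user.investment + 1;
  // ...which leaves them slightly more primed for the next trigger.
  const motivation = Math.min(1, user.motivation + 0.05 * reward + 0.01);
  return { investment, motivation };
}

// Each pass around the loop makes the user a little easier to re-engage.
let user: UserState = { investment: 0, motivation: 0.2 };
for (let i = 0; i < 10; i++) user = hookCycle(user);
```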

These two complementary models for behavior change show how users are unknowingly manipulated through psychology-driven UX design. To understand how persuasive technology works, and how it can lead to burnout, however, one must acknowledge the underlying cognitive processes being exploited.

Exploiting Weaknesses of Human Cognition

The behavior change models described above build on the fact that we are susceptible to various cognitive pitfalls. Many cognitive scientists have gone to great lengths to observe, test, and document the myriad ways in which our brains deceive us. Their findings regarding our ability to focus, anticipate, and make decisions have been used as the basis for persuasive UX design. Exploiting these elements of human nature, however, leads to long-term negative consequences for the user.

1. Humans prefer flashy stimuli.

Daniel Kahneman is one such scientist; he has written about how our mind contradicts itself, distorts data, and misleads us. He describes two systems of thought guiding our brains: the ‘fast’ System 1 and the ‘slow’ System 2. Our minds spend the majority of their time in System 1, which is lazy and automatic and relies heavily on the senses, while System 2 is the less-used rational decision-maker. Marketing, for example, makes a product stand out and catch our attention by appealing to System 1.

Kahneman thinking diagram “System 1 (intuition & instinct, 95%, unconscious)” and “System 2 (rational thinking, 5%, logical)”
Daniel Kahneman, “Thinking, Fast and Slow”

It is no surprise, therefore, that design has often become ‘flashy’ to appeal to our unconscious preference for loud, bright, and/or moving stimuli. The winning experience is the one that catches the most attention — the more badging, chimes, haptic vibrations, bold hierarchy, movement, and bright colors, the better.

“Every 40 seconds, our attention breaks.” — Gloria Mark

This form of attention-grabbing, overwhelming design, however, severely limits our ability to focus. Professor Gloria Mark found through her research that our attention is broken every 40 seconds in a digital environment. How can we focus on a task when we are constantly lured by pop-ups and banners designed to be appealing?

Persuasive UX design exploits our brains’ preference for flashy stimuli, hindering our ability to focus while on a digital device.

2. Humans rely on contextual cues for accurate forecasting.

Apart from being senses-driven, our brain is also an anticipation machine. Our developed frontal lobe helps us forecast, and according to Daniel Gilbert, we think about the future more than about anything else. Our brains constantly take cues from the current environment to ready us for the most likely next outcome. This anticipation manifests as preparatory changes in our physical and mental state (heart rate, muscle tension, alertness, etc.) while allowing us to otherwise stay calm when nothing unusual is expected.

Our digital environment, however, lacks most of the cues we need to accurately forecast future events. We don’t hear our friends approaching us, and we don’t see a remote coworker leave for the day. As a result, we are inadequately prepared for most digital situations — our friends might surprise us as their text interrupts our train of thought, or we might inaccurately anticipate that our coworker will need something from us at any minute. The fact that our devices are always on means that so are we, and this depletes our mental and physical resources. Unfortunately, this state of alertness is valuable for businesses, which look to increase our interactions with our screens.

Persuasive UX design makes it difficult to sign out, unsubscribe, or turn on “do not disturb,” increasing our screen time and leading to mental exhaustion.


3. Humans struggle to make decisions.

A third such weakness in human cognition is our difficulty with decision making. Researchers have noted many predictable decision-making patterns, one of which is a preference for the default option.

Choice architects purposefully design for the default option. Richard Thaler and Cass Sunstein describe this as a ‘nudge’ toward a preferred outcome. Nudges can have a huge impact because most people stick with the default, even in the case of seemingly important decisions. While defaults can make experiences easier, faster, and more enjoyable, they can also take advantage of our propensity to “go with the flow.” Accordingly, businesses often make the default action the one that best serves their own goals.

In the example below, a user of the Amazon mobile app is attempting to cancel a subscription. Amazon leads with “Confirm” and “We’re sorry to see you go,” but the default option, “Keep my subscription,” contradicts these messages. The hope is that the user will, knowingly or unknowingly, still end up choosing the default, which is implicitly suggested and even recommended by being first on the list and visually highlighted. Even though the user can clearly decide for themselves, it is simply easier to go with the default option.

Screenshot of the Amazon UX example, with “Keep my subscription” in primary and “Confirm cancellation” in secondary styles
Amazon, mobile application
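
The pattern is easy to express in interface code. The following is only a sketch with hypothetical type and field names, not Amazon’s actual implementation:

```typescript
// Hypothetical dialog configuration; the types and labels are illustrative,
// not taken from any real codebase.
interface DialogOption {
  label: string;
  style: "primary" | "secondary"; // visual weight
  isDefault: boolean;             // pre-selected and listed first
}

// Business-preferred choice architecture: the user came here to cancel,
// but the visually dominant, first-listed default keeps the subscription.
const cancellationDialog: DialogOption[] = [
  { label: "Keep my subscription", style: "primary", isDefault: true },
  { label: "Confirm cancellation", style: "secondary", isDefault: false },
];

// A user-respecting alternative simply aligns the default with the
// user's stated intent.
const respectfulDialog: DialogOption[] = [
  { label: "Confirm cancellation", style: "primary", isDefault: true },
  { label: "Keep my subscription", style: "secondary", isDefault: false },
];
```

Nothing about the underlying flow changes; only the ordering and visual weight do, and that is exactly where the nudge lives.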

Persuasive UX design may display choices in ways that do not benefit us, altering our decision making, often without our knowledge.

Human cognition is easily exploited. Between flashy stimuli, always-on interfaces, and default options, technology can be decidedly persuasive. The adverse effects of these systems weigh heavily on our productivity, mental health, and trust in businesses. What might come as a relief, therefore, is a new wave of design ethicists: professionals who acknowledge this mess and want to make a difference.

Minimally Persuasive UX Design

While it may seem like persuasive technology causes more harm than good, some designers show how technology can modify behavior in a way that is far less intrusive and even in harmony with our natural behavior. This minimally persuasive UX design could dramatically influence our lives, but in a non-forceful manner, much as Fogg originally intended in the ’90s.

This is exemplified by a type of design called calm technology, which aims to prevent the user from being overwhelmed. Amber Case outlines a calm technology checklist in which “low-resolution” updates (visual, auditory, or haptic) respect human communication patterns and reflect the urgency of the information given.

One such example of a low-resolution update is a tea kettle. It starts by whistling softly and builds over time, matching the growing urgency of its demand for your attention. In contrast, a Roomba vacuum cleaner that has completed its task requires only a low-urgency update: it chimes once and recedes without further distraction. These two objects show how the design of an alert can be adjusted to protect our attention, allowing us to continue with the task at hand.

Illustration of a tea kettle with the subtext “medium-level urgent” and a Roomba vacuum with the subtext “low-level urgent”
Amber Case. “Calm Technology: Principles and Patterns for Non-Intrusive Design”
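
In interface terms, the same principle might be sketched as an alert whose intrusiveness scales with the urgency of the information; the tiers and behaviors below are assumptions for illustration, not items from Case’s checklist:

```typescript
// Illustrative sketch of urgency-matched alerting; the tiers and behaviors
// are assumptions of mine, not taken from Case's checklist.
type Urgency = "low" | "medium" | "high";

function calmNotify(message: string, urgency: Urgency): void {
  switch (urgency) {
    case "low":
      // Roomba-style: a single chime, then recede into the background.
      console.log(`chime once: ${message}`);
      break;
    case "medium":
      // Tea-kettle-style: start softly and build only while ignored.
      console.log(`soft tone, escalating until acknowledged: ${message}`);
      break;
    case "high":
      // Reserved for information that genuinely cannot wait.
      console.log(`persistent alert: ${message}`);
      break;
  }
}

calmNotify("Cleaning finished", "low");
calmNotify("Water is boiling", "medium");
```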

“Technology must become more respectful of people’s time and attention — ideally it should disappear into the background.” — Amber Case

Another advocate for a new application of persuasive technology is David Rose. He imagines connecting ordinary objects in our lives to the Internet to make them “enchanted objects” (eventually creating an Internet of Things). This could mean a complete liberation from our current device usage, where every user need is met in the most unassuming and beneficial way possible. The goal is to make our technology glanceable so that it no longer demands precious mental and digital space. Imagine an enchanted umbrella, like the one depicted below, whose Internet-connected handle lights up when it is about to rain, reminding you to reach for it on your way out the door. This minimal nudge is enough to create behavior change (taking the umbrella) and does not require a push notification on a device that is used for more complex communication.

Image of David Rose’s enchanted umbrella, with light on the handle and digital text along the webbing “rain today, carry me”
David Rose, “Enchanted objects: Design, human desire, and the Internet of things”

As described, some designers are taking a more minimal approach to attracting the user’s attention. However, these ideas currently remain on the periphery, as attention-grabbing digital products are still omnipresent. Thankfully, a movement to focus on user intention rather than attention is slowly gaining momentum. One of the goals of the Center for Humane Technology is to expose how designs hijack our attention and manipulate our choices, in order to change the way technologists build products. The center argues that UX designers should be trustworthy, just as we trust a farmer to provide quality food or a doctor to provide the best treatment.

UX designers have a fiduciary duty to create digital experiences that extend our capabilities and protect us from attention commoditization.

Screenshot of the CHT website landing page (two people on the top of a mountain looking out to the valley below).
Center for Humane Technology, desktop landing page

Designing for a Sustainable Future

As designers, we must notice the patterns that exploit users’ attention, and design for intention instead. Our users will recognize these techniques over time, especially as they are increasingly brought to light. If businesses continue to deceive their users and compete for attention, users will grow tired, lose trust, and eventually abandon their devices. Ultimately, user trust is more valuable than any amount of screen time, clicks, or shares that can be achieved by exploitative UX patterns; it is therefore our responsibility to design mindfully, with the user’s best interests at heart.

Research conducted in partial completion of the Master of Arts in Liberal Studies degree with Professor Lorie Loeb at Dartmouth College.

Works Cited

Case, Amber. Calm Technology: Principles and Patterns for Non-Intrusive Design. O’Reilly Media, Inc., 2015.

Eyal, Nir. Hooked: How to Build Habit-Forming Products. Penguin, 2014.

Fogg, B.J. Persuasive Technology: Using Computers to Change What We Think and Do. Elsevier Science, 2003.

Gilbert, Daniel. Stumbling on Happiness. Vintage Canada, 2009.

Harris, Tristan. Center for Humane Technology, humanetech.com.

Kahneman, Daniel. Thinking, Fast and Slow. Farrar, Straus and Giroux, 2011.

Mark, Gloria. Constant, constant, multi-tasking craziness. Association for Computing Machinery, 2004.

Rose, David. Enchanted Objects: Design, Human Desire, and the Internet of Things. Simon and Schuster, 2014.

Thaler, Richard, and Sunstein, Cass. Nudge: Improving Decisions about Health, Wealth, and Happiness. Penguin, 2009.
