Combining AI and behavior science can transform experience

To create AI-centric experiences that work, designers must understand people. By leveraging principles of motivation and understanding the context for behavior, designers can make the most of AI’s promise.

Amy Bucher
UX Collective


[Image: pixels coming together to form a human face]

I’m part of a purpose-driven organization with a core philosophy that empathy is at the center of effective, meaningful design. In an environment where we put a high value on understanding a user’s challenges, it may be surprising that we believe artificial intelligence (AI) could play a significant role in designing a user experience that makes people feel seen, cared for, and yes, the recipients of empathy. While machines can’t truly empathize, they are designed by human beings who can. Designers who bring empathy to the creation of technology can ensure that inserting AI into the user experience does not take away the vital human connection essential to great outcomes.

“Empathy” may seem to be a fuzzy concept. It is about putting yourself into the user’s shoes and trying to deeply understand their perspective. We find that using a behavior science lens to direct empathy can increase its design power. My team’s approach to experience design focuses on defining desired behaviors, and then understanding what might motivate our users — or not! — to do those behaviors. We look at the context in which people live and take action, and how the environment facilitates or prevents behaviors. Then we design to take advantage of motivating factors and overcome demotivating ones. This approach gives us the ability to make a meaningful connection with the intended audience whether that’s a chronically ill patient, a banking customer, or a retail store employee.

AI technology has become both more sophisticated and more common in recent years. As it becomes a bigger part of the design toolkit, it’s critical to consider how AI contributes to the overall user experience. It can be tempting to incorporate AI for its advantages — its scalability, its flexibility, and its promise of enhanced outcomes. But if we do so without considering the larger user context and motivation, we run the risk of missing the mark. Our behavior science lens, rooted in empathy, can help us identify where and how AI can be incorporated into the experiences we design so that they are ultimately successful for both business and people.

Understanding Motivation

Our behavior science lens centers on understanding motivation. To do that, we primarily draw from self-determination theory. More than 40 years of research supports self-determination theory’s core tenets: the quality, or source, of people’s motivation matters deeply for its effect on behavior, and supporting people’s basic psychological needs is the key to fostering high-quality motivation. The good news is that much of the research on self-determination theory looks specifically at how to design products and experiences that people find engaging.

First, let’s consider motivational quality. The source of people’s motivation matters. Some forms of motivation are externally imposed. These forms of controlled motivation look like the doctor telling you to eat healthier or your parents lecturing you about saving for retirement. You might feel pressured to try a certain behavior (eating better, saving money), but it’s difficult to maintain that behavior when other competing demands arise (pizza night with friends, buying expensive new shoes). In general, controlled motivation can kick-start a behavior, but it isn’t enough to sustain it over time.

Contrast this with autonomous motivation, which comes from more internally generated sources. An autonomously motivated person can look to their own personal goals, values, or sense of self as a way to drive behavior change. Behaviors become a means to accomplish those goals, live out those values, or be the person they imagine themselves to be. Eating better or saving money becomes much easier when it’s in service of something like a desire to run around with the grandkids or a dream of retiring to a beach community. When it comes to experience design, we look for ways to tap into autonomous forms of motivation, which often requires getting to know our user. (Good news here: AI can help!)

[Diagram: types of motivation arranged from most controlled to most autonomous]
The motivational quality continuum ranges from the more fleeting and vulnerable controlled forms of motivation to the enduring and powerful autonomous ones. Diagram by Aidan Hudson-Lapore.

Once experience designers have identified their users’ personally meaningful motivational sources, they can begin to craft experiences that support basic psychological needs. These include autonomy (making meaningful choices), competence (learning and growing), and relatedness (being connected to something bigger than oneself). Effective design strikes the right balance between making things easy for users and keeping them appropriately effortful, so people can feel a sense of choice, growth, and connection without becoming frustrated or overwhelmed. This is another area where AI has the potential to support good design.

Understanding Experience in Context

Of course, motivation does not exist in a vacuum. It always takes a behavior as an object — people are motivated to do something — and it plays out in a real-world context that can present obstacles, choice points, social pressures, competing demands, and all manner of factors that influence whether the behavior is actually performed and how. Experience designers dig deep on those contextual factors in order to improve the odds that the desired behaviors happen consistently.

We do a lot of formative research on our projects to really understand the specific context in which our specific users might be doing a specific behavior. We all know from being human that context matters a lot. The person who leaves messy dishes on the counter at home may keep a pristine desk at work. We can’t make assumptions about that person’s cleanliness writ large; we have to understand the context and what makes their behavior different in different environments. This is where AI sometimes fails, because the people designing it don’t consider the context that influences people’s behavior.

There are many well-documented failures of AI. There is sometimes a perception that AI is objectively correct because it’s based on algorithms and data, but remember that human beings develop those algorithms and choose that data. When AI fails, it’s very rarely a technological issue. The failures tend to be on the design side: flawed algorithms, poorly chosen data (which may codify historical patterns of discrimination or other biases), or a lack of understanding of users’ behaviors in context.

Let’s take the data piece: AI is usually trained using existing or synthetic data sets. Designers need to be scrupulous about selecting training data that accurately describes their users’ situations, or the resulting algorithms won’t be right for that audience. In the benign version of that error, AI is just off and ineffective. In more pernicious versions, it doubles down on harmful biases. For example, women have historically been excluded from scientific research on heart disease. Using a data set that includes only men to train an AI heart health program would likely be much less effective for women users. I’ve included some suggested readings at the end of the article that dig into many more examples of these types of errors and how designers can be alert to them.
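To make the training-data point concrete, here is a minimal sketch of a representation audit a design team might run before training something like that heart health program. Everything here is a hypothetical illustration: the DataFrame, the column name, and the target shares are invented, and a real audit would examine far more than a single demographic column.

```python
# A hypothetical representation audit. The column name "sex", the groups,
# and the expected shares below are illustrative assumptions only.
import pandas as pd

def audit_representation(training_df: pd.DataFrame, column: str,
                         expected_shares: dict, tolerance: float = 0.05) -> list:
    """Flag groups whose share of the training data deviates from the
    intended user population by more than `tolerance`."""
    actual_shares = training_df[column].value_counts(normalize=True)
    warnings = []
    for group, expected in expected_shares.items():
        actual = actual_shares.get(group, 0.0)
        if abs(actual - expected) > tolerance:
            warnings.append(f"{group}: {actual:.0%} of training data vs. "
                            f"{expected:.0%} of intended users")
    return warnings

# A heart health data set that is 90% male gets flagged for a product
# meant to serve men and women roughly equally.
training_df = pd.DataFrame({"sex": ["male"] * 90 + ["female"] * 10})
print(audit_representation(training_df, "sex", {"male": 0.5, "female": 0.5}))
```

A check like this won’t catch every bias, but it forces the team to state who the intended users are before the data gets baked into an algorithm.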

[Image: multiple streams converging into a single stream]
Algorithms tend to strengthen patterns over time, so starting from problematic data further entrenches biases.

We’re traveling down a two-way street: We’re not just training people to use technology, we’re training technology to relate to people. It’s critical to use the right training materials. This may mean investing in deep research at the outset of design, but it will pay off in terms of efficacy and accuracy. Going forward, getting this human-centric design right is the biggest factor in the success or failure of AI-driven transformations.
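The entrenchment described in the image caption above can be made visible with a toy simulation. This sketch assumes a deliberately crude “greedy” system that only ever surfaces whichever group already dominates its data; real systems are subtler, but it shows how a modest initial skew can harden rather than correct itself.

```python
# Toy feedback loop: a system retrained on its own outputs amplifies the
# skew it starts with. The counts and the greedy rule are illustrative.
counts = {"group_a": 60, "group_b": 40}  # training data with a 60/40 skew

for round_num in range(1, 6):
    # A greedy ranker surfaces only the best-represented group, so that
    # group alone accrues the 100 new interactions logged each round.
    top = max(counts, key=counts.get)
    counts[top] += 100
    total = sum(counts.values())
    shares = ", ".join(f"{g}: {c / total:.0%}" for g, c in counts.items())
    print(f"after round {round_num}: {shares}")

# The 60/40 starting point drifts past 90/10 within a few rounds: the
# initial imbalance is entrenched by the system's own behavior.
```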

Algorithm Aversion

As AI has become integrated into more experiences, researchers have identified a persistent issue known as “algorithm aversion.” Basically, people are wary of having an algorithm make decisions for them. Often people feel that it’s not possible for a machine to understand them the same way a human could. As a result, they interpret AI-generated recommendations as impersonal, and not right for them. Even when evidence suggests an algorithm delivers greater accuracy than a human expert, people tend to prefer having human involvement.

That’s one reason why we don’t necessarily look to design technology-only solutions. There can be enormous power in blending a human touch with technology when AI is involved. Live experts can support AI-generated recommendations by providing rationales, tailoring recommendations, or making adjustments based on user feedback. They can reinforce caring and warmth, and bring human empathy into the experience to build trust with the user. Providing emotional connection and caring is important to a successful experience. (It goes back, in part, to that basic psychological need of relatedness.)

One example of how Mad*Pow worked to overcome algorithm aversion is a connected health solution called ImagineCare. Developed in partnership with Dartmouth-Hitchcock Medical Center, ImagineCare enables patients with specific chronic health conditions to receive 24/7 monitoring and support. The behavioral goals of the program were to help patients better manage their conditions (with specific behaviors for each condition) and to enable clinical staff to identify imminent health crises and intervene before emergency care is needed. Ultimately, the hope was that ImagineCare would improve patient outcomes and lower medical costs.

The project required a simultaneous focus on the high-tech and analog experiences. Patients received an array of internet-enabled devices to help monitor their health depending on their specific diagnoses. Someone with congestive heart failure might get a Bluetooth-enabled scale, while someone with COPD received a connected inhaler and someone with hypertension, a connected blood pressure cuff. The physical device unboxing included coaching on setting up the smartphone app on which patients could review their data and receive feedback and recommendations. Meanwhile, the data was also sent to a dashboard monitored by Dartmouth-Hitchcock nurse case managers. When the algorithm detected data that indicated a potential health crisis based on a patient’s prior history and diagnosis, the nurse case manager received an immediate alert to prompt outreach to the patient.
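The actual ImagineCare algorithms aren’t public, but the pattern this paragraph describes (compare incoming device data to a patient’s own baseline, and alert a nurse case manager when a diagnosis-specific threshold is crossed) can be sketched in a few lines. Every diagnosis rule, threshold, and field name below is a hypothetical illustration, not the real logic.

```python
# A rough sketch of threshold-based alerting, with invented rules.
from dataclasses import dataclass

@dataclass
class Reading:
    patient_id: str
    metric: str      # e.g., "weight_kg" from a connected scale
    value: float
    baseline: float  # derived from the patient's own prior history

# Hypothetical per-diagnosis rules: the metric to watch, and the rise
# over the patient's baseline that should trigger nurse outreach.
ALERT_RULES = {
    "congestive_heart_failure": ("weight_kg", 2.0),  # rapid fluid gain
    "hypertension": ("systolic_bp", 20.0),           # sharp pressure rise
}

def needs_outreach(diagnosis: str, reading: Reading) -> bool:
    """Return True if this reading should alert a nurse case manager."""
    rule = ALERT_RULES.get(diagnosis)
    if rule is None:
        return False  # no monitoring rule defined for this diagnosis
    metric, max_rise = rule
    return (reading.metric == metric
            and reading.value - reading.baseline >= max_rise)

# A connected scale reports a 2.5 kg gain over baseline for a heart
# failure patient, so the nurse dashboard should raise an alert.
reading = Reading("patient-042", "weight_kg", value=82.5, baseline=80.0)
print(needs_outreach("congestive_heart_failure", reading))  # True
```

Note that the comparison runs against each patient’s own baseline rather than a population norm, which echoes the point above that alerts drew on the patient’s prior history as well as their diagnosis.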

[Image: a nurse case manager with a telephone headset at a computer monitor showing patient alerts]
Nurse case managers using ImagineCare saw bold visual alerts letting them know if any patients needed immediate outreach.

By prompting providers to make contact with patients as soon as data suggested a problem, ImagineCare was able to reduce emergency room utilization and save costs. Specifically, Dartmouth-Hitchcock saw savings of $298 per patient per month, primarily due to reduced unplanned and emergency medical visits. But what was more important in many ways was that the blend of technology and human expertise struck the right tone for patients: ImagineCare received a stunning 95% satisfaction rating from pilot participants.

We Can Make AI Matter

The devil, as they say, is in the details. By using a behavior science lens to understand user needs and the context in which they’ll approach an experience, designers can leverage AI in ways that both support desired outcomes and help users feel understood. AI has the power to collect and analyze data that provides a sharper picture of a user and can facilitate a more effective experience — in healthcare, retail, or financial services. By understanding people, their motivations, and any reservations they may have about AI-driven design, designers have the opportunity to create transformational experiences that deliver outcomes that are better for business and better for people.

Further Reading

Criado Perez, C. (2019). Invisible Women: Exposing Data Bias in a World Designed for Men. New York: Abrams.

Eubanks, V. (2017). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York: St. Martin’s Press.

Noble, S. U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press.

O’Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Broadway Books.

Wachter-Boettcher, S. (2017). Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech. New York: WW Norton & Company.



Chief Behavioral Officer at Lirio; formerly VP of Behavior Change Design at Mad*Pow.