Cognitive biases in user research — 2

“Language is a form of human reason, which has its internal logic of which man knows nothing.” — Claude Lévi-Strauss (1908–2009)

Abdou Ghariani
UX Collective

--

Following our first part, an introduction to ‘cognitive biases in user research’, here is part 2, focusing on examples seen during user interviews.

Camera dei giganti — Giulio Romano

I recently worked on a project at Pivotal Labs’ Paris office which gave me several opportunities to explore how our unconscious biases affect projects. While each of these biases can seem small and insignificant, their combined effects can move your product in the wrong direction.

Each of us has seen these biases before, many times, in many different contexts, with different outcomes, but they share the same root causes. Sometimes, we, as designers, fall victim to these biases. Other times, the shoe is on the other foot, and the user we are interviewing demonstrates their own biases.

Biases can be understood simply as being motivated by two factors: self-protection and making decisions more efficiently. In a user interview setting, both the interviewer and the user bring their own biases into the room. In general, as designers, we need to remove our biases from our conclusions, and design around the biases our users exhibit.

Based on this Cognitive Bias Cheatsheet, biases can come from these four major causes:

  • Too much information
  • Limited context
  • Limited time
  • Limited memory

Too much information:

Confirmation bias:

It is a phenomenon wherein decision makers have been shown to actively seek out and assign more weight to evidence that confirms their hypothesis. Victims of this bias will routinely ignore or discount the importance of evidence that invalidates their hypothesis.

Designer: “What do you think about this simple navigation bar?”

User: “Clear and easy to use, I think. Simple, as you said.”

Asking a leading question like this nudges the user to reply with a positive statement in order to avoid seeming harsh or negative.

Here is an extension of the confirmation bias:

Designer: “What do you expect when you click here?”

User: “I think a window pops up, but…”

Designer: “Very good. Good answer. Let’s move on.”

Jumping in and cutting the user off is a mistake we have seen frequently. Taking comfort in the fact that our solution has been ‘validated’ misleads the team. In any case, ‘validated’ shouldn’t be the word we use when we approach user interviews.

It doesn’t always feel natural, but leaving some moments of silence for our user to think and process their own thoughts is a good reflex. We should avoid rushing interviews or asking too many follow-up questions before our user has completed their thought. A best practice is to continue asking questions only once our user has completely stopped talking.

Anchoring effect:

Anchoring, or focalism, is a cognitive bias that describes the common human tendency to rely too heavily on the first piece of information offered (the “anchor”) when making decisions. Once that anchor is set, individuals use it as the reference point for subsequent judgments.

The first item we see on a webpage or menu will frame everything we see after it; it is quite difficult to consider all options equally once we have seen the first and had our decision “anchored”. On a restaurant menu, you will often find an expensive bottle as the first item, placed there to influence our perception: the second bottle then seems cheap, even if it is actually expensive, because the first price has anchored our expectations.

“Compared to other solutions in the market, especially the street ad I saw from X, yours is too expensive. Even with all these additional attributes and options, I don’t feel I’m gonna choose yours.”

On our project, we found that changing this user’s perspective was too costly. Rather than trying to work around this bias, we pivoted and focused on helping potential users better understand the service offering, so that they would be more ready to make a decision.

Framing effect:

The framing effect is a cognitive bias in which people react to a particular choice in different ways depending on how it is presented, e.g. as a loss or as a gain. People tend to avoid risk when a positive frame is presented but seek risk when a negative frame is presented. Gains and losses are defined by how the outcomes are described (e.g. lives saved versus lives lost, patients treated versus not treated).

One famous example is the ‘Asian disease’ experiment by Tversky and Kahneman, which helped give rise to prospect theory and can also be used to explain the loss aversion bias.
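
For readers who want the underlying mechanics, prospect theory models this asymmetry with a value function that is steeper for losses than for gains. Here is a minimal sketch; the parameter values are Tversky and Kahneman’s 1992 estimates, not numbers from our project:

$$
v(x) =
\begin{cases}
x^{\alpha} & \text{if } x \geq 0 \\
-\lambda\,(-x)^{\alpha} & \text{if } x < 0
\end{cases}
\qquad \alpha \approx 0.88,\quad \lambda \approx 2.25
$$

With λ ≈ 2.25, losing 200€ feels roughly twice as bad as gaining 200€ feels good, which is why framing an offer around an avoided loss (as in the rewritten message below) tends to hit harder than framing it around an equivalent gain.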

One way we turned this to our advantage was to use absolute numbers instead of percentages when explaining to users how much money they would save.

“Want to change your provider? Join today and get reimbursed for 100% of your current provider’s fees.”

This sentence becomes more powerful when it takes advantage of this bias:

“Want to change your supplier? Join us and avoid paying 200€ in switching fees. It will cost you 0€.”

Limited time:

Loss aversion:

In economics and decision theory, loss aversion refers to people’s tendency to prefer avoiding losses to acquiring equivalent gains.

This bias explains why we tend to stay in a movie theater even if the movie we are watching is awful. When we think about leaving, we confront the fact that we’re “losing” money we already spent. This is closely linked to what we call the “sunk cost fallacy”, which amounts to letting unrecoverable costs influence your current decision-making.

During our project, we heard someone say:

Users, when considering a switch, prefer to stay with their current supplier, even if they are highly unsatisfied (low service quality, high prices, etc.), rather than choosing our product. They are afraid of the potential risks of switching, such as their new service provider having an unplanned outage.

Even though we could demonstrate to our users that this risk was low, it made it very difficult for us to convert them to the service we were offering.

Users will rarely tell you this upfront. We may miss it because we are blind to our own biases, conclude that the high price is the problem to solve, and thus waste weeks on the wrong solutions.

Limited context:

Bandwagon effect:

It is a phenomenon whereby the rate of uptake of beliefs, ideas, fads and trends increases the more that they have already been adopted by others. In other words, the bandwagon effect is characterized by the probability of individual adoption increasing with respect to the proportion who have already done so. As more people come to believe in something, others also “hop on the bandwagon” regardless of the underlying evidence.

Presidential elections are an eloquent example of this effect: many citizens choose to vote for the candidate with the greatest chance of winning (once that candidate shows success in the polls) instead of voting for the one they genuinely prefer.

An example from our project:

User: “The product size matters a lot for most people (including my friends), so it is much more important than other options; it should be really prominent on the page, in my opinion.”

This is exactly why Pivotal famously does not rely on focus groups for user research: people end up being influenced by others in the room and cannot describe their real behavior or preferences.

Limited memory:

False memory:

It is the psychological phenomenon where a person recalls something that did not happen. For example, someone who witnessed a crime and then discussed it with two other witnesses may change their story, and even its key facts.

Loftus and Palmer studied this in 1974: participants recalled seeing broken glass in a video of a car crash, even though no broken glass was present in the video at all. Any participant who recalled seeing broken glass may therefore have had their memory distorted by post-event information, that is, by the language used.

This had strong consequences for our project:

Designer: “What do you remember from your last difficult interaction with customer service?”

User: “It has been a long time… It was kind of painful… Not a competent person, without a good understanding of French… probably offshore. Many friends told me they had the same experience.”

The best practice here is to avoid influencing users by sharing other users’ experiences, and to ask them to describe their own experiences, not those of others.

Extended consequences:

If we dig deeper, we sometimes find that multiple biases are present in an interaction. Perhaps our own biases trigger a bias in our user, leading to more errors in our reasoning. But most importantly, this makes it harder to discover how far our reasoning is from reality. These effects compound and make it harder and harder to make good decisions unless you pay down the “technical debt” in your interview process and improve your own skills.

Given everything above, it is also highly inadvisable to take your users’ advice as gospel and implement it directly. Our task, as designers, is to understand the core pain points, not merely solve the shallow problems easily seen from the surface.

For example, if your user says…

User: “I would appreciate having this button on the navigation bar. It would be far better, in my opinion.”

Rather than complying immediately, we then need to test different solutions against each other. These ideas can (and should) be radically different. We want to focus on the causes of anxiety or frustration, not on subjective opinions; analyzing past experiences is the key to understanding the reasons behind them.

Conclusion:

Biases arise unconsciously in everyone and are more difficult to spot when they come from us. Our interactions with others are increasingly online, and communities often tend to self-select similar individuals, resulting in a monoculture lacking diversity of thought. These biases become more and more present, and it is harder to hold rational opinions when we lack diversity.

We must think hard to become aware of our own biases, and also to identify those of our peers and clients.

Some ways to start improving:

  1. Try to forget your expectations and preconceived ideas about situations and people. Try to start from a blank canvas: we don’t know anything, but we can formulate hypotheses.
  2. As an interviewer, stay silent as much as you can. When users explain their behavior and the different steps of their decision making, they may take some time to formulate their thoughts. That means they are focused and searching their memory, and good things usually come from letting them focus.
  3. Dig deeper into users’ past experiences, behaviors and context. Ask them how they think, and how their thinking unfolds, when they make an important decision.
  4. Take notes on the biases you meet along the way. The more we know about biases, the more proficient we become at identifying them, and the better we can teach our peers and coworkers to do the same.

In the next article, we will focus on biases that manifest during the preparation phase, before the interviews.

Special Thanks to Tim Jarratt for feedback and help in translating.
