crop of abstract painting, ©Angela Madsen 2019

Moving towards ethics: UX designers as radical manipulators

angela madsen
Published in UX Collective
8 min read · May 7, 2019


UX designers should be intrinsically involved in the ethics surrounding what we design. We are in a unique position: we already lead the charge on user advocacy, have some clue about the technology involved, and apply cognitive biases regularly (even if that’s not on the tip of your UX tongue).

‘Manipulation’ is a word loaded with negativity, associated with oily salesmen, self-serving politicians, serial killers, and bad relationships. It is not an easy thing to admit; it’s fairly universal to want to believe we’re good people, and ‘manipulative’ is an invective at best. If we start with the assumption that we may be causing harm and need to prove that we aren’t, we may be moving towards ethics in technology.

Why I call it manipulation

Once upon a time, long ago, in a small-town Midwestern community, I convinced my hard-working mother to make a sandwich after she got home from work and bring it to me at school. A friend saw this in action and immediately named it manipulation.

I owned it in the moment, when my classmate pointed out that she would never have been able to convince her parents to do such a thing. As soon as she said it, I knew that she was right. It was a cultural given that self-sufficiency was the appropriate trump card in any argument in the Midwest, and I was entirely capable of either walking home and back, or skipping dinner. It was a kick to my gut and altered my self-perception, and yet the only way I could discount it was to lie to myself. Convincing my mom was already accomplished, the sandwich on its way. Reality could not be dissuaded.

I was a manipulator.

But it occurred to me that it was a benign manipulation, and I held on to that aspect tightly over the next few weeks, as I watched myself and my world for manipulation.

When I looked at my actions through the lens of manipulation, it was nearly omnipresent. And when I trained the same lens on others’ actions, it was there, too. Manipulation was everywhere, in every disagreement and agreement. To not manipulate was to not interact, or to not have a stable point of view.

Skip forward 30 years, and I’m still leveraging that insight about manipulation.

The idea that I am a manipulator sits on my soul every day, as I leverage my understanding of people to create smoother interactions, modify categorization and workflows to better avoid risk and misunderstanding, and heighten visual impact to get people to look where I want them to look.

Moving towards ethics

The negative connotations around ‘manipulation’ should not be softened, and I still think it’s an appropriate word to describe what UX designers do.

We are leveraging known behaviors, cognitive biases, available data, and visual design to create experiences more in line with goals. That’s manipulation. We are doing it, through software, apps, and websites, on a mass level. That should make us leery of our power, and of placing it unquestioningly in the hands of others for core decisions.

UX designers come in many flavors, with many T-shaped specializations and a huge swath of people trying to understand, compartmentalize, contain, and make what we do another checkbox. Adding another skill set and potential specialization to the mix seems crazy, yet I think we need to do it before the distinctions between UX, UI, IxD, IA, etc. become clear-cut, and the command, “UX that”, becomes a known quantity rather than a major oversimplification.

We can use the confusion to help us become a voice of ethics. We already cling hard to our mantle of user advocate; it’s just another step to become the person who pushes for ethics in the goals.

UX designers function best when we are facing cognitive biases head on, peeling the skin of rationalization off of behavior and choices, and helping others to see the thin varnish of reasonability applied to their processes. In many ways, we crack perceptions to fix processes.

I don’t think we can skim past the fact that we are all class-A manipulators if we want to arrive at a benign code of ethics. The fact that we knowingly manipulate perceptions in most of the stages of UX design* needs to be acknowledged and built into our ethics.

*Where we don’t? UX research. Good UX research is built on the understanding that leading questions, use of loaded phrases and words, cultures too closely adhered to, etc., skew the results.

It is my contention that transparency is the only way to keep ourselves — and the people for whom we are designing — honest. Remove the spin in the design team and stakeholder documentation. Clear goals, clearly stated, all the negatives acknowledged baldly, is the way forward.

Ethics will change. “Manipulation” may not be a dirty word in 5 years. Proving that data was captured for a known reason, and what that reason was, may become the saving grace in a company’s misuse — the difference between fines/reparation and dissolution. Coming from the finance industry, with its strict regulatory oversight, I know that showing that something was questioned, and why the decision that produced an action was made, can be the difference between a letter/warning and a steep fine. It’s not about never making mistakes, and it never should be; it’s about showing that benign interests were involved. Keeping notes, sketches, etc. in readily accessible locations should be enough — I’m not advocating formal, polished documentation for every step.
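To make “keeping notes” concrete, here is a minimal sketch of what one entry in such a lightweight decision log might look like. The Python record and its field names are hypothetical illustrations, not an established format; the point is only that a few plain fields are enough to show later that a question was raised and a decision was made.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DesignDecision:
    """One entry in a lightweight design-decision log (hypothetical schema).

    Not polished documentation: just capture what was questioned,
    who benefits, what was decided, and why.
    """
    feature: str
    question_raised: str   # the ethical concern, stated baldly
    who_benefits: str      # user, business, a third party...
    decision: str          # what the team chose to do
    rationale: str         # why, negatives acknowledged
    decided_on: date = field(default_factory=date.today)

    def summary(self) -> str:
        return f"{self.feature}: {self.decision} (benefits: {self.who_benefits})"

# Example: recording that a shady signup flow was questioned and declined
entry = DesignDecision(
    feature="newsletter signup",
    question_raised="pre-checked opt-in benefits marketing, not the user",
    who_benefits="marketing / list buyers",
    decision="declined pre-checked opt-in; box defaults to unchecked",
    rationale="limited or specious benefit to the user",
)
print(entry.summary())
```

Even a plain-text file or a wiki page with these fields would serve the same purpose; the structure matters less than the habit of writing the question down.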

If we can keep ourselves honest, maybe we can avoid needing lawyers involved in the design process. To this end, there are a few simple questions we can ask.

What is the end goal?

Because we’re manipulating how others consume and digest the product/data, the goal is very important. Allowing ourselves to accept that what we are doing can be used malevolently helps us focus on the goals. The goals should tell us whether we are aiming at being straight manipulative, or benevolently manipulative.

This is covered, at least to a certain extent, when we are writing design problems, goals, and constraints. But it’s a question we shouldn’t stop asking.

Let’s take data capture as an instance, for fairly obvious reasons (Facebook, Google). Data is not a trivial expense, and the impact really mounts once you have enough users to make a profit. It costs in server space, in experienced speed for the user, in the mass bandwidth requirements for carriers, and even in the ecological load of keeping server farms going. There comes a point where populating or capturing more data is no longer about the user.

For instance, Google Maps already populates a huge amount of data, and they’ve started expanding that for the “Explore” bar. Google Maps doesn’t pre-populate all the junk at the bottom of the screen when you open the app because they think the user needs it; they were perfectly happy waiting for a search until recently. And the reality is that they made me question my use (despite my love of their functionality) because it annoys me how much longer it takes to load the data I actually want: driving directions.

Who are we benefiting?

Cognitive biases are strong. We leverage them every day, every hour, because we have to sift through so much information. We live in TMI, on both a cultural and an individual level. If we had to review every experience without any a priori cognition, we’d be even more underwater than we are. We depend on the shortcuts to be able to get things done.

If we’re building a good persona, part of what we’re doing when we’re understanding why they make certain decisions is leveraging the particular constellation of cognitive biases that are likely in play. We’re manipulating the persona to smooth out (or add relevant friction to) a process. That understanding can be used negatively or positively.

Leveraging cognitive biases to make a process smoother is helpful for the user. They are on a path and following through. But if that path leads to something that doesn’t benefit the user — say, signing up for a communication stream that will overflow their inbox and make it harder to see what they really need to see — then we are no longer benefitting the user. We’re benefitting the marketing operation that sells that confirmed email address, the companies that are playing a numbers game to make sales, and organizations (and specifically the decision-makers who helm them) leveraging a stream of individuals who have already proven they are susceptible to following particular cognitive biases.

Where does the money come in?

UX design is not a cheap process. We navigate this daily, whether in our salaries & team construction or in convincing our stakeholders to take the time to invest in UX research. The question will never be if money is making decisions. It’s always in the mix; knowing where helps elucidate where the benefit truly is.

Going back to “goals”, there’s a good chance that Google wasn’t getting the metrics it needed to support advertising when “Explore” data surfaced only the way it was previously leveraged (when a user searched while in Maps). To be able to scale up their fees, keep marketing budgets, etc., they had to impose that data on their users and log the clicks captured from availability bias. They probably made a business decision that accounted for the loss of people like me. By not switching immediately, I’m actually proving that the decision wasn’t horrible. Unless we change the basis of our economy so marketing doesn’t have as much sway, it’s a goal I can dislike, but not really fault.

The newsletter signup I described in the last section IS shady, though. The UX is fairly simple, and can be made more or less negative in process. But who we’re benefiting in that particular scenario is absolutely NOT the user, and that should be questioned. As UX designers, we can ask, and make sure those questions and answers get into the documentation. There is no way, through the UX design, to stop someone from selling the confirmed addresses, or to control to whom they would sell. My hope is that, by documenting that the use case was questioned and denied, when the company goes and does it anyway, the documentation would show that they did it knowing there was limited or specious benefit to the user.

An example of a UX design process that should have caught the unethical qualities of what it was supporting is kids’ games where children can easily pay for ‘extras’ — easier game play, skins, virtual objects, etc. The games were built with kids as the primary persona. The pull toward whatever is easier is a really simple cognitive bias to understand and leverage. Zero friction in that instance was manipulation taken to an immediately unethical level, for the primary benefit of making a steeper profit.

Summary

UX designers are manipulating people as our daily bread. Admitting it is the first step in taking control of it.

Lawyers exist to understand the law, and take an oath to adhere to it. Psychologists exist to understand how an individual person’s mind functions, and take an oath to do no harm. These are people who spend their professional lives becoming the most knowledgeable person in a given situation about their subject. As the professionals most knowledgeable in manipulating how people walk through information, processes, and content, it is incumbent upon us as UX designers to acknowledge the possible harm we can contribute, and to develop our code of ethics accordingly.

I don’t pretend to have a framework in place. I think it’s easier to sense that something smells ethically off than to gain consensus with decision-makers on why it’s off. But if we don’t think about it and talk about it, we accept the status quo. Thinking about our own role in ethics is only one of many steps, but to get there…we have to start.


eternal work in progress. wrangler of data and empathy, understander of process, seeker of giggles.