What will happen to designers if there’s no interface to design?

Brain-computer interfaces and the future of design

René Simonsson
UX Collective


An illustration of a brain and a computer fusing together.

Growing up, I was envious of Matilda Wormwood’s telekinetic powers, whether she was blowing up the TV in rebellion against her tyrannical, oppressive dad (played by Danny DeVito) or fixing herself bowls of Cheerios for breakfast. 23 years later, researchers have developed technology that will enable us to control systems through our brains. But what does that mean for designers? How do we design those interfaces? Are they even designable?

How do BCIs work?

The concept of combining mind and machine isn’t new; in fact, it can be traced back to the 1970s, when BCI research began at the University of California, Los Angeles and gave rise to the term ‘brain-computer interface’. Today’s BCIs are invasive: they are surgically implanted in people with medical conditions such as paralysis, to help them restore basic movement.

But what if there were ways of getting that same detailed data through non-invasive BCIs? That’s where technology is heading today. David Piech, a BCI researcher at UC Berkeley, offers a vivid analogy for the difference between the data generated by invasive and non-invasive BCIs:

“With non-invasive approaches… one analogy is that you’re watching football. But instead of watching inside — you are outside the stadium; you can actually get a… glimpse at what’s going on in the game, every once in a while you hear the crowd but you only know the biggest events. What’s much more interesting, to me, is if, instead of standing on the outside of the stadium you actually maybe go to the top of one section of the stands and now you no longer get the most gross information from the game. You now may see how this particular section is cheering. Now, if you go even closer- imagine sitting next to someone, and now you can have an actual conversation with them, and now know exactly what their reaction is to every play.” [Source]

A host of startups are popping up in hopes of bringing intelligent non-invasive BCIs to the masses (e.g. NextMind, BrainCo, PlatoScience, CTRL-Labs, Thync). The aim of BCIs is to replace our need for keyboards, mice, touchscreens, joysticks, steering wheels, and more; in other words, the physical user interfaces that designers have designed and perfected for decades.

BCIs work because of the way our brains work. Our brains are filled with individual nerve cells called neurons, all connected to one another. Whenever we think, move, or feel, our neurons are working, and their work is carried out through small electrical signals that travel from neuron to neuron. Researchers can detect these electrical signals, interpret what they mean, and use them to control a system. BCIs can’t read your mind precisely enough to know what you’re thinking at any given moment; currently, they’re more about picking up emotional states or which movements you intend to make. A BCI could pick up when someone is thinking ‘yes’ or ‘no’, somewhat like the binary logic a computer runs on today.
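
To make that binary idea concrete, below is a minimal sketch of a decoder that reduces a window of simulated signal samples to a yes/no intent. Everything here is illustrative: real pipelines involve filtering, artifact rejection, and trained classifiers, and the threshold is a made-up calibration value.

```python
# A toy decoder for the binary "yes"/"no" idea described above. It takes a
# window of (simulated) brain-signal samples, compares the window's average
# power against a calibrated threshold, and emits a binary intent. Every
# number here is made up for illustration.
import random

THRESHOLD = 0.5  # hypothetical per-user calibration value


def signal_power(samples: list[float]) -> float:
    """Mean squared amplitude of one window of samples."""
    return sum(s * s for s in samples) / len(samples)


def decode_intent(samples: list[float]) -> str:
    """Map one signal window to a binary 'yes'/'no' intent."""
    return "yes" if signal_power(samples) > THRESHOLD else "no"


# Simulate one second of data at 250 Hz; strong activity decodes as "yes".
window = [random.gauss(0, 1.2) for _ in range(250)]
print(decode_intent(window))
```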

How we interact with systems today

When a user intends to interact with a system, they move through several steps. These steps can be segmented into two ‘gulfs’: the gulf of execution covers the steps between the user’s intentions and the actions required to execute them, while the gulf of evaluation covers the steps between the actual state of the system and the user’s perception (or discovery) of that state. The seven steps are:

1. Form the goal
2. Plan the action
3. Specify an action sequence
4. Perform the action sequence
5. Perceive the state of the world
6. Interpret the perception
7. Compare the outcome with the goal

Don Norman’s seven stages of action, which is how we typically interact with systems today.

In the mature brain-computer interfaces of the future, the gulf of execution becomes much shorter; arguably, all a user needs to do is form a goal, and they will instantly perceive the system’s state. The implication, I think, is that UI design will become redundant, because the artifacts that make up UI design (e.g. buttons, icons) all exist in the gulf of execution; they are there to execute commands. Without those steps in an interaction, those artifacts will no longer be necessary.

Don Norman’s seven stages of action in the context of BCIs, with a much shorter gulf of execution.
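
For readers who like their models explicit, here is a small sketch that encodes the seven stages as data and filters out the execution stages a mature BCI would arguably collapse. The stage names are Norman’s; the choice of which stages collapse is this article’s speculation.

```python
# Norman's seven stages encoded as data, with the gulf-of-execution stages
# that a mature BCI would arguably collapse. The stage names are Norman's;
# which stages collapse is this article's speculation.
from enum import Enum


class Stage(Enum):
    GOAL = "Form the goal"
    PLAN = "Plan the action"
    SPECIFY = "Specify an action sequence"
    PERFORM = "Perform the action sequence"
    PERCEIVE = "Perceive the state of the world"
    INTERPRET = "Interpret the perception"
    COMPARE = "Compare the outcome with the goal"


# The stages that exist to bridge the gulf of execution; buttons, icons,
# and other UI artifacts live here.
GULF_OF_EXECUTION = {Stage.PLAN, Stage.SPECIFY, Stage.PERFORM}

# What remains in a BCI interaction: the goal plus the evaluation stages.
bci_stages = [s.value for s in Stage if s not in GULF_OF_EXECUTION]
print(bci_stages)
```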

The impact that BCIs will have on designers

So what do UX/UI designers need to think about to design for BCIs? In my opinion, there are three significant factors that come into play:

Diving deeper than the existing human-centered approach

First of all, designers still need to adopt a design-thinking approach if they want to keep their designs human-centered; the human brain is… ultimately human (duh!). If anything, empathizing with users becomes the bare minimum, because designers will be required to understand user needs and emotions on a neurobiological level. Frustrations won’t arise because a button doesn’t work, but because a system processes a desired goal differently from what the user “thought”. This requires something deeper than empathy, which is why I propose a “compassionate design” approach: it aims to move designers from “systems thinking” (objectively understanding how a system functions) to “systems sensing” (“walking in someone else’s shoes”), and to cultivate compassion. And compassion is all about dealing with human paradoxes with care and without judgement.

How can compassionate design be facilitated? It requires a deeper understanding, and that can only happen by introducing more disciplines to contribute to a fuller picture. Companies need to hire not only traditional designers for their design teams, but all types of people who understand people: anthropologists, sociologists, neuroscientists, technologists, and so on. As humans and technology become more interconnected, our approaches to understanding humans and how they interact with emerging technologies need to become more interdisciplinary too.

New technology = new design challenges

New design challenges will arise with BCIs. One of these hurdles is the lack of intermediary input tools (such as hands, voice, or screens) to facilitate a command; these are the artifacts that designers have designed for decades. What are designers actually supposed to design?

Another hurdle designers need to consider is the ease of learning and using BCIs (which includes factors such as learnability, familiarity, usability, and responsiveness). BCIs are completely foreign to the vast majority of system users today. Designers need to create BCI experiences that reflect the real world, that are contextual, and that are easy to remember, so that users can control the world around them through real-world commands. What designers should start prioritizing, in my opinion, is guiding systems that teach people how to use BCIs, easing the transition from conventional human-computer interfaces to BCIs.

Ethical implications of BCIs

Finally, and most significantly in my opinion, there are the ethical considerations of BCI design. BCIs are ultimately capable of reading from, and writing directly to, our minds, and this has several ethical implications. What if a BCI gets hacked and alters a user’s thoughts? What if a BCI can implant thoughts into a user’s mind? And what happens if a system reacts autonomously to a user’s subconscious thoughts, processing them into an undesirable (or desirable, but non-consensual) command? These are harmful but plausible outcomes of BCIs, and designers should design BCI experiences that prevent these scenarios, or mitigate them when they do happen.
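
One concrete safeguard worth sketching is a consent gate: a layer that only auto-executes intents decoded with high confidence and flagged as deliberate, and asks the user to confirm everything else. All names and thresholds below are hypothetical.

```python
# A sketch of a consent gate: only auto-execute intents that were decoded
# with high confidence AND flagged as deliberate; ask for confirmation (or
# refuse) otherwise. All names and thresholds are hypothetical.
from dataclasses import dataclass

CONFIDENCE_FLOOR = 0.9  # hypothetical: below this, never auto-execute


@dataclass
class DecodedIntent:
    command: str
    confidence: float  # decoder's certainty, from 0 to 1
    deliberate: bool   # True only for consciously formed intents


def gate(intent: DecodedIntent) -> str:
    if not intent.deliberate:
        return f"ignored '{intent.command}': not a deliberate intent"
    if intent.confidence < CONFIDENCE_FLOOR:
        return f"confirm '{intent.command}'? (confidence {intent.confidence:.2f})"
    return f"executing '{intent.command}'"


print(gate(DecodedIntent("send message", 0.95, True)))   # executes
print(gate(DecodedIntent("delete file", 0.70, True)))    # asks to confirm
print(gate(DecodedIntent("open app", 0.99, False)))      # ignored
```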

Evidently, a mindset shift needs to occur to prepare designers for BCIs: an interdisciplinary, compassionate design approach that is forgiving (as people transition to BCIs) and ethical toward users. What tools will BCI designers need? I don’t know, but that’s not the point. While non-invasive BCIs are still in their infancy, I think the biggest takeaway of this article for a designer is the importance of evangelizing a humanistic approach to design, and of prioritizing usability and users’ needs. With that said, I speculate that the role of the UI designer, for example, will carry far less importance; as we let users carry out commands based on molecular, animalistic, instinctive needs, the shade of red on a button suddenly matters less.
