Creating Empathy in an Artificial World

Sherine Kazim
Published in UX Collective · Feb 23, 2017

Why designers need to move on from personal data to personality data.

Ever since I wrote that piece, Emotive UI, about designing intention and reaction for the full spectrum of 32 emotions, one thing continues to plague me: empathy. There’s no doubt that the best experience designers are highly empathetic. They have an incredible ability to interpret and relate to users, which, in turn, helps them create more engaging interactions. Paramount to these experiences is personalization: always giving the impression that each interaction is unique and specifically catered to that particular user. These days, designing with personal data is table stakes, but what about personality data? Is it possible to design for personality in order to create higher levels of empathy?

The Rise of Relationship Data.

MIT Professor Rosalind W. Picard wrote about Affective Computing in 1995 and described it as the ability to simulate empathy. Its premise relies on a machine’s ability to adapt and respond appropriately to human emotions. These emotions are derived from human behavior. By behavior, I mean the ways in which a person communicates aspects of their personality, either through implicit or explicit actions.

Typically, behavior and interaction among humans are mostly implicit: passive emotions and expressions. Subtle cues are manifested through voice, gestures, meaning, and language, all of which form a person’s unique personality. If we downplay the implicit piece and don’t simultaneously take into account the five senses that help us process communication, we can easily misinterpret someone’s behavior, misidentify their emotion, and ultimately miss a connection. Further, without fully understanding how those data points relate to one another and to the message, we end up assuming user intention.

Relationship data stems from our sensory streams working together so we can analyze, understand and emotionally respond to any given situation. For example, if someone uses non-threatening language, while speaking softly and avoiding eye contact, we may infer from those three sensory streams that this person is shy. In turn, we may consider a measured response with non-confrontational verbal and emotional language. If, for whatever reason, we lack confidence in our potential responses, we can seek out more relationship data — content and context — for further analysis and validation.
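To make that loop concrete, here is a minimal sketch in Python of combining a few sensory streams into a single inference, and falling back to gathering more relationship data when confidence is low. The stream names, scoring rules, and the 0.7 threshold are illustrative assumptions, not an actual model of shyness.

    # Each "stream" contributes a vote toward the trait we're trying to infer.
    SHY_SIGNALS = {
        "language": lambda s: 1.0 if s.get("tone") == "non-threatening" else 0.0,
        "voice": lambda s: 1.0 if s.get("volume") == "soft" else 0.0,
        "gaze": lambda s: 1.0 if s.get("eye_contact") == "avoided" else 0.0,
    }

    def infer_trait(streams, threshold=0.7):
        """Average the evidence; below the threshold, go get more relationship data."""
        scores = [score(streams) for score in SHY_SIGNALS.values()]
        confidence = sum(scores) / len(scores)
        if confidence < threshold:
            return {"trait": "unknown", "next_step": "seek more relationship data (content and context)"}
        return {"trait": "shy", "next_step": "respond with measured, non-confrontational language"}

    print(infer_trait({"tone": "non-threatening", "volume": "soft", "eye_contact": "avoided"}))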

The Case for a Master Algorithm.

For empathetic experience designers, data sets are our new palettes. In particular, relationship data, which helps us develop our human intuition, will be at the forefront of machine prediction. With Apple purchasing emotion-focused startup Emotient and facial recognition startup Realface, it appears that our design future will emphasize personality-driven data. This is important because having geographic, contextual, demographic, psychographic, and analytics data (the hallmarks of personalization) won’t be enough anymore. Instead, we’ll have to contend with an increased hunger for human data. We’ll continue to see AI materialize on various physical and digital platforms, allowing us to determine the user’s emotional state far better than any empathetic designer can with just user interviews and audits.

To successfully define personality as it relates to communication, designers will now have to combine four different types of behavioral data:

  • Gestural Data. The way we would identify conversational tone via face and hand motions.
  • Physiological Data. The way we would measure heart rate, blood pressure, and skin temperature.
  • Facial Recognition. The way we would verify a person’s identity and interpret their emotions and expressions.
  • Deep Learning. The way we would understand speech and how language is used.

It’s that potent mix of personal and personality data that will give rise to hyper-customized experiences. It’s a mix that could ultimately help us determine the user’s intention.
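As a rough illustration of what combining those four types of behavioral data might look like in practice, here is one possible shape for a per-interaction record; the field names and the toy agreement rule are hypothetical, not a real schema.

    from dataclasses import dataclass

    @dataclass
    class BehavioralSnapshot:
        gestural: str        # conversational tone read from face and hand motion, e.g. "clipped"
        physiological: dict  # e.g. {"heart_rate": 96, "blood_pressure": "128/84", "skin_temp_c": 33.4}
        facial: str          # emotion label from facial recognition, e.g. "frustrated"
        language: str        # what a speech / deep-learning model made of the words themselves

    def streams_agree_on_agitation(snapshot):
        """Toy rule: two independent streams pointing the same way raise our confidence."""
        return snapshot.facial == "frustrated" and snapshot.physiological.get("heart_rate", 0) > 95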

Let’s pretend that we’re monitoring physiological data and we see that a user’s blood pressure spikes a split second before the opening line of a conversation with a customer service rep. We might assume that the customer is upset, but we would still be uncertain as to why, or what his intention is. Is he angry, nervous, or pressed for time? Will he yell, punch, or intimidate? No idea. For us to understand his intention, we’d have to access a greater portion of his everyday life: everything he interacts with online and offline, so we can determine patterns of behavior. All of those data streams would need to be tracked and analyzed so we could get a sense of his big picture. Only then could we organize appropriate communication and responsibly adjust it to fit his personality. Essentially, personality data is making the case for creating a master algorithm.
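Sticking with that scenario, a hedged sketch of why the spike alone isn’t enough: the same physiological signal maps to very different intentions depending on the behavioral history around it. The event names and rules below are invented purely for illustration.

    def estimate_intention(spike_detected, recent_history):
        """recent_history holds behavioral events pulled from the other tracked data streams."""
        if not spike_detected:
            return "calm"
        if "third_support_call_this_week" in recent_history:
            return "angry about an unresolved issue"
        if "back_to_back_meetings" in recent_history:
            return "pressed for time"
        return "agitated for an unknown reason; gather more data before responding"

    print(estimate_intention(True, ["back_to_back_meetings"]))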

The Need for Better Technology.

Besides creating the master algorithm, companies will need to create emotion databases in order to better understand their users. This will be time consuming because it relies on someone (yes, a human) to determine facial expressions. It’s highly subjective: literally, someone is tagging someone else who is posing and acting out those emotions. That info is then validated by an expert (yes, another human). The issue is that the interpretation is only as good as the actor. It’s difficult to capture spontaneous emotions, their dissipation, and the faint transitions between them. It’s challenging to understand why, measure how, and guess when they’re about to happen. And it’s overwhelming for the people tasked with tagging the emotions of thousands, millions, and eventually billions of users.
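To see why this is so labor-intensive, here is a minimal sketch of what a single entry in such an emotion database might look like: every record needs a human tagger and then a human expert to sign off. The field names are assumptions made for illustration.

    from dataclasses import dataclass

    @dataclass
    class EmotionSample:
        clip_id: str                    # the recorded (often acted) expression
        tagged_emotion: str             # what the human tagger decided they saw, e.g. "surprise"
        tagger_id: str                  # the person doing the subjective labeling
        expert_validated: bool = False  # a second human still has to confirm the tag
        spontaneous: bool = False       # acted vs. spontaneous; the faint transitions are the hard part

    backlog = [EmotionSample("clip-001", "surprise", "tagger-17")]
    awaiting_review = [s for s in backlog if not s.expert_validated]  # at millions of users, this queue explodes
    print(len(awaiting_review))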

Second, let’s be honest, the hardware and software for facial recognition just aren’t quite there yet. Ask anyone in law enforcement and they’d be hard-pressed to disagree. While the technology is passable for identifying broad characteristics, it will have to get better before we can pick up on the subtleties of expression. China’s Face++ is promising, and if we continue to improve these platforms while combining them with AI, facial recognition should prove to be one of the most powerful breakthroughs in technology and essential to determining personality.

Finally, we’re still mastering natural language when it comes to interacting with devices. For some reason, when faced with a machine, humans talk like a machine. When we talk to Amazon’s Echo, we usually say: “Alexa,” [wait for the response indicator] “what’s the weather today?” But when we talk to an actual human near us, we tend to say things like “Hey, what’s it like outside?” No name. No pause. No time. All context is assumed. Interacting with machines is unavoidable, so we need to design them to act and react in a more human-like way: give them unique personalities, ones which complement our own and can adapt to our emotions.

When the Mini Cooper was reintroduced in 2002, one of the most delightful brand experiences was the voice interface. Drivers were able to pick a gender and an accent for how the car’s navigation system would communicate with them. Although the voices were all programmed to give the exact same responses, there was something magical about picking one, about identifying the personality of a passenger that we wanted to join us on our journey. It was a great start, and it’s good to know that empathetic experience designers are still the ones in the driver’s seat.
