Designing for chatbots: Are you human?

Let’s leave the ambiguity at the door and be transparent. Are we talking to a chatbot or a human?

Caroline
UX Collective

--

An android wearing a suit.
Photo by Morning Brew on Unsplash

I can’t have been the only one to fall into this trap. I’m on a website and I need some help, so I click on the chat feature and, after being connected, fully believe that I am talking to a human…only to be hit with the same canned response over and over and realise…oh…this is a chatbot.

As designers, it might seem obvious to you that your customer is going to be interacting with a chatbot, but they may believe that they are talking to a real person. This can be a hugely jarring and sometimes embarrassing experience when they discover that they are talking to a friendly piece of code!

In this article, we look at best practices and how to be transparent when designing chatbots.

Way back when…

The first-ever chatbot was called ELIZA and was invented by Joseph Weizenbaum in 1966. This chatbot was one of the first programs capable of attempting the Turing Test (a method of inquiry in artificial intelligence for determining whether or not a computer is capable of thinking like a human being).

Weizenbaum was shocked to observe that many test subjects believed ELIZA was a human being, sharing personal problems with it and becoming emotional when chatting with it.

You’re probably thinking: well, that was back in the ’60s, we all know what chatbots are now, so we should be able to spot them. I agree, there is a much wider understanding of chatbots now, and with the right cues we believe we can tell them apart from real humans. Without those cues, however (or with some unexpected ones), it can be difficult to tell. Let’s look at a few of those next.

A conversation between ELIZA chatbot and a test participant.
Source: https://en.wikipedia.org/wiki/ELIZA

We are connecting you…

I’ve seen many interfaces start off a chat scenario by “connecting” the user followed by a short wait. This cue might indicate that we are talking to a human, when in reality, the user has just been connected to a chatbot. As customers, we are used to waiting in queues to talk to advisors. This happens in life when we are queuing up to speak to the cashier in a shop, or when we ring a helpline and get placed in a queue with questionable hold music. All of these experiences lead us to expect that waiting or queuing means waiting for a human interaction.

So when we append a queue system onto a chatbot, we fail to meet the customer’s expectation for that interaction.
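As a minimal sketch of the contrast (all function names here are illustrative, not from any real chat library): the first opener borrows human-support cues like a queue and a wait, while the second answers instantly and names itself as a bot.

```python
# Hypothetical sketch: two ways a chat widget could open a conversation.

def misleading_opener() -> list[str]:
    # Mimics a human-support queue, setting the wrong expectation:
    # waiting implies a person is on the other end.
    return ["You're up next! Connecting...",
            "Hi, how can I help?"]

def transparent_opener() -> list[str]:
    # A bot can answer immediately, so let it, and say what it is.
    return ["Hi! I'm a chatbot. Ask me a question and I'll do my best, "
            "or type 'agent' to reach a human advisor."]
```

The honest version also removes the artificial wait entirely: there is no queue to simulate when the responder is software.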

A chat window that states “You’re up next! Connecting…”

Say the dirty word. Bot!

There are many terms that I see websites use to avoid the dirty word. “Digital assistant”, “Online helper”, even human names that suggest a personality but never mention that the customer is talking to a computer program. Talk about misleading, right?

We need to be open and honest with our customers, and we need to trust them. By deceiving them into believing that they might be talking to a human, we prevent them from getting the most out of the interaction.

We speak to machines very differently from how we speak to humans. With humans, we take care to use niceties and polite terms that we would omit when talking to a machine, where we use more direct language instead. For instance, I don’t bother asking my Google Home if it had a lovely weekend before asking what my schedule is for the day. This might seem laughable, but if we are not transparent about when our customers are talking to bots, I may well ask the assistant how it is before being hit with the embarrassment of realising… I just tried to ask a computer how it was feeling.

Embarrassment is never a good emotion to evoke in a customer and can lead to annoyance and irritation.

In a worst-case scenario, the customer may have a serious problem that they believe is being addressed by a human only to find out that they have wasted time and now need to once again reach out to find human help.

So just say the word. Trust your customers and let them know that they are talking to a chatbot. They will use it for its intended purpose and get better results knowing how to communicate with it.

A chat conversion between a user and a chatbot where the user thinks that they are talking to a human.
When it becomes painfully obvious that you are trying to have a pleasant conversation with a machine.

What’s the problem with all of this?

So what is the problem with this mismatch of user expectations when it comes to chatbots?

Some people might even believe that it is a triumph if you can successfully convince a customer that they are talking to a human. You save money and manpower by having them talk to a chatbot, and the customer walks away with their answers, feeling like they had a real human moment.

While this might work in theory there are a multitude of ways that it can fail, such as:

  • The customer asks a question that doesn’t have a programmed answer
  • The customer uses flowery language that throws off the chatbot’s responses
  • The customer realises that the responses are canned, repeated or robotic-sounding

If any of these happen, then the customer, instead of getting a fuzzy feeling, may feel that they have been intentionally misled.
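The last failure mode above can be sketched in code. This is a hedged, hypothetical example (the `FAQ` lookup and all names are invented for illustration): when the bot is about to repeat the same canned fallback, it stops pretending and honestly offers a human handoff.

```python
# Hypothetical sketch of honest escalation after repeated canned replies.

FAQ = {"opening hours": "We're open 9-5, Monday to Friday."}
CANNED_FALLBACK = "Sorry, I didn't understand that."

def lookup_answer(message: str):
    # Toy intent matcher: keyword lookup against a small FAQ table.
    for key, answer in FAQ.items():
        if key in message.lower():
            return answer
    return None

def respond(message: str, history: list[str]) -> str:
    answer = lookup_answer(message)
    if answer is None:
        answer = CANNED_FALLBACK
    # About to send the same canned reply twice in a row:
    # be transparent and escalate instead of sounding robotic.
    if answer == CANNED_FALLBACK and history[-1:] == [CANNED_FALLBACK]:
        return ("I'm a chatbot and I don't have an answer for this. "
                "Let me connect you to a human advisor.")
    history.append(answer)
    return answer
```

The design choice here is that the escalation message names the bot explicitly, so the customer learns what they were talking to before frustration sets in, not after.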

Looking to the future…

In the near future, AI will advance to the point where it will be near impossible to tell the difference between conversing with a human and conversing with a machine. At that point, we will have to weigh the advantages and disadvantages of letting our customers know which one they are talking to.

But for now, the choice is easy: always let your user know when they are interacting with a chatbot and give them cues that help them understand that they are interacting with a machine.

--


UX Designer with 7 years of experience. Making the internet less annoying.