Chatbot empathy: How do we think they feel?

One thing we’ve learned from building chatbots (digital assistants, conversational UX and so on) is that the success or failure of a bot rarely comes down to a bug or some other kind of software problem.

In fact, it’s not necessarily a failure on the bot’s side of the chat at all. It’s us, the humans in the chat, who decide whether the conversation works or not. Automated conversations are hard to get right because humans normally only converse with other humans, which means your brain expects all sorts of human behaviors from its conversation partner; it’s never just an exchange of words.

Making a successful bot represents a cognitive puzzle: How do you trick your brain into thinking the bot conversation is a real conversation? We’re not talking about tricking the human into thinking the bot is another human (like the famous Turing Test); rather, to unlock the potential of conversational user interfaces, the chatbot has to satisfy a human brain’s expectations of human conversational behavior. That’s harder than it sounds.

Designing effective conversations demands a technical understanding of how humans use language, but also learning from other instances where we trick our brains into feeling empathy and generating emotional responses to things that aren’t human. We do that all the time when we’re watching TV, reading books or playing video games. The challenge is blending functionality with feelings of empathy if the bot is going to succeed; otherwise you end up with bot relationships that finish with an “it’s not you, it’s me” end to the conversation, and nobody likes those, do they?

It starts with the strange nature of human language.

Many creatures make vocalizations, but they tend to be simplistic declarations like “danger” or “leave me alone or I’ll bite you”. Human language, however, is unique in the animal kingdom because it’s recursive, i.e. the act of using language is how we work out what we really want to use language for.

You can recognize the unique recursive function of human language easily in your own chats: If you’ve ever said “What I mean is…” mid-way through a conversation (we all do that sometimes, right?), that’s your brain working out what it really wanted to say after it has already started the process of speaking.

The part of the conversation before your brain decides to get down to business is usually a set of unconscious, emotional chat reflexes like “Hey, how’s it going?” followed by remarks and banter like “Did you see the game last night?” or “It’s been a long week, right?” and the like. Tone and expression also play a role in these unconscious chat interactions, which are a warm-up act before the information exchange gets going with “Anyway, what I wanted to ask you was…” and so on.

It’s all about the neuroscience of empathy

The reason our unconscious emotions play such a big part in the use of language is partly down to empathy. When we chat, we don’t just exchange data, we relate to the person we’re exchanging data with. Without that emotional engagement in conversation, it becomes more like giving and receiving orders in the army. That’s not chat, it’s a different way of communicating altogether.

However, empathy is a very nuanced concept. We often consider the word ‘empathy’ to refer to our ability to relate to other humans in the real world. We’re happy for people, we’re sad for them, we forgive the ones we love and so on, but empathy is also present in a whole load of other things you might not expect, for example, watching TV.

It’s like this: We know TV shows aren’t real life. We know the people on the screen are actors. We know the places they appear to be are just sets, not real. We know they are not in danger (except career danger if the show gets cancelled). We also know the conversations are scripted, not real. It’s a completely artificial representation of reality… so why do we feel scared watching horror movies when we’re safe at home on our sofas? There is no logical reason to expect the big-name celebrity hero character will die in episode 1, season 1 of that kickass new show everyone is excited about, but we’re still on the edge of our seats when it looks like they’re in danger, right? That’s empathy at work.

Empathy tricks your brain into enjoying the story on an unconscious level, not consciously analyzing the data it’s receiving through your eyes and ears. Without that empathy, we’d watch TV like my dog, who barks at other dogs on TV because as far as she’s concerned, it’s a real dog in a box on the wall in her territory, and she’s not happy about it.

Psychologists and neuroscientists don’t fully agree on the specifics, but it’s reckoned that empathy has something to do with ‘mirror neurons’ – structures in the brain that are stimulated to mimic the feeling of experiences we see other people having. They create sensations that are like watered-down approximations of the feelings the characters in stories are experiencing. Again, test this on yourself. Watch a really good movie action sequence and try to stop your pulse rate from rising. It’s harder than you think.

What’s that got to do with chatbots?

Simple. For a chat interface to be effective, it needs to feel like a real chat. And because real chats occur between humans, that means the bot needs to feel human too. A really effective conversation design process needs to design simulated human attributes into bots, to feed our empathy.

Successful bots use language like we do, recursively, appearing to work out what the conversation is for after the chat begins. A good bot design doesn’t assume that because it’s a customer service lost-password bot, it should start by asking you “Please authenticate your identity so I can reset your password”. No. It needs to start like we do, by saying “Hello, how are you today?” or something human like that. It also needs to replicate the thing that makes human conversations interesting: the unexpected remark.

Consider this (simplified) example script:

Bot: “Hello, how are you today?”

<asks about you, your brain recognizes that from human chats.>

Me: “Hi, I’m good”

<empathy response even though I know the bot doesn’t really care how I am>

Bot: <thumbs-up emoji> “So how can I help?”

<recursive language use – it appears to be working out why you are chatting, even though it’s called “password reset assistant” and it already knows. Also, the emoji is unexpected, simulating real banter-style chat.>

Me: “I lost my password”

<back to completing user goal>

Bot: “Have you tried looking under the sofa?”

<You have to be careful with banter, but this joke deepens the feel of a human chat in your unconscious emotional brain>

Me: “I need a password reset”

<repeating the request signals the bot needs to get on with it, just as it would with a chatty human assistant>

Bot: “Sure, give me a second to check your account. What’s your email?”

<Simulated thinking – asking for time makes it feel like the bot is thinking>

Me: <gives email address>

<bots are great for facilitating data capture>

Bot: “Okay, this won’t take long. Don’t worry, I’ve done it before”

<talks about how you’re feeling, and about itself, more empathy building>
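
To make that flow concrete, here is a minimal sketch of how the example above might be wired up as a simple state machine. It’s purely illustrative and written in Python; every name in it (PasswordResetBot, open, handle, the state labels) is an assumption made for the sake of the example, not a description of how any particular bot platform works.

# Illustrative sketch only: the password-reset conversation above as a
# hand-rolled state machine. A real bot would normally sit on a dialogue
# platform rather than a chain of if-statements.

import re

EMAIL_PATTERN = re.compile(r"[^@\s]+@[^@\s]+\.[^@\s]+")


class PasswordResetBot:
    """Greets, small-talks, banters, then gets on with the password reset."""

    def __init__(self) -> None:
        self.state = "small_talk"
        self.email = None

    def open(self) -> str:
        # Open like a human: ask about the person, not for their credentials.
        return "Hello, how are you today?"

    def handle(self, message: str) -> str:
        if self.state == "small_talk":
            self.state = "goal"
            # Recursive language use: ask what the chat is for, even though
            # the bot is literally called "password reset assistant".
            return "\U0001F44D So how can I help?"

        if self.state == "goal":
            self.state = "banter"
            # One unexpected remark to simulate real banter-style chat.
            return "Have you tried looking under the sofa?"

        if self.state == "banter":
            self.state = "collect_email"
            # The repeated request signals "get on with it", so we do.
            return "Sure, give me a second to check your account. What's your email?"

        if self.state == "collect_email":
            match = EMAIL_PATTERN.search(message)
            if not match:
                return "Hmm, that doesn't look like an email address. Could you try again?"
            self.email = match.group()
            self.state = "done"
            # Talk about how the user is feeling, and about itself.
            return "Okay, this won't take long. Don't worry, I've done it before."

        return "All done. A reset link is on its way to your inbox."


if __name__ == "__main__":
    bot = PasswordResetBot()
    print("Bot:", bot.open())
    for line in ["Hi, I'm good", "I lost my password", "I need a password reset", "me@example.com"]:
        print("Me: ", line)
        print("Bot:", bot.handle(line))

In practice the small talk, the banter and the simulated “thinking” would come from your conversation design rather than hard-coded strings, but the point stands: they are designed steps in the flow, not decoration on top of it.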

The net effect is to trick your brain into thinking it’s having a conversation with a human, in the same way TV shows trick us into immersing ourselves in the story. Stimulating these types of human emotional responses is crucial if the chat is to work; otherwise the conversation feels wrong. It won’t feel like a proper chat any more than pressing down the lever on the side of a toaster feels like you’re chatting with it about cooking your toast.

Designing empathy into conversations is also a recursive process.

Building a chatbot starts the process of conversation design, but it’s the process of studying the bot’s ability to engage users emotionally in the chat that helps you improve the conversation design in the next iteration. And then that process loops round again, and again. So you could say that designing conversations is really a design process that uses conversation design to improve your ability to design conversations.
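
That next iteration needs something concrete to react to. As a purely hypothetical illustration, a sketch like the one below could scan chat transcripts for simple friction signals, such as users repeating themselves or going quiet after a bot turn, that hint the conversation stopped feeling like a chat. The function name, the signals and the transcript format are all assumptions made for illustration, not a Labs methodology.

# Hypothetical sketch: mine transcripts for friction signals that feed the
# next round of conversation design. Signal names and heuristics here are
# illustrative assumptions, not a standard metric set.

from collections import Counter


def friction_signals(transcript: list[tuple[str, str]]) -> dict:
    """Count simple signs that the chat stopped feeling like a chat.

    transcript is a list of (speaker, message) pairs, e.g. ("user", "hi").
    """
    user_turns = [msg.strip().lower() for who, msg in transcript if who == "user"]
    # How many times the user had to say the same thing again.
    repeats = sum(count - 1 for count in Counter(user_turns).values() if count > 1)
    # The bot spoke last and the user never replied.
    abandoned = bool(transcript) and transcript[-1][0] == "bot"
    return {
        "user_turns": len(user_turns),
        "repeated_requests": repeats,
        "abandoned": abandoned,
    }


if __name__ == "__main__":
    chat = [
        ("bot", "Hello, how are you today?"),
        ("user", "I need a password reset"),
        ("bot", "Have you tried looking under the sofa?"),
        ("user", "I need a password reset"),
        ("bot", "Sure, what's your email?"),
    ]
    print(friction_signals(chat))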

That might sound like a “whoa, dude” statement, but it’s not really; in fact, that’s how most design processes work. It’s the essence of prototyping, versioning and releasing updates. The difference with conversation design is how much of the design process is driven by issues that relate to the human world beyond the core function the software is designed to fulfill. It’s not merely a task-focused challenge, it’s an emotional empathy challenge as well.

Conversation design takes the traditional role of psychology and anthropology that’s ever present in UX research, then adds a big slice of linguistics on the side and moves the design process deeper into the realms of cognitive science than other UX tasks. It’s a highly nuanced process that means you’ll spend more time thinking about the use of language and developing a personality for the bot than you will engineering the software that runs it.

In a way, it means the UX team that’s used to studying how the user feels during an interaction has a new question to ask… “And how does the bot feel about that?”

 

 

Thanks to Labs colleagues Philip Say, Lauren Nham and others for their collective opinion and expertise.

Director of Design Research

Sutherland Labs