An increasing number of companies are expanding their customer service with chatbot solutions based on artificial intelligence. But how do customers actually perceive this “technological” communication? Do they sense the artificial intelligence behind it? And do they find it authentic and helpful to interact with? A new study provides answers.
In normal conversation, there are helpful clues that indicate how a statement should be interpreted. Facial expression, gestures, tone of voice and volume, for example, help indicate whether “Isn’t that great” should be interpreted as praise or sarcasm. A large part of communication today takes place without seeing or even hearing one’s conversation partner. Meanwhile, chatbots are becoming increasingly capable. For example, they are able to recognize annoyance in a spoken or written statement and respond to it. But do they respond correctly?
The study “A Nice and Friendly Chat with a Bot: User Perceptions of AI-Based Service Agents” by Nancy V. Wünderlich and Stefanie Paluch is one of the first to investigate the role of artificially intelligent customer advisers. The two researchers home in on three questions:
How do customers perceive customer service provided by artificial intelligence?
Is it important to them whether customer advisers are human or artificial?
How authentic do they consider the communication – and how does that influence their behavior?
What is important to customers
In an initial study, participants were asked about their experiences with chats via computers and smartphones, as well as virtual assistants, such as Siri. In a further study, participants were asked to interact with a customer adviser whose identity was unknown. The participants were observed and continually asked to express their feelings. Both studies provide valuable insights into what is important to customers in a customer service situation.
In the interviews, participants repeatedly expressed uncertainty regarding the human or artificial nature of the customer adviser. This was difficult for users to determine, and the resulting uncertainty was a source of discomfort. Most of the respondents wanted to know who they were dealing with. Many made it their mission to find out.
As a result, they considered the interaction to be less authentic when they thought they were communicating with a chatbot. This perceived authenticity in turn influenced how helpful and satisfying customers found the service. One participant remarked that a computer tends to give “standard answers”, while another wanted to know that their concerns were being taken seriously. Perceived authenticity therefore has a positive effect on the behavior and attitude of the user.
Man or machine? Identifying the clues
Users determine the nature of the customer adviser in two different ways: using agent-related and communication-related clues. Agent-related clues include visual signals (images), audible signals (such as voice modulation) and identity indicators, such as a name. If these signals are available, users rely on them to gauge the authenticity of the agent. If the vocal response does not match the users’ expectations, they feel betrayed. “I was not sure if I had spoken with a human or a machine,” said one participant in the study. “The answers were okay, but the voice sounded strange and mechanical. To be honest, I was hesitant to ask [for their identity].”
Users want to get to know the customer adviser a little better and obtain additional information, such as a name or picture. This is particularly helpful in a chat situation, where no voice provides an audible signal; such details help users build a stronger relationship with the customer adviser and evaluate their competence.
The second type of clue that customers use in a customer service situation is communication-related: such as sociability, interactivity and being personally addressed by the adviser. Inappropriate emotion, empty phrases or impoliteness were understandably evaluated negatively. Colloquial language, however, was appreciated by users as it indicated a human conversation partner. Users want to be treated as valuable customers and to receive attention and rapid support.
Context-related factors also influence how users perceive the authenticity of their conversation partner. These factors include the user’s self-confidence, their desire for personal contact, and the customer service situation itself: For example, the customer’s evaluation of the adviser matters more in a banking situation than when searching for a product in an online shop.
The study “A Nice and Friendly Chat with a Bot: User Perceptions of AI-Based Service Agents” is only the first step in a long process of research and analysis. In addition to conducting a large-scale quantitative survey of chatbot users, the authors have tested various forms of chatbots, such as avatars, in experimental studies to determine their effect on perceived authenticity. The recent study is available here.