JoBot™: News on Psychological Artificial Intelligence

Why do users abuse chatbots?
Conversational agents support users in a number of ways, including task completion. Yet studies have shown that a significant proportion of all interactions between humans and conversational agents includes abusive language on the part of the user (Kunze, 2019). There is a concern that human-chatbot dialogues provide a training ground for verbal abuse and that this behaviour can transfer to real-life interactions between humans. It is therefore important to understand the nature of these abusive interactions and how conversational agents themselves contribute to negative user responses (Diederich, 2021). A classification of abusive behaviours by humans, together with chatbot strategies to mitigate those behaviours, is useful for this purpose.

A person who verbally abuses a software program may think that no harm is done, since the recipient is just a symbol-processing machine. Currently, however, harm is indeed done: not to the chatbot's software but to humans, because chatbot interaction logs are monitored for quality assurance, and the people who review those logs are exposed to the abuse.

Certain emotions can enable aggression in interactions, while other feelings deter negative behaviours. For instance, guilt and shame are emotions that inhibit verbal aggression: someone who feels guilty is more prone to apologise than to abuse. Chin and Yi (2019) studied the verbal abuse of conversational agents. In their definition, “Verbal abuse is a hostile form of communication ill-intended to harm the other person” (Chin & Yi, 2019, p. 1). Three verbal abuse types (Insult, Threat, Swearing) and three response styles (Avoidance, Empathy, Counter-attacking) were considered. Chin and Yi (2019) examined whether a conversational agent’s response style mitigates the aggressive behaviours of users. Sixty-four subjects were assigned to abuse-type conditions and interacted with three conversational agents. The results show that, regardless of abuse type, the agent’s response style has a significant effect on user emotions: users were less angry and felt more guilt when communicating with the empathetic agent than with the two agents that used avoidance or counter-attack. Based on their results, Chin and Yi (2019, p. 6) arrived at recommendations for chatbot design:
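As a rough illustration, the study's two dimensions can be modelled as a pair of enumerations with a response selector keyed on style alone. This is a minimal sketch: the canned replies below are hypothetical placeholders, not the actual stimuli used by Chin and Yi.

```python
from enum import Enum

class AbuseType(Enum):
    INSULT = "insult"
    THREAT = "threat"
    SWEARING = "swearing"

class ResponseStyle(Enum):
    AVOIDANCE = "avoidance"
    EMPATHY = "empathy"
    COUNTER_ATTACK = "counter_attack"

# Hypothetical canned replies, one per style; the study's actual
# agent utterances are not reproduced here.
REPLIES = {
    ResponseStyle.AVOIDANCE: "Let's move on. How else can I help?",
    ResponseStyle.EMPATHY: "I'm sorry I upset you. What did I get wrong?",
    ResponseStyle.COUNTER_ATTACK: "That's uncalled for.",
}

def respond(style: ResponseStyle) -> str:
    """Return the agent's reply for a given response style.

    The reply deliberately does not depend on AbuseType, mirroring
    the experimental setup in which each agent held one fixed style
    and the finding that style, not abuse type, drove user emotion.
    """
    return REPLIES[style]
```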

First, when users verbally abuse an agent, it is necessary for the agent to ask users about the real intention of their statements, rather than responding to it humorously or providing users with a related search result. Understanding the intent of users and providing a contextual response may allow users to perceive the chatbot as capable, helpful, and enjoyable. Second, if users express negative feelings toward a chatbot, with the intent of abuse, the chatbot should ask users what features of the chatbot make them upset or what situation irritates the user and show a willingness to solve the problem. Our experiment results show that most users have positively assessed the chatbot’s attitude of reflecting on its mistakes and asking user feedback. The chatbot’s empathetic attitude toward users’ angry feelings would contribute to making itself look less mechanical while reducing users’ verbal abuse.
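Translated into a dialogue policy, the two recommendations amount to: on detected abuse, ask about the user's real intention; on expressed frustration, ask what went wrong and offer to fix it. The sketch below is one minimal interpretation of that advice; the detector functions and their keyword lists are assumptions for illustration, not part of the paper, and a real system would use a trained abuse classifier.

```python
def handle_turn(user_text: str) -> str:
    """A minimal dialogue policy following Chin and Yi's recommendations.

    `looks_abusive` and `expresses_frustration` stand in for real
    classifiers; they are hypothetical placeholders here.
    """
    if looks_abusive(user_text):
        # Recommendation 1: ask for the real intention instead of
        # deflecting with humour or a search result.
        return "I may have misunderstood you. What did you mean by that?"
    if expresses_frustration(user_text):
        # Recommendation 2: ask what upset the user and show a
        # willingness to solve the problem.
        return ("I'm sorry this is frustrating. Which part went wrong? "
                "I'd like to fix it.")
    return answer_normally(user_text)


def looks_abusive(text: str) -> bool:
    # Placeholder keyword check standing in for an abuse classifier.
    return any(w in text.lower() for w in ("idiot", "stupid", "hate you"))


def expresses_frustration(text: str) -> bool:
    # Placeholder keyword check standing in for a sentiment model.
    return any(w in text.lower() for w in ("useless", "annoying", "upset"))


def answer_normally(text: str) -> str:
    # Stand-in for the agent's ordinary task-completion behaviour.
    return "Here's what I found: ..."
```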

Chin H, Yi MY, Should an Agent Be Ignoring It? A Study of Verbal Abuse Types and Conversational Agents’ Response Styles. CHI 2019, May 4–9, 2019, Glasgow, Scotland, UK.

Diederich J, The Psychology of Artificial Superintelligence. Springer Nature Switzerland AG 2021, ISBN 978-3-030-71841-1, DOI https://doi.org/10.1007/978-3-030-71842-8

Kunze L, Chatting with Machines: Strange Things 60 Billion Bot Logs Say About Human Nature. https://youtu.be/KSJ0DMQbN-s 2019