
“Look, Dave, I can see you’re really upset about this” — emotions and sentiment in chatbot development


Emotions are an essential part of human conversation, but chatbots kind of suck at handling them. What can we do to improve that? This post describes how to use IBM Tone Analyzer in the Rasa framework.

Joanna Trojak
Photo by Domingo Alvarez E on Unsplash

Talking to a chatbot is, most of the time, like talking to a person with Asperger’s. It gets simple intents and entities; you should use simple sentences and forgive it for misunderstanding you sometimes. But once you turn it up a notch and start using sarcasm, irony, and more complicated intents involving emotions like anger, sadness, or joy, the chatbot has no clue.

Most chatbot frameworks do not take emotions into account. Why is that? According to the paper Utterance-level Dialogue Understanding: An Empirical Study [1]:

Human-like conversational systems are a long-standing goal of Artificial Intelligence (AI). However, such systems’ development is not a trivial task, as we often participate in dialogues by relying on several factors such as emotions, sentiment, prior assumptions, intent, or personality traits.

This observation is illustrated by the graph presented in the paper, where P represents the speaker’s personality, S the speaker’s state, I the intent, E the emotional state, and U the observed utterance.

Figure taken from [1], p. 1

The authors state that:

Speaker personality and the topic always influence these variables.

At a given time t, the speaker uses several pragmatic concepts such as argumentation logic, viewpoint, and interpersonal relationships, which are broadly referred to as the speaker state [1]. A pragmatic concept can be, for instance, sarcasm.

According to Rachael Tatman, we cannot understand sarcasm on the semantic level but only on the pragmatic level, once the broader social context is applied [2]. The intent is formed from the current speaker’s state and the speaker’s previous intent. Intent and the speaker’s state then influence his or her emotions. Those three elements combined produce the speaker’s utterance.

Currently, the most popular application of chatbots is customer engagement and marketing. Although bots do well with FAQs, where the rules are clear and straightforward and the exchange follows a question-answer form, real customer engagement requires higher social skills.

Customers want empathy and understanding. Rachael Tatman observes that we tend to mirror people in conversation when we like them [2]. Chatbots could recognize a sentence’s sentiment and, based on the predicted emotion, change the dialogue.

For instance, a user is anxious because their laptop is not working. The chatbot perceives that and tries to reassure them with sentences like “Everything will be all right” and “I’m sure everything will be fixed”. This exchange shows the user that the bot empathizes with them and is engaged in the conversation.

IBM Tone Analyzer can help recognize emotions in text. It can distinguish between, for example, anger, sadness, and tentativeness. According to the IBM website, Tone Analyzer can be used for social listening, it can help in customer support, and it can be integrated with chatbots.

First, I integrated Tone Analyzer into custom actions. I developed a small Python module that wraps the code provided by the IBM API.

The API is simple to use. I only have to pass text to it, and I get back the predicted emotion and a score, the probability that this emotion was recognized correctly.
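The article’s original module is not reproduced here, but a minimal sketch of such a wrapper, assuming the ibm-watson Python SDK and placeholder names of my own (tone_module, get_moods, the API key and service URL), could look like this:

```python
# tone_module.py — thin wrapper around IBM Tone Analyzer (ibm-watson SDK)
from ibm_watson import ToneAnalyzerV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator("YOUR_API_KEY")
tone_analyzer = ToneAnalyzerV3(version="2017-09-21", authenticator=authenticator)
tone_analyzer.set_service_url("YOUR_SERVICE_URL")


def get_moods(text: str):
    """Return document-level tones for the text, sorted by score (highest first)."""
    result = tone_analyzer.tone(
        tone_input={"text": text}, content_type="application/json"
    ).get_result()
    tones = result.get("document_tone", {}).get("tones", [])
    # Each entry looks like {"tone_id": "fear", "tone_name": "Fear", "score": 0.83}
    return sorted(tones, key=lambda t: t["score"], reverse=True)
```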

This method is used in the custom action file, where I have action_mood, which predicts a mood when the user tells the chatbot about their problem. The chatbot then offers reassurance and a solution to the problem.

Our module returns the predicted moods, and based on them we set slots that we can use in the following intents with that particular user. The API may return more than one mood for the user’s text, which is why we keep a primary and a secondary mood. If one of those moods is, for instance, fear, we try to make the user less afraid. Later we check the problem slot, which tells us what type of problem the user is facing, so after the emotional reassurance we provide some advice (see the sketch below).
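The exact action code is not shown in this post, but a sketch of such a Rasa custom action, assuming slot names primary_mood, secondary_mood, and problem and the hypothetical get_moods helper from the module above, might look like this:

```python
# actions.py — Rasa custom action that reads the user's mood and reassures them
from typing import Any, Dict, List, Text

from rasa_sdk import Action, Tracker
from rasa_sdk.events import SlotSet
from rasa_sdk.executor import CollectingDispatcher

from tone_module import get_moods  # wrapper sketched above


class ActionMood(Action):
    def name(self) -> Text:
        return "action_mood"

    def run(
        self,
        dispatcher: CollectingDispatcher,
        tracker: Tracker,
        domain: Dict[Text, Any],
    ) -> List[Dict[Text, Any]]:
        text = tracker.latest_message.get("text", "")
        moods = get_moods(text)

        primary = moods[0]["tone_id"] if moods else None
        secondary = moods[1]["tone_id"] if len(moods) > 1 else None

        # Reassure the user if either detected mood is fear
        if "fear" in (primary, secondary):
            dispatcher.utter_message(text="Everything will be all right.")
            dispatcher.utter_message(text="I'm sure everything will be fixed.")

        # Follow up with advice based on the problem slot, if it is already filled
        problem = tracker.get_slot("problem")
        if problem:
            dispatcher.utter_message(text=f"Let's sort out your {problem} issue.")

        return [SlotSet("primary_mood", primary), SlotSet("secondary_mood", secondary)]
```

With the mood slots set, later stories or rules can branch on them, for example offering extra reassurance to a fearful user before moving on to troubleshooting.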

This approach is quite primitive, but it offers a quick way to incorporate emotions into your chatbots.

  • Human conversation consists of many different components, including emotions, sentiment, prior assumptions, intent, and personality traits.
  • Current chatbot development focuses mainly on intents and entities.
  • The key to understanding sarcasm is not semantics but pragmatics, because we need broader social context to understand the subtext.
  • Customer service chatbots should have some degree of empathy and should be able to mirror a customer’s style to make themselves more likeable.
  • We can incorporate IBM Tone Analyzer to detect emotions in text and use it in Rasa custom actions as a quick shortcut to better-quality chatbots.

[1] D. Ghosal, N. Majumder, R. Mihalcea, S. Poria, Utterance-level Dialogue Understanding: An Empirical Study (2020), arXiv e-prints

[2] Rasa Reading Group: Utterance-level Dialogue Understanding: An Empirical Study (YouTube video)
