While people rely on tone and context to interpret the words they hear, AI-based algorithms are being taught to detect connections between the words themselves and to recognize irony and deliberate lies.
Now intelligence agencies are ready to test the ability of artificial intelligence to detect the human art of sarcasm. The tool, funded in part by the U.S. military, lets analysts scan social media content (comments, posts, forum threads) while filtering out posts that shouldn’t be taken seriously.
A couple of scientists from the University of Central Florida concluded: “Certain words in specific combinations can be a predictable indicator of sarcasm in a social media post, even if there isn’t much context.”
To simplify: certain “trigger” words get more attention from the system and help it identify sarcasm. Words such as ‘just’, ‘again’, and ‘totally’, as well as exclamation marks, make it easier to spot the suggestion of sarcasm in text. The training datasets include posts from social networks (Twitter, Reddit), dialogues, and headlines from The Onion and other news sources.
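To make the idea concrete, here is a deliberately naive sketch, not the researchers’ actual method: a baseline scorer that simply counts the cue words and exclamation marks mentioned above. The trigger list and the scoring scheme are illustrative assumptions only; a real system weighs context rather than surface counts.

```python
# Hypothetical baseline: count surface "trigger" cues as a crude sarcasm signal.
# Trigger list taken from the cue words the article names; everything else is
# an assumption for illustration.
TRIGGERS = {"just", "again", "totally"}

def naive_sarcasm_score(post: str) -> int:
    """Count trigger words plus exclamation marks in a post."""
    # Pad "!" with spaces so it splits into its own token.
    words = post.lower().replace("!", " ! ").split()
    return sum(1 for w in words if w.strip(".,") in TRIGGERS or w == "!")

naive_sarcasm_score("Oh great, my flight is delayed again. Totally awesome!")
# counts "again", "totally", and "!"
```

The obvious weakness, which the article returns to below, is that sarcasm often contains none of these cues, so a keyword counter misses most of it.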
So sarcasm is one of the most difficult forms of communication, and here we’re dealing with the text format, which eliminates voice cues.
How can an algorithm handle this?
As it turns out, this is a job for neural networks.
In a nutshell, the AI is trained to give more weight to some words than to others, depending on what other words appear nearby. The practical significance lies in helping the military understand what’s happening in key areas where they might be operating.
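A minimal sketch of that weighting idea, assuming a toy self-attention scheme: each word’s weight depends on how strongly its embedding agrees with the embeddings of the words around it. The two-dimensional embeddings below are made up purely for illustration and are not from the actual model.

```python
import math

# Made-up 2-D word embeddings; cue words get vectors that align strongly
# with other words, so they attract more weight. Purely illustrative.
EMBED = {
    "i":       [0.1, 0.2],
    "just":    [0.9, 0.8],
    "love":    [0.2, 0.9],
    "waiting": [0.8, 0.3],
}

def attention_weights(words):
    """Weight each word by the softmax of its summed dot products
    with every other word in the sentence."""
    vecs = [EMBED.get(w, [0.0, 0.0]) for w in words]
    scores = []
    for i, v in enumerate(vecs):
        # Compatibility of word i with all other words nearby.
        s = sum(v[0] * u[0] + v[1] * u[1]
                for j, u in enumerate(vecs) if j != i)
        scores.append(s)
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return {w: e / total for w, e in zip(words, exps)}

weights = attention_weights(["i", "just", "love", "waiting"])
# "just" ends up weighted more heavily than "i"
```

Real models learn these embeddings and attention parameters from the training data rather than hard-coding them, but the mechanism of words borrowing importance from their neighbors is the same.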
The work was supported by DARPA (the Defense Advanced Research Projects Agency), which launched a program seeking a “deeper and more quantitative understanding of adversaries’ use of the global information environment”.
Scientists are now building on the results of previous attempts to recognize human emotions; in earlier episodes of the podcast, we discussed the most notable of them. This study improves on those earlier efforts, and the main difference lies in the approach to finding trigger words.
Earlier algorithms were built to look for words suggesting specific emotions, or even emojis. As a result, they skipped expressions that didn’t contain them, so most sarcastic comments were missed. Neural networks were also used before, and they tend to perform better. However, it was still impossible to say for sure how a neural network reached the conclusion that it reached.
Finally, research has gone a little further and AI has come to understand us a little better. What “challenge” will be next in this direction? Will algorithms soon be able to analyze slang and spoken language? Could they also infer a commenter’s mindset from their remarks?
And, on a more personal level, do you think AI can detect your sarcasm?
Also available in audio format here.
Created by Zfort Group.