The noise around artificial intelligence (AI) can be confusing, especially since so many companies, AI projects, and capabilities have exploded onto the scene. Where do we start making sense of it all? Let’s start with a definition.
AI is a sub-field of computer science aimed at the development of computers capable of doing things that are normally done by people — in particular, things associated with people acting intelligently.
Thanks to AI, computers and machines can tell the difference between a chair and a table, or a cat and a dog, just as humans can.
In this part of the series, Preparing for Our Robot Masters, we will explore the two types of AI and what you should know about them.
For some AI researchers, developers and engineers, the goal is to build systems that can act in the same way that people do. This means building systems that think intelligently just like humans.
Others simply don’t care if the systems they build have humanlike functionality, just so long as those systems do the right thing. Alongside these two schools of thought are others somewhere in between, using human reasoning as a model to help inform how we can get computers to do similar things.
Artificial intelligence can be divided into two types: weak AI and strong AI. As explained above, many researchers are satisfied when systems exhibit the intelligence associated with weak AI, while others aim to achieve strong AI.
Let’s explore and understand what weak AI and strong AI are.
An AI system is considered strong when it genuinely simulates human reasoning, in a way that can be used not only to build systems that think but also to explain how humans think.
This means that any system that can act like humans, reason like humans, and explain how humans think is strong AI. Remember the movie Terminator? That is an example of strong AI, because the robot performed and acted just like a human.
It is worth noting that genuine models of strong AI, systems that are actual simulations of human cognition, have yet to be built.
Any work aimed at simply getting systems to work is usually called weak AI: while we might be able to build systems that can behave like humans, the results tell us nothing about how humans think.
Weak AI is relatively easy to spot in that it fails to truly mimic human behavior or intelligence.
One of the prime examples of this was IBM’s Deep Blue, a system that was a master chess player but certainly did not play in the way that humans do, and it told us very little about cognition in general.
There is yet another school of thought that brings both strong and weak AI together.
Balanced between strong and weak AI are those systems that are informed by human reasoning but not slaves to it.
This kind of system is not driven by the goal of perfectly modeling how humans behave; it simply uses human reasoning as a guide. This tends to be where most of the more powerful work in AI is happening today.
Now if we could only think of a catchy name for this school of thought! I don’t know, maybe Practical AI? A good example is advanced natural language generation (NLG). Advanced NLG platforms transform data into language.
Where basic NLG platforms simply turn data into text, advanced platforms turn this data into language indistinguishable from the way a human would write.
By analyzing the context of what is being said and deciding which things are the most interesting and important to say, these platforms communicate with us through intelligent narratives. Examples of this in action include Grammarly, Google Assistant, and Siri.
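To make the basic-versus-advanced distinction concrete, here is a minimal sketch of what a *basic*, template-driven NLG step looks like: structured data goes in, a fixed templated sentence comes out. The function name, field names, and numbers are all illustrative assumptions, not taken from any real platform.

```python
# A toy, rule-based NLG sketch: structured data in, templated text out.
# All names and values here are hypothetical, for illustration only.

def describe_sales(record):
    """Turn one sales record (a dict) into a short narrative sentence."""
    change = record["this_year"] - record["last_year"]
    direction = "rose" if change > 0 else "fell"
    return (
        f"Sales in {record['region']} {direction} "
        f"{abs(change) / record['last_year']:.0%} year over year, "
        f"reaching ${record['this_year']:,}."
    )

data = {"region": "Europe", "last_year": 100_000, "this_year": 125_000}
print(describe_sales(data))
# → Sales in Europe rose 25% year over year, reaching $125,000.
```

A basic system stops at this kind of fixed template. An advanced NLG platform would also decide which facts are worth mentioning at all and vary the phrasing, which is what makes its output read like something a human would write.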
While some of the terms above may be new to you, we have now covered the two types of AI: weak and strong.
It is worth noting that while this series is titled Preparing for Our Robot Masters, that does not mean humans will serve robots in the future (I believe that will never happen). This series is aimed at exploring the progress of AI and what we need to know about it.
The important takeaway is that in order for a system to be AI, it doesn’t have to be smart in the same way that humans are. It just needs to be smart.