We are on the fourth "episode" of our journey through the history of Artificial Intelligence. We outlined the first steps taken by AI, starting with Alan Turing and John McCarthy; we moved on to the following two decades with machine learning, and then to the 1980s of expert systems and the AI winters.
Now we explore the 1990s, with a special focus on Deep Blue and its unforgettable victory against Garry Kasparov, and on the technological evolution that followed.
The 1980s: where we left off
After the advent of first-generation expert systems at the end of the 1970s, the following decade brought second-generation systems and the introduction of the probabilistic model which, unlike the deterministic model, reasons in terms of a cause and its possible effects.
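A toy example of ours (not drawn from any specific 1980s system) may help: where a deterministic rule states "if infection, then fever", a probabilistic model assigns a likelihood to each possible effect of a cause, and Bayes' rule lets the system reason back from an observed effect to its probable cause. All the numbers below are invented.

```python
# Toy illustration of "cause -> possible effects" reasoning; numbers invented.
p_infection = 0.10                 # prior probability of the cause
p_fever_given_infection = 0.80     # the effect is likely, but not certain
p_fever_given_healthy = 0.05       # the effect can also arise without the cause

# Total probability of observing the effect.
p_fever = (p_fever_given_infection * p_infection
           + p_fever_given_healthy * (1 - p_infection))

# Bayes' rule: belief in the cause after observing the effect.
p_infection_given_fever = p_fever_given_infection * p_infection / p_fever

print(round(p_infection_given_fever, 2))  # 0.64: fever suggests, not proves, infection
```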
But this is certainly not the only novelty of the '80s: the backpropagation algorithm, originally conceived by Bryson and Ho in 1969, was rediscovered and put to use for training neural networks.
This made it possible to create an alternative to symbolic models (used by McCarthy and many others): connectionist models, which aim to explain the functioning of the mind through artificial neural networks.
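To make the idea concrete, here is a minimal sketch of backpropagation written for this article, not taken from the historical papers: a tiny one-hidden-layer network learns the XOR function by propagating its output error backwards through the weights. The network shape, data, and learning rate are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, a classic problem no single-layer network can solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights and biases for a 2-2-1 network.
W1, b1 = rng.normal(size=(2, 2)), np.zeros(2)
W2, b2 = rng.normal(size=(2, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0  # learning rate (illustrative value)
for _ in range(10000):
    # Forward pass: compute the network's prediction.
    h = sigmoid(X @ W1 + b1)       # hidden-layer activations
    out = sigmoid(h @ W2 + b2)     # network output

    # Backward pass: propagate the output error toward the input layer.
    d_out = (out - y) * out * (1 - out)    # error signal at the output
    d_h = (d_out @ W2.T) * h * (1 - h)     # error signal at the hidden layer

    # Gradient-descent updates for every weight and bias.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))  # typically converges toward [[0], [1], [1], [0]]
```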
But those were not only years of innovation. They also saw the so-called "AI winters": periods of varying length in which enthusiasm for artificial intelligence declined and, with it, investment in the field.
One of these came in 1987, when DARPA, a government agency of the U.S. Department of Defense and one of the largest funders of artificial intelligence research, which had spent $100 million on the field in 1985 alone, decided to stop investing and to focus only on technologies that seemed more promising.
Kasparov vs Deep Blue: playing chess against a computer
It is 1996 and a chess match is being held in Philadelphia. One of the two players is the World Chess Champion Garry Kasparov, known for being the youngest person ever to win the title, at 22 years and 210 days.
Nothing special so far, except that the other player, "Deep Blue", is a computer designed by IBM to play chess.
That first match was won by Kasparov, but the rematch was not long in coming: the following year, after being upgraded, the IBM supercomputer Deep Blue managed to defeat the world champion and win the match.
The original plan to have a human being and a computer challenge each other at chess dates back to 1985, when doctoral student Feng-Hsiung Hsu designed a chess-playing machine called ChipTest as his dissertation project.
In 1989, his classmate Murray Campbell joined the project, together with other computer scientists such as Joe Hoane, Jerry Brody, and CJ Tan.
After the victory in the 1997 chess match, the architecture used in Deep Blue was applied to financial modelling (including marketplace trends and risk analysis) and to data mining, uncovering hidden relationships and patterns in large databases.
Such a revolutionary victory undeniably generated both a “huge increase in the capability of AI systems” and much criticism about what human supremacy over machines meant and what it entailed.
There was also an attempt to downplay the event, focusing primarily on "the role of the supercomputer designed for the task rather than the sophisticated techniques used by the team of programmers" (Kaplan, 2017).
Weak AI and Strong AI
Already known to scholars, the debate between weak AI and strong AI became even more heated in the 1990s.
The human mind began to be seen as something programmable and therefore replaceable by a machine.
Let's look together at the characteristics of weak and strong AI and their main differences.
Weak AI
Weak AI is a type of artificial intelligence limited to a specific, narrow area. Systems programmed with weak AI can successfully carry out complex tasks, such as translating a text or solving a mathematical problem, that have usually been performed by humans.
Weak AI does not aim to "win" over human intelligence; rather, its focus is on the action: acting as an intelligent subject while nonetheless lacking human consciousness. Human presence remains essential for the functioning of the machine, which is not able to think autonomously.
Siri, Alexa and chatbots are good examples of weak AI.
Strong AI
According to John Searle, a philosopher of language and mind, "the computer would not just be, in the study of the mind, a tool; rather, a properly programmed computer is really a mind."
Strong AI refers to a rational agent capable of performing the same operations as humans and solving problems autonomously, with a level of intelligence equal to or greater than that of human beings.
The technology used is that of expert systems, a definition of which has been given by Jerry Kaplan in the book "Artificial Intelligence: What Everyone Needs to Know": "The common programming approach required that the programmer him- or herself be an expert in the domain, not to mention be readily available to make changes [...] the concept behind expert systems was to represent the knowledge of the domain explicitly, making it available for inspection and modification."
The expert system operates through three components (see the toy sketch after this list):
- rules and procedures: the domain knowledge the system needs in order to function;
- inference engine: an algorithm that simulates human reasoning;
- user interface: where human beings and the machine communicate.
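As an illustration of the first two components, here is a toy forward-chaining inference engine of our own devising; the rule set, facts, and function name are invented for the example and not taken from any real expert system.

```python
# A toy forward-chaining inference engine, in the spirit of an expert system.
# Rules and facts are invented for illustration.

# Knowledge base: each rule maps a set of premises to a conclusion.
RULES = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "see_doctor"),
]

def infer(facts):
    """Apply the rules repeatedly until no new fact can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            # Fire the rule if all of its premises are already known facts.
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# The "user interface" is reduced here to a plain function call.
print(infer({"has_fever", "has_cough", "short_of_breath"}))
# -> includes 'possible_flu' and 'see_doctor' among the derived facts
```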
But if knowledge can be "taught" to the machine, can it replace the human being?
In case you missed the previous episodes on the history of Artificial Intelligence:
The 1950s: from Alan Turing to John McCarthy
The 1960s-70s: Machine learning and Expert systems
The 1980s: Expert systems and the Winters of AI
Do you want more information about Pigro? Contact us!