
History of AI: Deep Blue and Strong and Weak Artificial Intelligence

The innovations of the '90s, from the debate between strong and weak Artificial Intelligence to Deep Blue's victory against the world chess champion.

We have reached the fourth installment of our journey through the history of Artificial Intelligence. We outlined the first steps taken by AI, starting with Alan Turing and John McCarthy; we moved on to the next two decades, with machine learning; and then to the 1980s of expert systems and the winters of AI.

Now we discover the 1990s and the technological evolution that followed.

The '80s: where we left off

After the advent of first-generation expert systems at the end of the '70s, the following decade brought those of the second generation and the introduction of the probabilistic model which, unlike the deterministic model, reasons in terms of a cause and its possible effects.
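To make the distinction concrete, here is a minimal sketch in Python; the causes, effects, and probabilities are invented for illustration, not taken from any real second-generation system.

```python
# Purely illustrative: the causes, effects, and probabilities are invented.

# Deterministic model: one cause -> one certain effect.
def deterministic_effects(cause: str) -> str:
    rules = {"flu": "fever"}
    return rules.get(cause, "unknown")

# Probabilistic model: one cause -> several possible effects, each weighted.
def probabilistic_effects(cause: str) -> dict:
    rules = {"flu": {"fever": 0.7, "cough": 0.2, "no symptoms": 0.1}}
    return rules.get(cause, {})

print(deterministic_effects("flu"))   # fever
print(probabilistic_effects("flu"))   # {'fever': 0.7, 'cough': 0.2, 'no symptoms': 0.1}
```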

But this is certainly not the only novelty of the '80s: the backpropagation algorithm, originally conceived by Bryson and Ho in 1969 and used for learning in neural networks, was rediscovered.

This made it possible to create an alternative to symbolic models (used by McCarthy and many others): connectionist models, which aim to explain the functioning of the mind through artificial neural networks.
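To give an idea of what backpropagation does, here is a minimal Python sketch for a single sigmoid neuron, with an invented input, target, and learning rate; real networks repeat this chain-rule update across many layers.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x, target = np.array([0.5, -1.0]), 1.0   # toy input and desired output
w, b, lr = np.zeros(2), 0.0, 0.5         # weights, bias, learning rate

for _ in range(100):
    y = sigmoid(w @ x + b)                # forward pass
    # backward pass: chain rule for the squared error 0.5 * (y - target)**2
    delta = (y - target) * y * (1.0 - y)  # dLoss/dz at the neuron
    w -= lr * delta * x                   # dLoss/dw = delta * x
    b -= lr * delta                       # dLoss/db = delta

print(round(float(sigmoid(w @ x + b)), 3))  # output moves toward 1.0
```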

But these weren't just years of innovation. There were also the so-called "AI winters", periods of varying length in which enthusiasm for artificial intelligence declined and, with it, investment in the field.

One of these came in 1987, when DARPA, the research agency of the U.S. Department of Defense and one of the largest funders of artificial intelligence research (it had spent $100 million on the field in 1985 alone), decided to stop investing and to focus only on technologies that seemed more promising.

Kasparov vs Deep Blue: chess against the computer

 

[Photo: Deep Blue. Credit: Tom Mihalek/ANSA]

 

It is 1996 and a chess match is being held in Philadelphia. One of the two players is world champion Garri Kimovič Kasparov, who is known for being the youngest person ever to win the title, at 22 years and 210 days.

So far, nothing special, except that the other player, "Deep Blue", is a computer designed by IBM to play chess.

The match is won by Kasparov, but the rematch is not long in coming: the following year, after an upgrade, Deep Blue succeeds in defeating the world champion.

The original project of a computer that plays chess dates back to the previous decade: in 1985, the student Feng-Hsiung Hsu designed a chess-playing machine, called ChipTest, for his thesis.

In 1989 this project was joined by Murray Campbell, his classmate, and other computer scientists, including Joe Hoane, Jerry Brody, and CJ Tan. 

The chess player opened the way to a wide range of possible applications: the research allowed developers to understand how to design a computer able to tackle complex problems in other fields, using in-depth knowledge to analyze a greater number of possible solutions.
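Deep Blue's real search and evaluation were far more sophisticated, and partly built into custom hardware, but the core technique behind the chess programs of that era is game-tree search. The following is a generic Python sketch of minimax with alpha-beta pruning over a toy, hand-built tree; the `children` and `evaluate` functions are placeholders for a real move generator and position evaluation, not IBM's code.

```python
from math import inf

# Generic minimax with alpha-beta pruning over an abstract game tree.
def alphabeta(node, depth, alpha, beta, maximizing, children, evaluate):
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)              # static evaluation at the leaves
    if maximizing:
        value = -inf
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:              # opponent will avoid this branch
                break
        return value
    value = inf
    for child in kids:
        value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                     True, children, evaluate))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

# Toy usage on a hand-built tree: leaves are scores, inner nodes are lists.
tree = [[3, 5], [2, 9]]
print(alphabeta(tree, 2, -inf, inf, True,
                children=lambda n: n if isinstance(n, list) else [],
                evaluate=lambda n: n))     # 3: the max over the opponent's mins
```

Pruning is what makes the approach scale: whole subtrees are skipped once it is clear the opponent would never allow them, so the same computing power can search deeper.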

Such a revolutionary victory inevitably generated a lot of criticism about what human supremacy over machines meant and what it entailed.

There was also an attempt to downplay the event, focusing primarily on "the role of the supercomputer designed for the task rather than the sophisticated techniques used by the team of programmers" (Kaplan, 2017).

Weak AI and Strong AI

Already known to scholars, the debate between weak AI and strong AI became even more heated in the 1990s.

The human mind began to be seen as something programmable and therefore replaceable by a machine.

Let's look together at the characteristics of weak and strong AI and the main differences.

Weak AI

Weak AI was born with the goal of creating systems that can act successfully in some complex human function, such as the machine translation of texts or mathematical problem-solving.

But the goal is not to equal or surpass human intelligence: rather, it is to act as an intelligent subject, without it mattering whether it really is one.

The machine, in fact, is not able to think autonomously and remains bound to the presence of a human being.

Strong AI

According to John Searle, philosopher of language and mind, "the computer would not just be, in the study of the mind, a tool; rather, a properly programmed computer is really a mind."

Strong AI, in fact, refers to a rational agent capable of performing the same operations as a human being and solving problems autonomously, with a level of intelligence equal to or greater than the human one.

The technology used is that of expert systems, which Jerry Kaplan defines in his book "Artificial Intelligence: What Everyone Needs to Know": "The common approach to programming required the programmer him- or herself to be an expert in the domain, not to mention be readily available to make changes [...] the concept behind expert systems was to represent the knowledge of the domain explicitly, making it available for inspection and modification."

The expert system operates through three components (a toy sketch follows the list):

  • rules and procedures: the knowledge base the system needs in order to function;

  • inference engine: the algorithm that simulates human reasoning;

  • user interface: where human beings and the machine communicate.
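Here is a toy Python sketch of how these three components fit together; the rules and facts are invented, while real expert systems held thousands of rules.

```python
# 1. Rules and procedures: "if all premises hold, conclude X".
#    These rules are invented for the example.
RULES = [
    ({"has_fur", "says_meow"}, "is_cat"),
    ({"is_cat"}, "is_mammal"),
]

# 2. Inference engine: forward chaining, firing rules until nothing new follows.
def infer(facts: set) -> set:
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# 3. User interface: the point where human and machine communicate.
if __name__ == "__main__":
    answer = input("Observed facts (comma-separated): ")
    facts = {f.strip() for f in answer.split(",") if f.strip()}
    print("Conclusions:", infer(facts) - facts)
```

Entering "has_fur, says_meow" would yield both "is_cat" and "is_mammal": the knowledge sits in the rules, explicit and inspectable, exactly the property Kaplan describes.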

But if knowledge can be "taught" to the machine, can it replace the human being?

 

Did you miss the previous decades?

The 1950s: from Alan Turing to John McCarthy

The 1960s-70s: machine learning and expert systems

The 1980s: expert systems and the winters of AI

 

Do you want more information about Pigro? Contact us!