
History of AI: Deep Blue and Strong and Weak Artificial Intelligence


We are on the fourth "episode" of our journey through the history of Artificial Intelligence. We have outlined the first steps taken by AI, starting with Alan Turing and John McCarthy, moved on to the following two decades of machine learning, and then to the 1980s of expert systems and the AI winters.

Now we explore the 1990s, with a special focus on Deep Blue, its unforgettable victory against Garry Kasparov, and the technological evolution that followed it.


The '80s: where we left off

 

Following the emergence of first-generation expert systems in the late '70s, the subsequent decade ushered in the era of second-generation systems and introduced a probabilistic model that, in contrast to the deterministic model, reasons in terms of a cause and its possible effects.

The '80s were marked not only by this innovation but also by the rediscovery of the backpropagation algorithm, originally conceived by Bryson and Ho in 1969, which was put to work training neural networks.

This development paved the way for an alternative to symbolic models (as favoured by McCarthy and others) in the form of connectionist models, designed to elucidate the workings of the mind through artificial neural networks.
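
To make the connectionist idea concrete, here is a minimal sketch of backpropagation training a tiny one-hidden-layer network on the XOR function. The network size, learning rate, and task are illustrative choices, not details of the historical systems discussed above.

```python
import numpy as np

# Toy training set: the XOR function, a classic task a single-layer
# network cannot learn but a two-layer network can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))  # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))  # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(10_000):
    # Forward pass: compute activations layer by layer.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the output error toward the input,
    # applying the chain rule (sigmoid'(z) = s * (1 - s)) at each layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(2))  # typically converges toward [[0], [1], [1], [0]]
```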

However, amidst these advancements, the '80s also witnessed what are known as "AI winters" – periods of diminished enthusiasm and investment in artificial intelligence.

One such period occurred in 1987 when DARPA, a prominent funder of AI research within the U.S. Department of Defense, halted its investments, opting to concentrate on more promising technologies.

 

Kasparov vs Deep Blue: playing chess against a computer

 

In 1996, a historic chess match took place in Philadelphia. On one side, we had the World Chess Champion Garry Kasparov, renowned for becoming the youngest person to achieve this title at just 22 years and 210 days.

But what made this match truly special was his opponent, "Deep Blue", a computer meticulously designed by IBM for the sole purpose of playing chess.

While Kasparov emerged victorious in this particular showdown, a rematch was inevitable. The following year, the upgraded IBM supercomputer Deep Blue outmanoeuvred the world champion, marking a groundbreaking moment in the history of AI.

The origins of this human-computer chess challenge can be traced back to 1985, when the student Feng-Hsiung Hsu began designing a chess-playing machine named ChipTest for his dissertation project. In 1989 he joined forces with his classmate Murray Campbell and, together with computer scientists Joe Hoane, Jerry Brody, and CJ Tan, set the stage for this iconic battle.

Following Deep Blue's triumph in 1997, the technology behind the supercomputer was leveraged beyond the realm of chess. Its architecture was applied to financial modelling, marketplace trends analysis, risk assessment, and data mining, unveiling hidden patterns and relationships within vast databases.

This monumental victory not only significantly advanced the capabilities of AI systems but also sparked debates on the implications of human-machine competition and the evolving landscape of artificial intelligence. Critics attempted to downplay the achievement by focusing on the machine's design, overlooking the intricate techniques employed by the team of programmers behind Deep Blue.

 

Weak AI and Strong AI

 

Scholars were already familiar with the ongoing debate between weak AI and strong AI, which intensified in the 1990s. During this time, the concept of the human mind as programmable and potentially replaceable by machines gained traction. Let's now delve into the characteristics of weak and strong AI, exploring their key differences.

 

Weak AI

 

Weak AI is artificial intelligence that operates within a specific, limited scope. Such systems excel at complex tasks traditionally carried out by humans, such as text translation or mathematical problem-solving.

The primary goal of weak AI is not to surpass human intelligence but to execute individual actions intelligently. These systems lack human consciousness and depend on human oversight to operate, as they cannot think independently.

Examples of weak AI include virtual assistants such as Siri and Alexa, and chatbots.

 

Strong AI

The term "strong AI" was coined by the philosopher of language and mind John Searle to describe the thesis that the computer is more than a mere tool in the study of the mind: an appropriately programmed computer, on this view, is itself a mind.

Strong AI denotes a rational agent capable of mimicking human cognition and autonomously solving problems, with intelligence equal to or surpassing that of human beings.

This line of work builds on expert systems, as described by Jerry Kaplan in his book "Artificial Intelligence: What Everyone Needs to Know". In conventional programming, domain knowledge is buried implicitly in the code, so the programmer must be a domain expert and remain available for every modification. The core idea behind expert systems, by contrast, is to represent domain knowledge explicitly, so that it can be inspected and modified directly.

An expert system is built from three components (a toy sketch follows the list):

  • rules and procedures: the explicit domain knowledge the system needs in order to function;

  • inference engine: an algorithm that simulates human reasoning by applying those rules;

  • user interface: where human beings and the machine communicate.
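
As an illustration, here is a minimal sketch of those three components in Python: a rule base, a forward-chaining inference engine, and a command-line interface. The animal-identification rules are invented for the example; they are not taken from Kaplan's book or from any real system.

```python
# Rules and procedures: each rule pairs a set of conditions with the
# fact the system may conclude (hypothetical example rules).
RULES = [
    ({"has_fur", "gives_milk"}, "mammal"),
    ({"mammal", "eats_meat"}, "carnivore"),
    ({"carnivore", "has_stripes"}, "tiger"),
]

def infer(facts):
    """Inference engine: fire rules repeatedly until no new fact appears."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# User interface: ask the user for observations, report what follows.
if __name__ == "__main__":
    observed = set(input("Observed facts (space-separated): ").split())
    print("Inferred:", sorted(infer(observed) - observed))
```

Entering `has_fur gives_milk eats_meat has_stripes` would yield `['carnivore', 'mammal', 'tiger']`: the knowledge lives in `RULES`, where it can be inspected and edited without touching the engine.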

But if knowledge can be "taught" to the machine, can it replace the human being?

 

In case you missed the previous episodes on the history of Artificial Intelligence:

The 1950s: from Alan Turing to John McCarthy

The 1960s-70s: Machine learning and Expert systems

The 1980s: Expert Systems and the Winters of AI

 

Do you want more information about Pigro? Contact us!