
The Evolution of Artificial Intelligence: Expert Systems, AI Winters, and the Battle of Wits


After outlining the first steps taken by AI in the article History of artificial intelligence: from Alan Turing to John McCarthy, we moved on to the following two decades in History of artificial intelligence: machine learning and expert systems to trace the technological evolution that followed.

Now we tackle the 1980s and discover what was new in this decade.

The ‘60s and ‘70s: where were we?

In the history of artificial intelligence, the 1960s are distinguished by studies related to machine learning. Although its foundations had been laid as early as 1943, machine learning found fertile ground only in the following years: at the time, programmable computers were still virtually unknown, and no one thought of putting those scientific discoveries to such use.

In fact, we had to wait for the Perceptron, an electronic device created by psychologist Frank Rosenblatt and capable of showing learning abilities, which in the '60s paved the way for new innovations in machine learning.

In the 1970s, however, expert systems came into existence. An expert system is a computer program that uses artificial intelligence methods to solve problems within a specialized domain that ordinarily requires human expertise.

In this decade, however, we see only a first generation of expert systems, tied to Boolean logic and to logical reasoning under conditions of certainty through a deterministic model; they would evolve in the years that followed.
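To make the idea concrete, here is a minimal sketch of how such a first-generation system reasons: facts are simply true or false, and rules fire with complete certainty. The medical rules below are invented purely for illustration, not taken from any historical system.

```python
# A toy, deterministic expert system: forward chaining over Boolean facts.

facts = {"fever", "cough"}  # what we currently know to be true

# Each rule: if all the conditions hold, the conclusion holds with certainty.
rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected"}, "recommend_rest"),
]

# Keep applying rules until no new facts can be derived.
changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # now also contains 'flu_suspected' and 'recommend_rest'
```

Because the knowledge lives in the rule list rather than in procedural code, it can be inspected and modified directly, which is exactly the property Kaplan describes below.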

Second-generation expert systems

In the late 1970s, there was an exponential increase in the use of minicomputers which, being smaller and cheaper, were purchased en masse, especially by businesses.

As a result, the amount of documentation produced began to grow, creating the need for tools to organize and consult it quickly.

It was in this climate that expert systems gained prominence, since, unlike the procedural style of programming popular at the time, they were "a natural application of the concept of symbolic systems" (Kaplan, 2016).

Jerry Kaplan, in his book "Artificial Intelligence: What Everyone Needs to Know", explains how expert systems differ from conventional programming: "The common approach to programming required the programmer him- or herself to be an expert in the domain, not to mention be readily available to make changes [...] the concept behind expert systems was to represent the knowledge of the domain explicitly, making it available for inspection and modification."

This had two important consequences:

  • expert systems are "more tolerant of errors, i.e. they tended to 'forgive' programming errors more easily";

  • the structure created allows for a framework "within which the program could 'explain' the way it reasoned."

As we have already said, the expert systems of the '80s belong to the second generation because they introduced a probabilistic model which, unlike the deterministic one, reasons in terms of causes and their possible effects.

However, this model has limitations of its own, just like the one created in the '70s: for instance, the most likely answer may not always correspond to the most useful one.
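As a toy illustration of this probabilistic, cause-and-possible-effects style of reasoning, the sketch below ranks candidate causes for an observed effect using Bayes' rule. All the diagnoses and probabilities are assumptions made up for the example, not taken from any historical system.

```python
# Toy probabilistic diagnosis: score each cause by P(effect | cause) * P(cause).

# Assumed conditional probabilities P(effect | cause)
likelihood = {
    ("flu", "fever"): 0.90,
    ("cold", "fever"): 0.40,
    ("allergy", "fever"): 0.05,
}

# Assumed prior probabilities P(cause)
prior = {"flu": 0.05, "cold": 0.20, "allergy": 0.30}

observed = "fever"

# Bayes' rule up to a constant: P(cause | effect) is proportional to
# P(effect | cause) * P(cause); normalize so the scores sum to 1.
scores = {c: likelihood[(c, observed)] * p for c, p in prior.items()}
total = sum(scores.values())
posterior = {c: s / total for c, s in scores.items()}

for cause, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{cause}: {p:.2f}")
# cold: 0.57, flu: 0.32, allergy: 0.11
```

Note how the top-ranked cause here is the common cold rather than the flu, even though the flu has the strongest cause-effect link: precisely the kind of gap between "most likely" and "most useful" mentioned above.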

Back-propagation: the learning algorithm for neural networks

 


 

In the second half of the '80s, the backpropagation algorithm, originally conceived by Bryson and Ho in 1969, was rediscovered and applied to learning in neural networks: given an artificial neural network and an error function, the algorithm computes the gradient of the error function with respect to the network's weights.
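As a rough sketch of what this means in practice, here is backpropagation written out by hand for a tiny one-hidden-layer network. The network size, data, and learning rate are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))         # 4 samples, 3 input features (made up)
y = rng.normal(size=(4, 1))         # regression targets (made up)

W1 = 0.1 * rng.normal(size=(3, 5))  # input -> hidden weights
W2 = 0.1 * rng.normal(size=(5, 1))  # hidden -> output weights
lr = 0.1                            # learning rate

for step in range(200):
    # Forward pass
    h = np.tanh(x @ W1)             # hidden activations
    y_hat = h @ W2                  # network output
    err = y_hat - y
    loss = 0.5 * np.mean(err ** 2)  # the error function

    # Backward pass: the chain rule gives the gradient of the error
    # function with respect to each weight matrix.
    grad_out = err / len(x)
    grad_W2 = h.T @ grad_out
    grad_h = grad_out @ W2.T
    grad_W1 = x.T @ (grad_h * (1 - h ** 2))  # tanh'(z) = 1 - tanh(z)^2

    # Gradient-descent step using those gradients
    W1 -= lr * grad_W1
    W2 -= lr * grad_W2

print(f"final loss: {loss:.5f}")  # decreases as training proceeds
```

The gradients themselves are what backpropagation computes; the gradient-descent update that follows is one common way of using them to train the network.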

This rediscovery made possible an alternative to symbolic models (used by McCarthy and many others): connectionist models, which aim to explain the functioning of the mind through artificial neural networks.

Although they were initially seen as innovative, it soon became apparent that connectionist models, too, were incapable of producing real scientific progress over the others, which is why they came to be described as complementary rather than alternative models.

AI winters

In the article about the '50s, we discovered that the term Artificial Intelligence (AI) was coined by John McCarthy in 1956.

Almost thirty years later, a new expression related to the topic appeared: "AI winter", used for the first time in 1984 during a meeting of the American Association for Artificial Intelligence.

AI winters, as the name implies, are periods of deep freeze in which artificial intelligence sees a decline in funding and research in the field.

One of the first came in the mid-1960s, when the US halted investment in artificial intelligence following a period of distrust of this type of innovation.

Another came in 1987, when DARPA, a government agency of the U.S. Department of Defense and one of the largest funders of artificial intelligence research (as recently as 1985 it had spent 100 million dollars on research in the field), decided to stop investing and to focus only on the technologies it considered most promising.

Fortunately, winters are cyclical: every period of freezing is followed by one rich in investment and technological innovation. That is what happened in the 1990s, when the common ground between the real world and the artificial world grew and the debate between artificial intelligence and human intelligence reached a turning point.

What can we say about artificial intelligence today? Although it is undeniable that there have been significant advances in algorithms and infrastructure, some argue that we may be on the verge of another AI winter. 

Source: Jerry Kaplan, Artificial Intelligence: What Everyone Needs to Know, 2016

Human brain versus neural networks: A battle of wits

The ongoing debate between the capabilities of the human brain and artificial neural networks has sparked curiosity and fascination among scientists and researchers. As advancements in artificial intelligence continue to progress, questions arise regarding the true potential of neural networks compared to the complexity and intricacy of the human brain.

While neural networks have shown great promise in mimicking certain aspects of human cognition, they still fall short in many areas. The human brain possesses an unparalleled ability to process vast amounts of information, make complex decisions, and adapt to new situations effortlessly. It is a result of billions of years of evolution, constantly growing and learning from its surroundings.

On the other hand, artificial neural networks have made significant strides in recent years. The rediscovery of the backpropagation algorithm in the late '80s revolutionized the field and opened doors to new possibilities. These networks, made up of interconnected artificial neurons, can process data and make predictions based on patterns and correlations. They have proven to be valuable tools in various domains, from image and speech recognition to natural language processing.

However, despite their impressive capabilities, neural networks still lack the depth and versatility of the human brain. The brain's ability to reason, think critically, and understand complex concepts goes beyond what neural networks can currently achieve. Human intelligence encompasses emotional intelligence, creativity, and a deeper understanding of the world, which are yet to be fully replicated in artificial systems.

The battle of wits between the human brain and neural networks raises essential questions about the future of artificial intelligence. While neural networks continue to evolve and improve, there is a recognition that they are complementary to human intelligence rather than a direct replacement. The unique qualities of human cognition, including intuition, empathy, and consciousness, cannot be easily replicated in machines.

As we navigate the possibilities and limitations of artificial intelligence, it is crucial to remember that the human brain and neural networks operate on different principles. Rather than viewing them as adversaries, embracing their collaboration can lead to groundbreaking advancements. By harnessing the power of artificial intelligence while appreciating the intricate complexities of the human brain, we can unlock new frontiers of knowledge and understanding.

In conclusion, the battle of wits between the human brain and neural networks continues to intrigue and inspire. While neural networks have made significant progress, they still have a long way to go before reaching the level of human intelligence. The ongoing exploration of this comparison fuels research and innovation in the field of artificial intelligence, pushing the boundaries of what is possible. As we embark on this exciting journey, the future holds endless possibilities for the coexistence and collaboration of human and artificial intelligence.

Did you miss the previous articles?

1950s: History of artificial intelligence: from Alan Turing to John McCarthy

1960s and 1970s: History of artificial intelligence: machine learning and expert systems

 

Do you want to know more about Pigro? Contact us!