After outlining the first steps taken by AI in the article "History of artificial intelligence: from Alan Turing to John McCarthy", we moved on to the next two decades in "History of artificial intelligence: machine learning and expert systems" to understand the technological evolution that followed.
Now we tackle the 1980s and discover what was new in this decade.
The ‘60s and ‘70s: where were we?
In the history of artificial intelligence, the 1960s are distinguished by studies on machine learning. Although it had already been invented in 1943, machine learning found fertile ground only in the following years: at the time, programmable computers were virtually unknown, and nobody thought of putting those scientific discoveries to such use.
In fact, we had to wait for the Perceptron, an electronic device created by psychologist Frank Rosenblatt that was able to show learning capabilities and that, in the '60s, paved the way for new innovations in machine learning.
The 1970s, in turn, saw the birth of expert systems. An expert system is a computer program that uses artificial intelligence methods to solve problems within a specialized domain that ordinarily requires human expertise.
In this decade, however, we see only the first generation of expert systems: tied to Boolean logic and to logical reasoning under conditions of certainty through a deterministic model, they would evolve in the years that followed.
Second-generation expert systems
In the late 1970s, there was an exponential increase in the use of minicomputers which, being smaller and cheaper, were purchased en masse, especially by businesses.
As a result, the amount of documentation produced began to grow, requiring tools to organize and consult it quickly.
It was in this climate that expert systems gained prominence, as they differed significantly from the procedural method of programming (popular at the time) in their use of "a natural application of the concept of symbolic systems" (Kaplan, 2016).
Jerry Kaplan, in his book "Artificial Intelligence: What Everyone Needs to Know", explains how expert systems work compared with procedural programming: "The common approach to programming required the programmer him- or herself to be an expert in the domain, not to mention be readily available to make changes [...] the concept behind expert systems was to represent the knowledge of the domain explicitly, making it available for inspection and modification."
This has two important consequences:
- expert systems are "more tolerant to errors, i.e. they tended to 'forgive' programming errors more easily";
- the structure created allows for a framework "within which the program could 'explain' the way it reasoned."
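To make the contrast with procedural programming concrete, here is a minimal sketch of a rule-based expert system in Python. The rules, facts, and domain are invented for illustration and do not come from any historical system; the point is that the domain knowledge lives in a data structure separate from the inference engine, so it can be inspected and modified, and the engine can replay the rules it fired, i.e. "explain" its reasoning.

```python
# A minimal sketch of a rule-based expert system (illustrative only;
# the rules and facts are invented). Domain knowledge is kept as data,
# separate from the inference engine.

# Each rule: (set of conditions that must all hold, fact to conclude).
RULES = [
    ({"has_fever", "has_cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "see_doctor"),
]

def forward_chain(facts):
    """Apply rules until no new facts appear; record the fired rules."""
    facts = set(facts)
    explanation = []
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                explanation.append(f"{sorted(conditions)} -> {conclusion}")
                changed = True
    return facts, explanation

facts, explanation = forward_chain({"has_fever", "has_cough", "short_of_breath"})
print(facts)
# The system can "explain" its conclusion by replaying the fired rules:
for step in explanation:
    print(step)
```

Because the knowledge is just data, a domain expert can add or change a rule without touching the engine, which is exactly the separation Kaplan describes.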
As we have already said, the expert systems of the '80s belong to the second generation because they introduced a probabilistic model which, unlike the deterministic one, reasons in terms of "cause and possible effects".
However, this model has its own limitations, just like the one created in the '70s: for example, the most likely answer may not always be the most useful one.
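To illustrate the "cause and possible effects" idea, here is a hedged sketch: instead of rules that fire with certainty, each hypothesized cause assigns a probability to the effects it may produce, and the system ranks causes by how well they explain the observed evidence. All the hypotheses, priors, and numbers below are invented, and this is far simpler than any real second-generation system.

```python
# Illustrative sketch of probabilistic "cause -> possible effects" reasoning
# (all priors, likelihoods, and hypotheses are invented for the example).
# Given observed effects, we score each cause by prior * likelihoods,
# in the style of a naive Bayes ranking.

PRIORS = {"flu": 0.10, "cold": 0.25}

# P(effect | cause) for each possible effect of each cause.
LIKELIHOODS = {
    "flu":  {"fever": 0.90, "cough": 0.80},
    "cold": {"fever": 0.20, "cough": 0.70},
}

def score(cause, observed_effects):
    p = PRIORS[cause]
    for effect in observed_effects:
        p *= LIKELIHOODS[cause].get(effect, 0.01)  # small default for unlisted effects
    return p

observed = {"fever", "cough"}
for cause in sorted(PRIORS, key=lambda c: score(c, observed), reverse=True):
    print(cause, round(score(cause, observed), 4))
# Note: the top-ranked (most probable) cause is not necessarily the most
# useful answer; a rarer but more dangerous cause may matter more in practice.
```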
Back-propagation: the learning algorithm for neural networks
In the second half of the '80s, the backpropagation algorithm was rediscovered. Initially conceived by Bryson and Ho in 1969, it concerns learning in neural networks: given an artificial neural network and an error function, the algorithm calculates the gradient of the error function with respect to the network's weights.
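A minimal sketch of this idea, using NumPy on a tiny one-hidden-layer network, is shown below. The layer sizes, data, and learning rate are invented for the example; the essential point is the backward pass, which applies the chain rule to turn the output error into a gradient for every weight.

```python
# A minimal sketch of backpropagation on a one-hidden-layer network
# (illustrative only; data, sizes, and learning rate are invented).
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 4 samples, 3 input features, 1 target value each.
X = rng.normal(size=(4, 3))
y = rng.normal(size=(4, 1))

# Randomly initialised weights: input->hidden and hidden->output.
W1 = rng.normal(scale=0.5, size=(3, 5))
W2 = rng.normal(scale=0.5, size=(5, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(1000):
    # Forward pass: compute activations layer by layer.
    h = sigmoid(X @ W1)   # hidden activations
    y_hat = h @ W2        # network output (linear output layer)
    error = y_hat - y     # derivative of 0.5*||y_hat - y||^2 w.r.t. y_hat

    # Backward pass: propagate the error back through the network,
    # yielding the gradient of the loss w.r.t. each weight matrix.
    grad_W2 = h.T @ error
    grad_h = error @ W2.T                   # error signal at the hidden layer
    grad_W1 = X.T @ (grad_h * h * (1 - h))  # chain rule through the sigmoid

    # Gradient-descent update.
    W1 -= 0.1 * grad_W1
    W2 -= 0.1 * grad_W2
```

The same two passes, forward to compute activations and backward to propagate the error, scale to networks of any depth, which is what made the algorithm's rediscovery so consequential.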
This made it possible to create an alternative to the symbolic models (used by McCarthy and many others): connectionist models, which aim to explain the functioning of the mind using artificial neural networks.
Although initially seen as innovative, it soon became apparent that they too were not capable of delivering real scientific progress over symbolic models, which is why they came to be regarded as complementary rather than alternative.
AI winters
In the article about the '50s, we discovered that the term Artificial Intelligence (AI) was coined by John McCarthy in 1956.
Almost thirty years later, a new expression related to the field appeared: "AI winter", used for the first time in 1984 during a meeting of the American Association for Artificial Intelligence.
AI winters, as the name implies, are freezing periods during which funding and research in artificial intelligence decline.
One of the first occurred in the mid-1960s, when the US halted investment in artificial intelligence after a period of distrust toward this type of innovation.
Another AI winter arrived in 1987, when DARPA, an agency of the U.S. Department of Defense and one of the largest funders of artificial intelligence research (in 1985 alone it had spent 100 million dollars on the field), decided to stop investing and to focus only on the technologies it considered most promising.
But fortunately, winters are cyclical: a period of freezing is followed by one rich in investments and technological innovations, as the 1990s would be, a decade in which the common ground between the real world and the artificial world would grow and the debate between artificial intelligence and human intelligence would reach a turning point.
What can we say about artificial intelligence today? Although it is undeniable that there have been significant advances in algorithms and infrastructure, some argue that we may be on the verge of another AI winter.
Sources: Jerry Kaplan, Artificial Intelligence: What Everyone Needs to Know, Oxford University Press, 2016
Did you miss the previous articles?
1950s: History of artificial intelligence: from Alan Turing to John McCarthy
1960s and 1970s: History of artificial intelligence: machine learning and expert systems
Do you want to know more about Pigro? Contact us!