Pigro's journey through the history of artificial intelligence continues with the 1960s and 1970s featuring expert systems and machine learning.
After covering the 1950s and the beginning of what we can call the "history of AI", we now look at the next two decades, the 1960s and 1970s, to continue uncovering the secrets of Artificial Intelligence, from its origins until today.
The ‘50s: where we left off
In the article History of Artificial Intelligence: from Alan Turing to John McCarthy, we outlined the first steps taken by AI, starting from the origin of the expression "artificial intelligence".
The first to use it was John McCarthy, then an assistant professor of mathematics, while organizing a workshop to be held at Dartmouth College in 1956: a specific term was needed to distinguish this field of research from the already established cybernetics.
Thus, in the proposal document for the conference, the expression "artificial intelligence" appeared for the first time, and from that moment on it would label this complex area of research.
In addition to John McCarthy, two other figures distinguished themselves in this decade by bringing novelty and innovation to AI: Alan Turing and Arthur Samuel.
Alan Turing is known for having designed a test that aimed to compare artificial intelligence to human intelligence, known as the "Turing Test" or "Imitation game".
Arthur Samuel, an American computer scientist, riding the wave of enthusiasm for technological evolution generated by the Dartmouth workshop, created his "checkers player" in 1959: a program designed to improve itself until it exceeded the abilities of its creator.
He also coined the term "machine learning" to give a name to this new capability.
The ‘50s, full of changes and innovations, ended with the recognition of artificial intelligence as an independent field of research, giving rise to a new definition of technology and laying the foundations for the many evolutions of the decades that followed.
The history of AI: the background
When he coined the term "machine learning" in the late '50s, Arthur Samuel had already identified two distinct approaches: the neural network approach and the specific approach.
The first, the neural network approach, aims at general-purpose machine learning through a randomly connected switching network that follows a learning routine based on reward and punishment (reinforcement learning).
The specific approach, instead, as the name implies, develops learning machines only for specific tasks: a procedure that, through supervision and reprogramming, reaches maximum efficiency from a computational point of view.
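The reward-and-punishment routine mentioned above can be made concrete with a small sketch. This is not Samuel's program: it is a minimal illustration, with an invented three-action task, of how numeric estimates can be nudged up on reward and down on punishment until the best action emerges.

```python
import random

random.seed(0)  # fixed seed so the illustrative run is reproducible

def learn(actions, reward_fn, episodes=1000, lr=0.1, epsilon=0.1):
    """Learn a value per action purely from reward/punishment signals."""
    values = {a: 0.0 for a in actions}
    for _ in range(episodes):
        # Mostly exploit the best-known action, occasionally explore.
        if random.random() < epsilon:
            a = random.choice(actions)
        else:
            a = max(values, key=values.get)
        r = reward_fn(a)                   # +1 reward or -1 punishment
        values[a] += lr * (r - values[a])  # nudge the estimate toward r
    return values

# Hypothetical task (invented numbers): "b" is rewarded most often.
def reward_fn(action):
    p = {"a": 0.2, "b": 0.8, "c": 0.4}[action]
    return 1 if random.random() < p else -1

values = learn(["a", "b", "c"], reward_fn)
best = max(values, key=values.get)  # the action the program settled on
```

After enough episodes the learned value of `"b"` dominates, even though nothing in the code states which action is best: that knowledge is extracted from the reward signal alone.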
But what are neural networks? In the book Artificial Intelligence: What Everyone Needs to Know, Jerry Kaplan defines them as "mirrors of their own experience. In this sense, they do not 'learn to do something' in the sense commonly expressed by this phrase [...]. Rather, they resemble incredibly talented imitators, capable of finding correlations and responding to new input as if to say 'this reminds me of...' and, in doing so, imitating the best strategies by 'distilling' them from a large number of examples".
He also points out the paradox that humans know the brain's individual components in great detail while having far less information about its overall structure: "in other words, we know little about how the brain is wired (metaphorically speaking), and this is precisely the area of interest of AI researchers trying to build neural networks".
The 1960s: machine learning
The roots of machine learning date back to 1943, when Warren McCulloch and Walter Pitts observed of the brain that, "despite being a soft, wet, gelatinous mass, the signaling in it is digital and, to be precise, binary" (Kaplan, 2017).
However, programmable computers did not yet exist at the time of this discovery, so the two scholars could not put the insight into practice.
Later, psychologist Frank Rosenblatt picked up their legacy and continued down the path with the Perceptron, an electronic device capable of showing learning capabilities.
After an initial period of enthusiasm, however, this research was interrupted, regaining relevance only in the '80s, when the first non-linear neural networks were developed.
But what is machine learning? Machine learning is the process through which computers develop the ability to learn continuously from the input they are given (through pattern recognition) and to make predictions based on it, without being specifically programmed to do so.
It falls within artificial intelligence because it efficiently automates the building of analytical models and, through learning algorithms, allows machines to adapt to new scenarios autonomously.
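The definition above can be sketched in a few lines: a program that is never told the classification rule, only shown labeled examples, and that predicts new cases from the patterns in them. The nearest-neighbour method and the data points below are our own illustrative assumptions, not anything from the historical programs discussed here.

```python
# Minimal learning-from-examples sketch: predict the label of a new
# point by finding the closest labeled example (1-nearest-neighbour).

def predict(examples, point):
    """Return the label of the training example closest to `point`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(examples, key=lambda ex: dist(ex[0], point))
    return nearest[1]

# Labeled examples: (features, label). The rule "small values -> 'low'"
# is implicit in the data; it is written nowhere in the code.
examples = [((1, 1), "low"), ((2, 1), "low"),
            ((8, 9), "high"), ((9, 8), "high")]

print(predict(examples, (1, 2)))  # -> low
print(predict(examples, (9, 9)))  # -> high
```

The point of the sketch is the division of labor: the programmer supplies only a notion of similarity, and the mapping from input to answer is recovered from the examples themselves.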
ELIZA: one of the first chatbots in the history of AI
The 1960s, therefore, were characterized by a strong enthusiasm for AI; no wonder, then, that it was during these years that intelligent systems capable of solving (simple) problems, from inferences to basic geometry, were created. An example of this new technology is ELIZA, a chatbot invented in 1966 by Joseph Weizenbaum that simulated the role of a Rogerian psychotherapist: it asked open questions that turned the user's statements back on them, diverting attention from itself to the user.
Although the program proceeded by analyzing very simple keywords and substituting them into ready-made sentences, ELIZA was crucial in demonstrating that the new technology could do things nobody could have imagined a few years before.
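The keyword-substitution trick just described can be shown in miniature. This is not Weizenbaum's actual script: the three reflection rules below are invented and far cruder than the original, but they reproduce the mechanism of matching a keyword pattern and echoing the user's own words back as an open question.

```python
import re

# Toy ELIZA-style rules: (keyword pattern, response template).
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {}."),
]

def respond(text):
    for pattern, template in RULES:
        m = pattern.search(text)
        if m:
            # Substitute the user's own words into a ready-made sentence.
            return template.format(m.group(1).rstrip(".!?"))
    return "Please, go on."  # default that keeps the conversation moving

print(respond("I feel tired today"))  # -> Why do you feel tired today?
print(respond("Hello"))               # -> Please, go on.
```

Even this toy version shows why the effect was so striking: there is no understanding anywhere, only pattern matching and substitution, yet the replies feel attentive.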
The 1970s: expert systems
Between the end of the '60s and the beginning of the '70s, the world of Artificial Intelligence welcomed expert systems. An expert system artificially reproduces the performance of a person expert in a specific domain of knowledge or field of activity, without the need for a "human" expert on the problem at hand.
But how do expert systems work? They are structured on three different technological levels:
1) knowledge base: all the information that the system uses to provide an answer to a problem. This is the repository in which the information is stored and which allows the system to function;
2) inference engine: in addition to general information, the knowledge base also contains rules about its operation, i.e. information that, when a specific situation occurs, indicates which rule should be applied; the inference engine is the component that applies these rules;
3) user interface: it is through the interface that the user accesses and exploits the inference engine, entering a question and receiving an answer based on the information stored in the system.
The limits of the 1970s
We have seen how, between the end of the ‘60s and the ‘70s, expert systems were born; however, these represent only a first generation that would be followed by others in the coming decades.
These first systems, in fact, exploited Boolean logic (true/false) and logical reasoning under conditions of certainty, through a deterministic (cause-effect) model that soon proved insufficient, especially since human experts were still able to outperform them.
The premises for the ‘80s had been set, and it would prove to be a decade full of innovations.
But we are still at the beginning of the complex history of artificial intelligence, which will take us through all the major discoveries that led AI to be what we know today.

Sources: Jerry Kaplan, Artificial Intelligence: What Everyone Needs to Know, 2016
Do you want to know what happens in the ‘80s? Expert systems and AI winters
If you missed the beginning of the artificial intelligence story: History of Artificial Intelligence: from Alan Turing to John McCarthy
Do you want to know more about Pigro? Contact us!