<img height="1" width="1" style="display:none" src="https://www.facebook.com/tr?id=747520772257475&amp;ev=PageView&amp;noscript=1">

History of Artificial Intelligence: from Alan Turing to John McCarthy

How did the expression AI come about? We start in the 1950s to embark on a journey through the history of artificial intelligence.

For AI enthusiasts and experts, the Pigro team has decided to retrace, through a journey "in episodes", the main stages of the history of artificial intelligence. This is the first one!

Before AI had a name - Alan Turing 

To tell the story of "intelligent machines" it is not enough to go back to the invention of the term. We have to go even further back, to the experiments of mathematician Alan Turing.

"Can machines think?" is the opening line of the article Computing Machinery and Intelligence that Alan Turing wrote for Mind magazine in 1950. He tries to deepen the theme of what, only six years later, would be called Artificial Intelligence.

He does so with a test, known as the "Turing Test" or "imitation game", designed to compare artificial intelligence and human intelligence.

But how does it work? The test involves three participants: an interviewer, a man, and a woman. The interviewer, who cannot see the other two, must try to determine their genders by asking questions, which they answer via teletype.

The task is further complicated by the roles assigned to the man and the woman: one is instructed to lie, while the other must tell the truth.

Next, one of the two participants, the man or the woman, is replaced by a computer without the interviewer's knowledge; in this second phase, the interviewer must guess whether they are talking to a human or a machine.

How do we decide whether the Turing Test is passed? If the interviewer's error rate in the game involving the machine is similar to, or lower than, the error rate in the game of identifying the man and the woman, then the machine passes the test and can be said to be intelligent.
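The pass criterion described above boils down to comparing two error rates. The sketch below is a hypothetical illustration of that comparison; the function name and the example percentages are invented, not part of Turing's paper.

```python
def passes_turing_test(human_game_error_rate: float,
                       machine_game_error_rate: float) -> bool:
    """The machine 'passes' if the interviewer misidentifies it at least as
    often as they misidentified the man and the woman in the original game."""
    return machine_game_error_rate >= human_game_error_rate

# Example: the interviewer misjudged gender 30% of the time, and mistook
# the machine for a human 35% of the time -> the machine passes.
print(passes_turing_test(0.30, 0.35))  # True
```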

In his book Artificial Intelligence: What Everyone Needs to Know, Jerry Kaplan argues that Turing was the first to attribute the ability to think to computers.

He explains that, with his test, Turing was not proposing a way to determine whether machines were intelligent or not, but "he was actually speculating that our common use of the term ‘thinking’ would eventually be extended to include, in an appropriate way, certain machines or programs of appropriate capacity."

Even today, the Turing test is considered one of the foundations of AI, to the point that an annual award, the Loebner Prize, has used the Turing test to reward the bot whose behavior is most similar to human thought.

John McCarthy and Artificial Intelligence

When was AI officially born? The term "Artificial Intelligence" was first used by John McCarthy, then an assistant professor of mathematics, motivated by the need to differentiate this field of research from the already well-established field of cybernetics.

This split emerged most clearly at a workshop on the subject, held in 1956 at Dartmouth College (Hanover, NH) and organized by McCarthy together with three colleagues: Nathaniel Rochester, Claude Shannon, and Marvin Minsky.

McCarthy wanted a new neutral term that could collect and organize these disparate research efforts into a single field, focused on developing machines that could simulate every aspect of intelligence. 

A 17-page paper known as the "Dartmouth Proposal" was presented, in which the term "Artificial Intelligence" was used for the first time.

The paper discusses some topics that the organizers considered fundamental to the field of research, such as neural networks, computability theory, creativity, and natural language processing, motivating the need for the workshop with the goal of building "intelligent machines" capable of simulating every aspect of human intelligence.

According to McCarthy and his colleagues, every feature of human learning could in principle be described in enough detail for a machine, built to simulate it, to reproduce it.

Workshop results

It is not clear what the workshop, for which no final report was ever produced, actually led to. But, as Kaplan puts it in his book Artificial Intelligence: "it is perhaps the first example of professionals in the field making overly optimistic predictions and promises about what goals would be achieved and how long it would take to achieve them."

The Dartmouth workshop nevertheless generated great enthusiasm for technological progress, and research and innovation in the field followed.

One of the most remarkable results came from the American computer scientist Arthur Samuel, who in 1959 presented his "checkers player", a program designed to improve itself until it surpassed its creator's skill.

To increase its abilities, it did something impossible for a human: it played against itself.

Samuel chose the game of checkers because its rules are relatively simple while its tactics are complex, allowing him to demonstrate how machines, following instructions provided by researchers, can simulate human decision-making.

He developed a function capable of analyzing the positions of the pieces at any point in the game, estimating each side's chances of victory in the current position and acting accordingly. It took numerous variables into account, including the number of pieces per side, the number of kings, and the proximity of pieces that could be captured.
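A function of this kind can be sketched as a weighted sum of board features. The features, weights, and dictionary keys below are hypothetical placeholders for illustration; Samuel's actual program used many more terms and tuned its weights through self-play.

```python
def evaluate(position: dict) -> float:
    """Score a board position from the machine's point of view:
    positive values favour the machine, negative values the opponent."""
    features = {
        "piece_advantage": position["my_pieces"] - position["opp_pieces"],
        "king_advantage":  position["my_kings"] - position["opp_kings"],
        "capture_threats": position["opp_capturable"] - position["my_capturable"],
    }
    # Hypothetical hand-picked weights; kings count more than ordinary pieces.
    weights = {"piece_advantage": 1.0, "king_advantage": 2.0, "capture_threats": 0.5}
    return sum(weights[name] * value for name, value in features.items())

# Example: the machine is up one piece and threatens one capture.
pos = {"my_pieces": 9, "opp_pieces": 8, "my_kings": 1, "opp_kings": 1,
       "my_capturable": 0, "opp_capturable": 1}
print(evaluate(pos))  # 1.5
```

At each turn the program would score every legal move's resulting position this way and pick the highest-scoring one.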

To name his technological innovations, Samuel also coined the term "machine learning", identifying two distinct approaches: the neural-network approach and the specific approach.

The first, the neural network approach, leads to the development of general-purpose machine learning through a randomly connected switching network, following a learning routine based on reward and punishment (reinforcement learning).

The specific approach, as the name implies, leads to machines that learn only specific tasks: a procedure that reaches maximum computational efficiency, but only through supervision and reprogramming.

Artificial Intelligence as an Independent Research Field

The conception of the Turing test first, and the coining of the term later, led to artificial intelligence being recognized as an independent field of research, with a definition of its own.

Since that time, AI has grown exponentially, until Alan Turing's prediction came true: "The original question, ‘Can machines think?’ I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted."

The 1950s were pivotal for AI. But this is just the beginning!

Source: Jerry Kaplan, Artificial Intelligence: What Everyone Needs to Know, Oxford University Press, 2016