
History of AI: from Alan Turing to John McCarthy, the first definition of Artificial Intelligence


For AI enthusiasts and experts, Pigro’s team has decided to retrace, through a journey "in episodes", the main stages of the history of artificial intelligence. This is the first one!

Before AI had a name - Alan Turing 

To tell the story of "intelligent systems" and explain the AI meaning it is not enough to go back to the invention of the term. We have to go even further back, to the experiments of mathematician Alan Turing.

"Can machines think?" is the opening line of the article Computing Machinery and Intelligence that Alan Turing wrote for Mind magazine in 1950. He tries to deepen the theme of what, only six years later, would be called Artificial Intelligence.

He does so by means of a test, known as the "Turing Test" or "imitation game", devised to compare machine intelligence with human intelligence.

But how does it work? The test involves three participants: an interviewer, a man, and a woman. The interviewer, who cannot see the other two, must work out which is which by asking questions, which they answer via teletype.

Everything is further complicated by the roles assigned to the man and the woman: one is tasked with lying, the other with telling the truth.

Next, one of the two participants, the man or the woman, is replaced by a computer without the interviewer's knowledge; in this second phase the interviewer must guess whether they are talking to a human or a machine.

How do we evaluate whether the Turing Test is passed? If the interviewer's error rate in the game with the machine is similar to, or higher than, the error rate in the game of identifying the man and the woman, then the Turing Test is passed and the machine can be said to be intelligent.
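The pass criterion above can be made concrete with a small sketch. The function and the numbers below are purely illustrative, not part of Turing's paper: they simply compare the interviewer's error rates across the two phases of the game.

```python
# Hypothetical illustration of the Turing-test pass criterion: the machine
# "passes" if the interviewer misidentifies it about as often as, or more
# often than, in the original man-vs-woman imitation game.

def turing_test_passed(errors_human_game: int, errors_machine_game: int,
                       rounds: int) -> bool:
    """Compare the interviewer's error rates in the two phases of the game."""
    human_error_rate = errors_human_game / rounds
    machine_error_rate = errors_machine_game / rounds
    # The machine passes if it fools the interviewer at least as often
    # as the human deceiver did.
    return machine_error_rate >= human_error_rate

# Example: 12 misidentifications out of 40 rounds against the machine,
# versus 10 out of 40 in the human-only game.
print(turing_test_passed(10, 12, 40))  # True
```
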

In the book Artificial Intelligence: What Everyone Needs to Know, Jerry Kaplan claims that Turing was the first to attribute the ability to think to computers.

He explains how Turing with his test was not proposing a way to determine whether machines were intelligent or not, but "he was actually speculating that our common use of the term ‘thinking’ would eventually be extended to include, in an appropriate way, certain machines or programs of appropriate capacity."

Even today, the Turing test is considered one of the pillars of the birth of Artificial Intelligence, to the point that for many years an annual competition, the Loebner Prize, used the Turing test to reward the bot whose behaviour most closely resembled human conversation.

John McCarthy and Artificial Intelligence

When was AI officially born? The term "Artificial Intelligence" was first used by John McCarthy, then an assistant professor of mathematics, moved by the need to differentiate this field of research from the already well-established cybernetics.

The need for this split emerged in particular at a workshop on the subject, held in 1956 at Dartmouth College (Hanover, NH) and organized by McCarthy together with three colleagues: Nathaniel Rochester, Claude Shannon, and Marvin Minsky.

McCarthy wanted a new, neutral term that could collect and organize these disparate research efforts into a single field, focused on developing machines able to simulate every aspect of intelligence.

A 17-page paper known as the "Dartmouth Proposal" was presented, in which the term Artificial Intelligence was used for the first time.

The paper discusses some topics that the organizers considered fundamental to the field of research, such as neural networks, computability theory, creativity, and natural language processing, motivating the need for the workshop to build "intelligent machines" capable of simulating every aspect of human intelligence.

According to McCarthy and his colleagues, it would be enough to describe in detail any feature of human learning and then give that description to a machine built to simulate it.

Workshop results

It is not clear what the workshop, of which no final report was ever produced, actually achieved. But, as Kaplan writes in his book: "it is perhaps the first example of professionals in the field making overly optimistic predictions and promises about what goals would be achieved and how long it would take to achieve them."

The Dartmouth workshop, however, generated a lot of enthusiasm for technological evolution, and research and innovation in the field ensued. 

One of the most remarkable innovations came from the American computer scientist Arthur Samuel, who in 1959 presented his "checkers player", a program designed to improve itself until it surpassed its creator's skill.

To increase its abilities it did something impossible for humans: it played against itself.

Samuel chose the game of checkers because its rules are relatively simple while its tactics are complex, allowing him to demonstrate how machines, following instructions provided by researchers, can simulate human decision-making.

The program used an evaluation function that analyzed the position of the pieces at each moment of the game, estimating each side's chances of victory in the current position and choosing its move accordingly. The variables taken into account were numerous, including the number of pieces per side, the number of kings, and the proximity of pieces that could be captured.
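An evaluation function of this kind can be sketched as a weighted sum of board features. The feature names and weights below are hypothetical, chosen only to illustrate the idea; this is not Samuel's actual code.

```python
# A minimal sketch of a linear evaluation function: each board feature
# gets a weight, and the score estimates one side's chances of victory.

def evaluate_position(features: dict, weights: dict) -> float:
    """Score a checkers position as a weighted sum of its features."""
    return sum(weights[name] * value for name, value in features.items())

# Hypothetical feature values for one position: piece advantage,
# king advantage, and number of opponent pieces under threat of capture.
features = {"piece_advantage": 2, "king_advantage": 1, "capturable_pieces": 3}
weights = {"piece_advantage": 1.0, "king_advantage": 2.5, "capturable_pieces": 0.5}

print(evaluate_position(features, weights))  # 6.0
```

A program playing against itself can then compare the scores of the positions reachable from the current one and pick the most promising move.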

Moreover, to give a name to his technological innovations, Arthur Samuel coined the term "machine learning", identifying two distinct approaches: the neural-network approach and the specific approach.

The first, the neural-network approach, aims at general-purpose machine learning through a randomly connected switching network, trained by a learning routine based on reward and punishment (reinforcement learning).
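The reward-and-punishment idea can be illustrated with a toy weight update: after each game, the weights of the features that drove the chosen moves are nudged up on a win and down on a loss. This is only a sketch in the spirit of the approach described, not Samuel's actual learning routine.

```python
# Toy reward/punishment learning: adjust feature weights in the
# direction of the reward signal (+1 for a win, -1 for a loss).

def update_weights(weights, features_used, reward, learning_rate=0.1):
    """Nudge each feature weight in the direction of the reward signal."""
    return {name: w + learning_rate * reward * features_used.get(name, 0)
            for name, w in weights.items()}

weights = {"piece_advantage": 1.0, "king_advantage": 2.5}
# Reward +1 for a win in which both features contributed.
weights = update_weights(weights, {"piece_advantage": 1, "king_advantage": 1}, +1)
print(weights)  # both weights nudged up by 0.1
```
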

The specific approach, instead, as the name implies, develops machine learning only for specific tasks, a procedure that reaches maximum computational efficiency, but only through supervision and reprogramming.

Artificial Intelligence as an Independent Research Field

The conception of the Turing test first, and the coining of the term later, led to artificial intelligence being recognized as an independent field of research, and with it a new definition of the technology.

Since that time, AI has grown exponentially, and Alan Turing's prediction has come true: "The original question, ‘Can machines think?’ I believe to be too meaningless to deserve discussion. Nevertheless, I believe that at the end of the century, the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted".

Sources: Jerry Kaplan, Artificial Intelligence: What Everyone Needs to Know, 2016

The 1950s were pivotal for AI. But this is just the beginning! Find out more about the 60s and 70s in the next article: History of AI, machine learning and expert systems.



Do you want more information on Pigro? Contact us!