
What is AI and how can it be integrated into the world of work

What is Artificial Intelligence? What are the differences between weak AI and strong AI? How can it be integrated into work processes? And is it really possible to talk about replacing humans?

Over the last few years, Artificial Intelligence has emerged as a topic of discussion within companies, where its ability to solve numerous problems is often emphasized.

Many voices and opinions have spread around the concept of Artificial Intelligence, highlighting its opportunities but also its risks and the fears they raise.

Artificial Intelligence Definition

According to the European Parliament's definition, “AI is the ability of a machine to display human-like capabilities such as reasoning, learning, planning, and creativity.”

It is a branch of Computer Science focused on creating "intelligent machines", with the goal of making them think and behave like human beings.

But the definitions of Artificial Intelligence and the perspectives on the field are varied, and hence the need to organize them has emerged.

The most comprehensive attempt to systematize these definitions can be traced to Stuart Russell and Peter Norvig (2003), who identified and arranged these descriptions into four categories:

- systems that think like humans;

- systems that think rationally;

- systems that act like human beings;

- systems that act rationally.

According to the authors, these categories coincide with different phases of the historical evolution of Artificial Intelligence.

The turning point for Artificial Intelligence is identified in 1950, with the reflections of Alan Turing, who hypothesized the possibility of programming a computer capable of behaving intelligently.

To evaluate the intelligence of a machine, Turing suggested a test (known as the Turing Test), which uses human beings as the term of comparison (Russell and Norvig, 2003).

The computer is considered intelligent when the (human) interrogator is unable to distinguish whether the answers they receive come from a person or from the machine (Russell and Norvig, 2003).
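To make the mechanism of the test more concrete, here is a minimal, purely illustrative Python sketch of the setup: an interrogator exchanges questions with a hidden respondent and must guess whether the answers come from a person or a program. The respondent functions and the sample questions are invented stand-ins, not real conversational AI.

```python
import random

def machine_respondent(question: str) -> str:
    # A toy stand-in for a program trying to pass as human.
    return "That is an interesting question. Could you tell me more?"

def human_respondent(question: str) -> str:
    # A real person typing the answers.
    return input(f"(hidden human) {question}\n> ")

def imitation_game(questions):
    # The interrogator does not know which respondent was chosen.
    name, respond = random.choice(
        [("machine", machine_respondent), ("human", human_respondent)]
    )
    for q in questions:
        print(f"Q: {q}\nA: {respond(q)}\n")
    guess = input("Interrogator, was that a 'human' or a 'machine'? ").strip()
    # The machine "passes" the test when the interrogator cannot tell the difference.
    print("Correct!" if guess == name else f"Wrong - it was the {name}.")

# imitation_game(["What is your favourite memory?", "Why do people laugh?"])
```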

The expression “Artificial Intelligence” was used for the first time only in 1956, by John McCarthy, during a conference at Dartmouth College.

Artificial Intelligence in philosophy: the theories of strong and weak AI

In defining Artificial Intelligence, another aspect to consider is the distinction between strong and weak Artificial Intelligence.

This distinction has also engaged part of philosophy, which questions the relationship between the human mind and the artificial one.

In general, based on shared definitions:

  • Weak AI simulates the functioning of some human cognitive functions and is tied to the fulfillment of a very specific task (Russell and Norvig, 2003).

    Weak AI does not aim to "win" over human intelligence; rather, the focus is on the action: acting like an intelligent subject, regardless of whether it really is one.

    The presence of humans remains essential for the functioning of the machine, which is not able to think autonomously.

  • Strong AI, on the contrary, emulates the functioning of the human mind more completely, being autonomous and able to act like a human being (Russell and Norvig, 2003).

Artificial Intelligence movies: AI is scary

During the 20th century, Artificial Intelligence also found concrete representation in the products of the cultural industry, such as books and movies, which have tried to reflect on the complex relationship between humans and AI.

From an analysis of movie production, it is possible to see how Artificial Intelligence has been represented in the most diverse roles and forms, from simple computers capable of understanding human language to androids capable of experiencing real feelings.

Among these cinematic products, it is possible to mention the 1968 movie 2001: A Space Odyssey, where Artificial Intelligence is represented by the character of HAL 9000, a computer capable of understanding and interacting with humans. These behaviors are, therefore, clearly reflected in the categories identified by Russell and Norvig (2003).

In 2001: A Space Odyssey, the human-AI conflict is only partially addressed: Artificial Intelligence is represented as a support to humans, but one ready to become an enemy the moment it is threatened.

The conflicted nature of this relationship emerges even more clearly in the 1973 movie Westworld.

Westworld brings to the stage a growing sense of unease and fear about intelligent machines, which, over the course of the narrative, become a real threat to humans and their safety.

The conflict of the human-AI relationship is even more extreme in other movie products, such as Blade Runner (1982) and Terminator (1984).

These movies, just a few of the many about artificial intelligence and robots, make the rebellion and danger of intelligent machines the central focus of their storytelling.

The future of Artificial Intelligence: can AI replace humans?

Over the years, myths and fake news related to AI and its relationship with humans have multiplied.

Among the reasons for this scenario is precisely the spread of movies centered on a conflictual relationship between humans and Artificial Intelligence, which neglect to explain the benefits of AI as well.

Even within the scientific and academic literature, several voices have supported the idea of a possible replacement of humans by Artificial Intelligence.

This perspective was advanced by Herbert Simon in The Shape of Automation for Men and Management (1965), where he argued that intelligent machines would be able to perform any task performed by humans.

This belief was not lost during the 20th century; on the contrary, it remains strong today, reflected in the unfounded fear that people could easily be replaced by robots able to do their work more efficiently and at a lower cost.

In the literature, however, there is also a contrary and more conciliatory position, which assumes that humans and machines can collaborate in making decisions.

AI decision making: the analytical approach and the intuitive approach

In the decision-making process, two approaches can come into play: the analytical and the intuitive.

The analytical approach implies rational and logical processing of information, and it is in this process that Artificial Intelligence can intervene.

The intuitive approach can be observed in making instinctive decisions, guided by one's "sixth sense", but also by creativity and imagination.

According to the literature, it is these personal gifts and qualities that constitute the true advantage of humans over AI, making them irreplaceable.

Within organizations, this means that Artificial Intelligence can be used, for example, in predictive analytics, which uses data analysis to anticipate future scenarios.

Intelligent machines, in fact, use a statistical and data-driven approach, i.e. based on the analysis and interpretation of data.

Consequently, they can propose new ideas, or even identify relationships and correlations between different factors.

In general, this makes clear the existence of a relationship between Artificial Intelligence and Big Data, i.e. large amounts of data that can be stored and subsequently analyzed by machines.
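As a concrete illustration of this statistical, data-driven approach, the following minimal Python sketch fits a simple model on hypothetical historical data and uses it to anticipate a future scenario. The figures and variable names (advertising spend versus sales) are invented for the example.

```python
import numpy as np

# Hypothetical historical data: monthly advertising spend (k€) and sales (k€).
ad_spend = np.array([10, 12, 15, 18, 21, 25, 28, 30], dtype=float)
sales    = np.array([40, 44, 52, 60, 68, 79, 86, 90], dtype=float)

# 1. Identify a relationship between the two factors (correlation).
correlation = np.corrcoef(ad_spend, sales)[0, 1]
print(f"Correlation between ad spend and sales: {correlation:.2f}")

# 2. Fit a simple statistical model (linear regression) on the past data.
slope, intercept = np.polyfit(ad_spend, sales, deg=1)

# 3. Use the model to predict a future scenario (a planned spend of 35 k€).
planned_spend = 35.0
predicted_sales = slope * planned_spend + intercept
print(f"Predicted sales for {planned_spend} k€ of ad spend: {predicted_sales:.1f} k€")
```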

Such intervention by Artificial Intelligence simplifies the work of humans but does not imply a total takeover by Artificial Intelligence, which lacks the experience and the personal, intuitive judgment that characterize the human being.

Based on this, the idea of a synergistic relationship between the human being and Artificial Intelligence is proposed in the literature.

In other words, humans and AI can collaborate, combining the speed of AI in gathering and analyzing information with the intuition of humans.

However, the human being will have to make the final decision, especially in situations of uncertainty, marked by scarce information, which prevents the proper intervention of Artificial Intelligence.
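One simple way to picture this division of labor is a human-in-the-loop rule, sketched below in Python with invented names and thresholds: the AI proposes a data-driven recommendation with a confidence score, and anything below a chosen threshold (that is, situations with scarce or ambiguous information) is escalated to a person for the final decision.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    label: str         # the option the model suggests
    confidence: float  # how sure the model is, between 0 and 1

def decide(recommendation: Recommendation, min_confidence: float = 0.8) -> str:
    """Accept the AI suggestion only when it is confident enough;
    otherwise escalate the case to a human decision-maker."""
    if recommendation.confidence >= min_confidence:
        return f"auto-approved: {recommendation.label}"
    return "escalated to human review"

# Example usage with made-up confidence scores.
print(decide(Recommendation(label="renew contract", confidence=0.93)))  # auto-approved
print(decide(Recommendation(label="renew contract", confidence=0.55)))  # escalated
```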

Additional Bibliography

Russell, S., Norvig, P. (2003), Artificial Intelligence: A Modern Approach, 2nd edition, Pearson Education, New Jersey.

Learn more: Artificial Intelligence Ethics: Rules and Principles for Making intelligent systems more responsible

Do you want to know more about Pigro? Contact us!