Artificial Intelligence systems are programmed to process numerical and non-numerical values, and therefore do not truly understand ethical values. By formally defining criteria and principles, we try to bring AI and ethics together.
Artificial Intelligence ethics is a topic that interests many, from philosophers to AI experts. AI ethical issues have recently been under discussion with the aim of making intelligent systems more ethical and accurate.
How Artificial Intelligence Works
Artificial Intelligence is programmed to perform a precise task quickly and efficiently. It undoubtedly lightens human beings' workload and thus becomes a trusted ally. Just think of the recent launch of ChatGPT, the system capable of generating content of any kind, described by many as a revolution in the field of AI.
Intelligent systems, for example, can analyze a large amount of information and documents, from which they extrapolate essential data or identify particular correlations.
Human beings can obviously obtain the same results, but with far more effort and time.
Artificial Intelligence is built on specific algorithms, designed to ensure that the system carries out a specific task.
As a result, intelligent machines are programmed to make a decision using a precise scheme based on clear and objective information.
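The "precise scheme based on clear and objective information" can be pictured as a simple rule over measurable inputs. The function and thresholds below are hypothetical, invented purely for illustration:

```python
# Toy sketch of a machine decision: a fixed scheme over objective,
# numerical inputs. Function name and thresholds are hypothetical.

def approve_document_review(word_count: int, error_rate: float) -> bool:
    """Decide whether a document passes an automated first screening."""
    MAX_WORDS = 10_000      # hypothetical size limit
    MAX_ERROR_RATE = 0.05   # hypothetical error tolerance
    return word_count <= MAX_WORDS and error_rate <= MAX_ERROR_RATE

print(approve_document_review(8_000, 0.02))   # clear-cut inputs -> True
print(approve_document_review(12_000, 0.02))  # exceeds the limit -> False
```

As long as the inputs are unambiguous numbers, such a scheme works well; the difficulties discussed below begin when the relevant criteria cannot be reduced to numbers.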
Several questions about the ethical issues of Artificial Intelligence, and about how much power intelligent systems should be given, therefore remain to be answered.
Ethics: an elusive concept
Artificial Intelligence is programmed to rationally solve a problem, thus providing a logical response through processing available information.
However, sometimes we have to face challenging situations in our everyday life, situations that we may find difficult to judge unequivocally.
Some concepts are just not that simple to rationalize or define. For example, it is hard to assess what kind of features human behaviour should have in order to be considered morally correct.
Human beings often resolve uncertainties by relying on their gut feeling. Intelligent software cannot do that, as it needs explicit and objective metrics in order to function. When faced with ambiguous situations, such systems struggle to provide the right answer, and the solution they offer may be wrong or distorted.
Artificial Intelligence is not objective
Due to its abilities and potentialities, Artificial Intelligence is often thought to be a flawless and neutral system.
This belief stems from a misconception: since AI makes decisions through rational rather than emotional processes, its choices must be objective. In reality, that is not the case.
Intelligent systems are often opaque, and it is therefore difficult to understand what makes them discriminate against specific groups instead of others.
AI bias: Discrimination and Racism
AI can prove heavily biased, depending on the data used to build the software.
Consequently, if the data are full of errors (perhaps reflecting the prejudices of the programmers) or of historical, cultural or social distortions, the system will make wrong decisions.
Recent events have indeed shown how Artificial Intelligence can take racist and discriminatory decisions.
A premise: intelligent systems work according to a statistical approach. Consequently, they read words and assign them a positive or negative meaning depending on the connotation those words carry socially.
Research has found that Artificial Intelligence tends to associate European names with favourable terms, while African names are associated with negative expressions.
As a result, when intelligent systems are used in recruitment (often for a first screening of applications), the Artificial Intelligence may end up preferring applications that contain European names.
Artificial Intelligence has also been used in the banking sector for some time. Since such systems follow the statistical principle described above, they might be more likely to grant a loan or credit line to a person with a European name.
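The statistical mechanism described above can be sketched with a toy example. This is not a real embedding or recruitment model; the names, words and sentiment labels are invented to show how skewed co-occurrence data alone produces skewed scores:

```python
# Toy illustration of statistical bias: if the training text pairs some
# names with negative words more often, a purely statistical system
# absorbs that association. All data below is invented.
from collections import defaultdict

training_pairs = [            # (name, word seen near it) - fictional corpus
    ("emily", "brilliant"), ("emily", "reliable"),
    ("greg", "reliable"),
    ("jamal", "unreliable"), ("jamal", "brilliant"),
    ("aisha", "unreliable"),
]
sentiment = {"brilliant": +1, "reliable": +1, "unreliable": -1}

scores = defaultdict(float)   # summed sentiment per name
counts = defaultdict(int)     # occurrences per name
for name, word in training_pairs:
    scores[name] += sentiment[word]
    counts[name] += 1

for name in scores:
    print(name, scores[name] / counts[name])
# emily and greg score 1.0; jamal 0.0; aisha -1.0 - purely from the data
```

A screening system that ranked candidates by such a score would prefer some names over others without any explicit rule telling it to discriminate: the bias lives entirely in the data.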
AI and ethics: criteria to be met to design an Artificial Ethical Intelligence
According to Nick Bostrom, Professor of Philosophy at Oxford, for Artificial Intelligence to be ethical, it must meet specific criteria:
1. Algorithms must be transparent, or at least easy to inspect. This makes it possible to understand why the Artificial Intelligence took a certain decision when faced with a request or a problem;
2. The algorithms must be predictable or accompanied by a clear explanation of their results;
3. Intelligent systems must be secure, that is, third parties must not be able to manipulate them with malicious intent;
4. Responsibility for a particular intelligent system must be clearly defined, so that there is a clear point of reference to turn to in case of problems.
The stance of the European Commission regarding Artificial Intelligence ethical issues
The European Commission published an ethics code for Artificial Intelligence as early as 2019.
The document underlines the need for a human-centric approach to AI: intelligent systems are seen as tools that can improve people's well-being.
Intelligent systems are identified as an essential technology to address current and global challenges, such as health and climate change.
In this regard, the European Commission has identified seven principles that artificial intelligence must respect to be ethical and reliable:
human agency and oversight: intelligent systems must support humans in their everyday life and must not reduce their autonomy;
robustness and safety: algorithms must be safe and reliable;
privacy and data governance: citizens must be aware of, and keep control over, the data they share, and this information must not be used to harm or discriminate against them;
transparency: the decisions of intelligent systems must be traceable;
diversity, non-discrimination, and equity: intelligent systems must be accessible to all;
social and environmental welfare: AI should support sustainability and ecological responsibility;
accountability: ensuring responsibility for AI systems and the results they produce.
In 2021, the EU showed further concern about AI and its risks with the proposed Artificial Intelligence Act.
Artificial Intelligence Ethics for companies
IBM, too, has drawn up a practical guide for companies that want to develop AI-based systems that respect ethical values.
The 2021 report "AI Ethics in Action" found that 75% of executives recognize the importance of AI ethics and believe it is a differentiating factor in terms of competitiveness. Moreover, they see the responsibility for creating ethical AI as lying with all divisions, not just technical ones.
The IBM guide provides guidelines and ideas for the development of ethical artificial intelligence, divided into three macro-sections.
The first concerns Vision and Corporate Strategy, identifying the ethical practices to be implemented within organizations.
Secondly, the Governance area is analysed, establishing the management approach to be adopted for the ethical development of AI.
Finally, the Implementation area defines concrete methodologies to integrate ethical artificial intelligence into everyday tasks and to train staff.
Translating ethics into the algorithm: the Algor-Ethics is born
Artificial Intelligence is a new technology with great potential and rapid development, which inevitably clashes with a society that moves more slowly.
Intelligent systems are programmed to make their decisions using numerical values. Consequently, ethics must also be transformed into something understandable to Artificial Intelligence.
Thus the strand of algor-ethics is emerging, based on the idea of translating values and principles into binary language.
Even so, in particularly ambiguous situations Artificial Intelligence could still falter, and the final decision would fall to humans.
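The algor-ethics idea of translating a principle into something a machine can evaluate, while leaving ambiguous cases to people, can be sketched as follows. The function, the fairness metric and every threshold here are hypothetical assumptions, not an established standard:

```python
# Minimal sketch of algor-ethics: an ethical principle becomes usable
# by software only once expressed as an explicit, checkable rule.
# The rule, the fairness_score metric and all thresholds are hypothetical.

def decide(loan_amount: float, fairness_score: float) -> str:
    """Return 'approve', 'reject', or 'defer to human'.

    fairness_score in [0, 1] stands in for some audit metric; values
    near 0.5 are treated as too ambiguous for the machine to judge.
    """
    if 0.4 <= fairness_score <= 0.6:   # ambiguous zone
        return "defer to human"        # the final word stays with people
    if fairness_score > 0.6 and loan_amount <= 50_000:
        return "approve"
    return "reject"

print(decide(30_000, 0.9))  # clear case -> approve
print(decide(30_000, 0.5))  # ambiguous  -> defer to human
```

The design choice worth noting is the explicit "defer" branch: instead of forcing a binary answer in the grey zone, the encoded principle hands the decision back to a human, exactly the fallback the paragraph above describes.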