The EU Regulation on Artificial Intelligence: Balancing Innovation and Ethics

On June 14, 2023, the European Parliament approved its negotiating position on the Artificial Intelligence Act (AI Act or AIA). This marks the world's first comprehensive regulation of Artificial Intelligence, a significant step towards governing the use of one of the most debated technologies of our time.
Let's look at what the AIA entails, when it will come into force, and what its objectives are.

 

Why an EU Regulation on Artificial Intelligence

Artificial Intelligence is a highly debated topic. Some fear that it may render human skills obsolete, while others believe it can only enhance and simplify people's lives.
Analyzing the implications of a new technology is essential to using it consciously and effectively.

As with any other technology, there comes a time to establish clear, institutional principles that balance innovation with ethics.
AI is no exception. That's why the European Union, ahead of every other jurisdiction, has decided to approve a regulation ensuring the responsible development and use of this technology.

It's called the Artificial Intelligence Act, and it was approved by the European Parliament last June. What are the primary objectives of this regulation? What are the key points? What will be the impact on the AI industry?

Let's delve into it.


The Foundations of the Artificial Intelligence Act (AIA)

The EU Regulation on Artificial Intelligence aims to address the challenges and opportunities of AI in Europe.

This regulation is based on three fundamental pillars:

  1. Trust and Reliability: The Regulation sets requirements to ensure that AI systems are safe and reliable. For example, there is an obligation to constantly monitor the performance of systems, prevent their deterioration, and ensure that they are designed to be transparent and understandable.
  2. Ethics and Respect for Fundamental Rights: The AIA places special emphasis on AI ethics. AI systems must adhere to human rights principles, including equality, non-discrimination, and privacy respect. Similarly, the Regulation prohibits AI systems that could negatively affect human dignity in any way.
  3. Supervision and Regulation: The European Union will establish a new oversight body to monitor the application of the Regulation and ensure compliance, with severe penalties for companies that violate these rules (fines of up to 6% of a company's global annual turnover). 

The Artificial Intelligence Act is likely to come into effect between 2024 and 2025.



Risk Assessment in AI Systems

The AIA establishes four levels of risk to regulate AI systems, with corresponding obligations and prohibitions for companies developing and using this technology.
At the top of the scale, the Regulation deems certain AI applications an "unacceptable risk" in light of their potential implications for people; these are banned outright.

Specifically, unacceptable risk covers cognitive behavioral manipulation of individuals or of specific vulnerable groups, social scoring, and real-time remote biometric identification, such as facial recognition.

This means that the Regulation will prohibit a range of AI-based systems that:

  • Use subliminal, manipulative, or deceptive techniques to distort behavior.
  • Exploit vulnerabilities of individuals or specific groups.
  • Use biometric categorization based on sensitive attributes or characteristics.
  • Engage in social scoring or are used to assess trustworthiness.
  • Create facial recognition databases.
  • Detect emotions in law enforcement or workplace and education settings.

According to the AIA, Artificial Intelligence represents a "high risk" when it could compromise safety or fundamental rights. 
This category includes AI used in transport infrastructure, medical products, creditworthiness assessment, and the administration of justice.
Finally, there are the "limited risk" and "minimal risk" categories for systems that pose little or no threat to basic ethical principles: chatbots, for instance, fall under limited risk and carry transparency obligations, while video games and spam filters are minimal risk.

Impact of the Regulation on the AI Industry

The Artificial Intelligence Act will undoubtedly have a significant impact on the AI industry in Europe and beyond. 
One of its main implications concerns responsibility: companies investing in AI must do so consciously and ethically, pursuing research and development with a responsible mindset and promoting innovation that respects human rights and diversity.


Artificial Intelligence must not perpetuate biases or discrimination (as is already happening in some cases). Companies must guard against inequality in the design of AI systems and consider the social and ethical implications of their technologies.
In this context, a significant effort will be needed in employee training to ensure understanding of and compliance with the Regulation, along with substantial investment in ethical impact assessments to identify and mitigate risks.


In conclusion, the EU Regulation on Artificial Intelligence represents a significant step forward in the technological world. For the first time, an attempt has been made to ensure that AI is developed and used responsibly and ethically. 
Balancing innovation with ethics is crucial to building a future where Artificial Intelligence positively contributes to society without compromising fundamental rights.
Companies and AI developers should embrace this challenge as an opportunity to demonstrate the value of technology in a responsible and sustainable way.


Want more information about Pigro? Contact us!