AI and Privacy: Navigating the Challenges in an AI-Enabled World

As AI has developed at an exponential rate in recent years and become ever more integrated into our everyday lives, it has raised many important questions.

We must consider where the data that fuels these intelligent systems comes from and whether it is being handled ethically, with respect for individual and social rights.

Furthermore, we need to explore how we can ensure privacy protection in a world driven by AI.

How AI works

Artificial Intelligence increases our ability to interpret reality. It can be applied in many areas: helping the user through the purchasing process, providing directions to a place, translating text, supporting a medical diagnosis, making a reservation or a phone call, selecting staff for a company, even playing a game of chess, creating a work of art, or generating any type of content.

To produce these kinds of outputs, AI must have enough inputs (i.e., data) to allow it to draw precise conclusions: the best results are achieved with a large amount of data at hand, typically exploited through self-learning (machine learning).
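To make this concrete, here is a minimal, hypothetical Python sketch (it uses scikit-learn, and the toy data is invented for illustration, not taken from any system discussed here). A small model is "fed" labelled examples and then draws a conclusion about an unseen case:

    # Minimal sketch: a model's "conclusions" depend on the data that feeds it.
    from sklearn.linear_model import LogisticRegression

    # Toy training data: [hours of use, pages visited] -> did the user buy? (1/0)
    X_train = [[1, 3], [2, 5], [8, 40], [9, 35], [0, 1], [7, 30]]
    y_train = [0, 0, 1, 1, 0, 1]

    model = LogisticRegression()
    model.fit(X_train, y_train)  # "training": the model learns patterns from the inputs

    # The trained model can now draw a conclusion about a user it has never seen.
    print(model.predict([[6, 25]]))  # e.g. [1]: likely to buy

The more representative examples the model sees, the more precise its conclusions become, which is exactly why AI systems are so hungry for data.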

Data "feed" Artificial Intelligence

The data collected by AI to perform its tasks may cover different domains and come from different sources. Specifically, they may include:

  • data that allow direct identification (personal data and images);

  • data that allow indirect identification (e.g. tax code, IP address);

  • data revealing particular characteristics of individuals, such as ethnic origin, religious or philosophical beliefs, political opinions, health status, sexual life and orientation, genetic data, biometric data;

  • judicial data, i.e. data relating to criminal convictions and offences or related security measures;

  • data relating to electronic communications; and

  • geolocation data.

All this data is necessary to "train" the AI, but, as is obvious, much of it is sensitive personal information.
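As a purely illustrative sketch (the field names and the category set below are invented for this example, not an official GDPR mapping), a data pipeline might separate special-category fields from ordinary ones before a record is used for training:

    # Hypothetical sketch: withhold special-category data before training.
    # Field names and categories are illustrative, not a complete GDPR mapping.
    SPECIAL_CATEGORIES = {"ethnic_origin", "religious_beliefs", "political_opinions",
                          "health_status", "sexual_orientation", "genetic_data",
                          "biometric_data"}

    def split_record(record: dict) -> tuple[dict, dict]:
        """Separate special-category fields from the rest of a user record."""
        sensitive = {k: v for k, v in record.items() if k in SPECIAL_CATEGORIES}
        ordinary = {k: v for k, v in record.items() if k not in SPECIAL_CATEGORIES}
        return ordinary, sensitive

    record = {"ip_address": "203.0.113.7", "health_status": "asthma", "clicks": 42}
    ordinary, sensitive = split_record(record)
    print(ordinary)   # {'ip_address': '203.0.113.7', 'clicks': 42}
    print(sensitive)  # {'health_status': 'asthma'} -> needs stricter safeguards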

Preserving Privacy in the Age of Artificial Intelligence: A Visionary Scenario

As we delve deeper into the era of artificial intelligence, the need to protect privacy becomes increasingly crucial. In this visionary scenario, it is imperative that we find innovative ways to safeguard personal data while still harnessing the power of AI.

In today's digital landscape, we often consent to the use of our data without fully understanding the consequences. We click "agree" without realizing the extent to which our personal information is being collected and utilized. Websites, with the help of AI, gather an immense amount of data from our visits, clicks, and interactions. This data is then used to tailor content specifically to us or to provide personalized services. However, it is important to recognize that this data is sensitive and personal, and therefore must be handled with care.

The rise of "superstar companies," the online platforms that extract value from user data, has brought attention to the potential risks and disputes associated with the use of personal data by AI. Recent events, such as the Italian Supervisory Authority's decision against OpenAI, highlight the need for regulations and transparency in data processing. OpenAI's ChatGPT, a generative AI platform capable of simulating human conversations, has drawn attention for its ability to generate content on any topic. However, concerns were raised about the lack of notice given to users and the absence of filters to protect minors.

One of the most significant risks associated with AI and personal data is the manipulation of public opinion through the spread of fake news. Disinformation campaigns, often fueled by user data, can promote polarization and social discrimination. Furthermore, the use of AI algorithms in critical areas such as healthcare, finance, and justice raises concerns about biased decision-making. The infamous Cambridge Analytica case serves as a stark reminder of how AI's use of user data can influence political debates and potentially lead to a society under constant surveillance.

To address these challenges, legislation on personal data protection has been implemented at both national and European levels. The General Data Protection Regulation (GDPR) in the European Union sets out guidelines for the acquisition and processing of personal data, including the use of AI systems. It requires the definition of processing purposes, transparency in AI use, consent for data processing and profiling, assessment of the impact on individuals, disclosure of technology criteria, intervention in case of rights violations, and communication in data breach incidents.

However, the evolving field of machine learning introduces new complexities to the relationship between AI and personal data protection. Article 22 of the GDPR expresses a general prohibition on subjecting data subjects to automated decision-making processes without human intervention. This raises questions about the autonomy of AI systems and the need for accountability on the part of data controllers. The principle of "Privacy by design" emphasized in the GDPR underscores the importance of integrating data protection measures from the inception of AI technologies and processes.

In line with the growing concerns surrounding AI and privacy, the European Union has proposed the Artificial Intelligence Act. This groundbreaking legislation aims to protect privacy while addressing the risks associated with AI applications. It categorizes AI applications into three risk levels and introduces specific requirements for each category to ensure transparency, accountability, and the protection of individuals' fundamental rights.

In this visionary scenario, we must strike a balance between the potential of AI and the protection of privacy. By implementing robust regulations, fostering transparency, and promoting responsible AI practices, we can navigate the age of artificial intelligence while preserving the privacy and dignity of individuals.

The Intersection of Artificial Intelligence and Privacy Rights

When we browse a website, we often give our consent to the use of our data without fully considering the implications. However, it is important to understand the significance of sacrificing our privacy rights when we agree to share our confidential information.

Each time we visit a website, click on links, or interact with online platforms, our data is collected, often with the help of AI. This data is then used to provide us with personalized content or services that are relevant to our needs. This is why data is considered valuable.

The online platforms, also known as "superstar companies," extract value from the vast amount of user data they collect. In return, they offer profitable services, often funded through advertising.

Nevertheless, it is crucial for us to be aware of the underlying mechanisms behind these systems. The distinction between "good" and "bad" AI often lies in how it is utilized and the ethical considerations taken into account.

Risks and disputes related to the use of personal data by AI

Recently, there has been a lot of talk about the decision of the Italian Supervisory Authority (SA) ordering OpenAI to limit the processing of Italian users' data.

The US company owns ChatGPT, the generative AI platform that has been making headlines for months and stands out as one of the biggest tech innovations, ready to revolutionise almost all industries.

Thanks to its ability to simulate human conversations, ChatGPT can answer users' questions, create content on any topic, translate into any language, and generate all kinds of text.

In its decision, the SA notes the lack of notice for the users whose data are collected by OpenAI, as well as the absence of a filter that prevents minors from using the service.

Since then, a European task force has been created to work together on enforcing the European Data Protection Regulation, and OpenAI has been provided with a list of requirements to fulfil on transparency, data subjects' rights, and the legal basis for processing.

A further example, this time of an extremely risky form of data use, is the manipulation of public opinion through the spread of fake news. Disinformation campaigns, which everyone has come across on the web at least once, are the most common (and most actively combated) risky use of user data.

The news shown can be completely fake, or it can highlight only part of an issue, a single point of view, thus promoting polarization and the spread of social discrimination.

Particularly risky is the use of AI algorithms to automate decisions in the fields of health, finance, and justice, since models trained on a restricted dataset can be skewed by biases in one direction or another.

Even in the political arena, AI's use of users' data can be a particularly thorny topic. As the Cambridge Analytica case testifies, the political debate can be influenced by the information we receive through the Internet and social networks, leading, in the worst-case scenario, to a constantly monitored and surveilled society.

For this reason, particular attention is required from policymakers, who will have to implement new forms of protection to safeguard the sensitive data of consumers and citizen-users.

Legislation on personal data protection

Since the issue of ensuring privacy protection for individuals has become increasingly relevant in recent decades, every country in Europe has already taken measures to tackle the problem.

At the EU level, for example, there are the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS). The former is an independent European body that contributes to the consistent application of data protection rules throughout the European Union and promotes cooperation between the EU's data protection authorities. The EDPS is an independent supervisory authority set up to protect fundamental rights and freedoms in connection with the processing of personal data, and to ensure respect for individuals' dignity.

Given the need to manage and regulate the flow of personal data required, among other things, to feed AI, and to address the whole complex issue of privacy protection, a code on personal data protection was drafted.

European Community legislation took up the challenge of keeping pace with rapid technological evolution, first through Directive 95/46/EC, which evolved into a full-fledged regulation, EU Regulation 2016/679, the so-called General Data Protection Regulation (GDPR).

While there is no clear and explicit reference to Artificial Intelligence, this Regulation applies to anyone who acquires and processes personal data from individuals, thus including those who make use of AI systems. In fact, under the GDPR, data controllers are required to (see the sketch after this list):

  1. define the purposes of the processing;

  2. inform about the use made of artificial intelligence;

  3. collect consent to data processing and profiling;

  4. determine the legal basis;

  5. assess the impact that the use of AI has on individuals;

  6. provide an overview of how the technology works, identifying the criteria for its operation;

  7. intervene when data subjects' rights are violated;

  8. communicate and inform in cases of data breaches.
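As a rough illustration only, assuming a structure that the GDPR itself does not prescribe, such obligations could be tracked in code as a record attached to each AI use case:

    # Hypothetical sketch of a "record of processing" for an AI use case.
    # The field names are invented; items 7-8 (intervention, breach handling)
    # are processes, represented here only by the contacts to notify.
    from dataclasses import dataclass, field

    @dataclass
    class ProcessingRecord:
        purposes: list[str]           # 1. defined purposes of the processing
        ai_use_disclosed: bool        # 2. users informed that AI is used
        consent_obtained: bool        # 3. consent to processing and profiling
        legal_basis: str              # 4. e.g. "consent", "contract"
        impact_assessed: bool         # 5. impact of AI on individuals assessed
        operation_criteria: str       # 6. how the technology works
        breach_contacts: list[str] = field(default_factory=list)

    record = ProcessingRecord(
        purposes=["personalised recommendations"],
        ai_use_disclosed=True,
        consent_obtained=True,
        legal_basis="consent",
        impact_assessed=True,
        operation_criteria="ranking model trained on click data",
    )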

However, machine learning represents a new element in the already difficult relationship between artificial intelligence and personal data protection. In fact, it could come into conflict with Article 22 of the GDPR and Recital 71 thereof, which express a general prohibition on subjecting the data subject (the person whose data are processed) to automated decision-making processes devoid of human intervention.

In this sense, an autonomous evolution of, or change in, the conditions of processing by the AI would appear to lack support in the contractual basis and would, therefore, be unlawful.

However, the latest trends in artificial intelligence support the continuous interaction between humans and machines, since the latter would not (yet) be so intelligent as to operate in complete autonomy. Moreover, according to the accountability principle, it is the data controller who has to put in place "appropriate technical and organisational measures" to ensure (and demonstrate) compliance with the GDPR.

The Regulation also introduces the principle of "Privacy by design", according to which data protection must be implemented as early as the design of the technology or process through which data will be processed.

The recent study "The Impact of the General Data Protection Regulation on artificial intelligence", carried out in the context of the EPRS (European Parliamentary Research Service) in June 2020, is part of this approach.

We are therefore witnessing a reversal of perspective, in which the technology itself must, even before it exists, be designed to comply with laws on sensitive data protection in order to respect the individual's fundamental right to privacy.

This context also includes data pseudonymization, analyzed by the above-mentioned study as a mechanism for reconciling AI's efficiency with privacy: in all the information collected, the degree to which it can be linked back to a person is substantially reduced, either in part (pseudonymization) or entirely, as occurs with anonymization.
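A minimal sketch of the difference between the two techniques, assuming a simple user record and using a keyed hash as the pseudonymization mechanism (one common choice, not the one mandated by the study):

    # Minimal sketch: pseudonymization vs. anonymization of a user record.
    import hashlib
    import hmac

    SECRET_KEY = b"kept-separately-by-the-controller"  # hypothetical key

    def pseudonymize(record: dict) -> dict:
        """Replace the direct identifier with a keyed hash; whoever holds
        the key can still re-link records belonging to the same person."""
        out = dict(record)
        out["user_id"] = hmac.new(SECRET_KEY, record["user_id"].encode(),
                                  hashlib.sha256).hexdigest()
        return out

    def anonymize(record: dict) -> dict:
        """Drop identifying fields entirely: no re-identification possible."""
        return {k: v for k, v in record.items()
                if k not in {"user_id", "ip_address"}}

    record = {"user_id": "mario.rossi", "ip_address": "203.0.113.7", "clicks": 42}
    print(pseudonymize(record))  # same data, identifier replaced by a stable token
    print(anonymize(record))     # {'clicks': 42}: the personal link is gone

Pseudonymized data still counts as personal data under the GDPR, because re-identification remains possible; anonymized data, by contrast, falls outside its scope.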

The EU AI Act and new scenarios

The Artificial Intelligence Act is the first proposed European law on privacy protection and AI. It assigns AI applications to three risk categories:

First level - applications and systems that create an unacceptable risk, such as government-run social scoring of the type used in China, are banned.

Second level - high-risk applications, such as a CV-scanning tool that ranks job applicants, are subject to specific legal requirements.

Third level - applications not explicitly banned or listed as high-risk are largely left unregulated.

The proposed Regulation was presented by the EU Commission on 21 April 2021, but in December 2022 the Council agreed on a counter-proposal that partially altered the original content. To date, therefore, the text is still a work in progress and will become law only when the Council and the European Parliament agree on a common version.

There are, in fact, aspects still to be clarified concerning the potential overlap between the GDPR and the proposed AI Regulation. The former focuses on data processing, while the AI Act covers the technology used to carry out such processing; yet both laws address the purposes of personal data processing, the use of the AI system, and the by-design approach, and both require that risks to fundamental rights be identified. The proposed AI Act, moreover, explicitly aims to take a "human-centric" approach and to shape an AI that is reliable and safe for individuals.

(Source: Ginevra Cerrina Feroni (Italian Supervisory Authority), "Artificial Intelligence and the role of personal data protection", 14 Feb. 2023)

The goal is to find the right balance between technological development and personal data protection, between artificial intelligence and privacy, and to promote greater transparency of AI systems, grounded in genuine digital ethics.

(Source: R. Goretta, Agenda Digitale, "Intelligenza artificiale, ecco come il GDPR regola la tecnologia di frontiera", 11 Oct. 2019)

 


Would you like to know more about Pigro and how it works? Don't hesitate to contact us!