
Privacy protection and Artificial Intelligence: where are we?

How is the protection of personal data and privacy being dealt with in an ever more AI-enabled world?

AI has been developing at an exponential rate in recent years, entering everyone's everyday life. Many questions arise when you consider that machine learning, on which intelligent systems are based, is designed to process data, sometimes in huge amounts. Where does this data come from? Is it processed ethically, in a way that respects personal, social, and individual rights? How can we ensure privacy protection in an AI-enabled world?

How AI works

Artificial Intelligence increases our ability to interpret reality. It can be applied in many areas: helping users through the purchasing process, navigating to a place, translating text, supporting a medical diagnosis, making a reservation or a phone call, screening job candidates, and even playing a game of chess or creating a work of art.

To produce these kinds of outputs, AI must have enough inputs (i.e., data) to allow it to draw accurate conclusions: the best results are achieved with a large amount of data at hand, combined with self-learning (machine learning).
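To see this dependence on data concretely, here is a minimal sketch (Python with scikit-learn, on synthetic data; the numbers are illustrative, not drawn from any real AI product): the accuracy of even a simple model grows with the amount of training data it is fed.

```python
# Minimal sketch: model quality grows with the amount of training data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the real-world data an AI system would collect.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train on progressively larger slices of the same data.
for n in (50, 500, len(X_train)):
    model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    print(f"trained on {n:>4} samples -> test accuracy: {model.score(X_test, y_test):.3f}")
```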

Data "feed" Artificial Intelligence

The data collected by AI to perform its tasks may cover different domains and come from different sources. Specifically, they are:

  • data that allow direct identification (personal data and images);

  • data that allow indirect identification (e.g. tax code, IP address);

  • data revealing particular characteristics of individuals, such as ethnic origin, religious or philosophical beliefs, political opinions, health status, sexual life and orientation, genetic data, biometric data;

  • judicial data, i.e. data relating to criminal convictions or offences or related to security measures;

  • data relating to electronic communications; and

  • geolocation data.

All this data is necessary to "train" the AI but, as is obvious, it is often sensitive personal information.

Artificial intelligence and the right to privacy

When we visit a website, we often consent superficially to the use of our data, perhaps because reading the full contents of the privacy policy would take hours, perhaps days!

However, when we agree to give up part of the confidentiality of the data that concerns us (our right to privacy), we should realize what this means.

From each of our visits, clicks, and interactions, websites collect, including through AI, a mass of data that is then used to propose content related or useful to us. This is why data can be said to generate value.

The big online platforms (also called "superstar companies") are the ones that extract value from this huge amount of user data, providing profitable services in return, financed partly or entirely by advertising.

However, we need to be aware of the mechanisms behind these systems, since what often determines whether AI is "good" or "bad" is how it is used.

One negative use of data is, for example, the manipulation of public opinion through the spread of fake news. Disinformation campaigns, which everyone has come across on the web at least once, are the most common (and most actively combated) risky use of user data.

The news shown can be completely fake, or highlight only one side of an issue, a single point of view, thus promoting polarization and the spread of social discrimination.

Particularly risky is the use of AI algorithms to automate decisions in the fields of health, finance, and justice, since models trained on a restricted dataset can be skewed by bias in one direction or another.
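As a concrete illustration, here is a minimal, hypothetical sketch (synthetic data, scikit-learn; the "income" and "group" features are invented for the example): a model trained on historical decisions that were skewed against one group learns to reproduce that skew, even for otherwise identical individuals.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Hypothetical historical decisions: at the same income level, group B
# was approved far less often than group A (a biased training set).
group = rng.integers(0, 2, n)      # 0 = group A, 1 = group B
income = rng.normal(50, 10, n)
approved = (income > 45) & ((group == 0) | (rng.random(n) < 0.3))

X = np.column_stack([income, group])
model = LogisticRegression(max_iter=1000).fit(X, approved)

# The trained model now penalizes group B even at an identical income.
same_income = [[50.0, 0.0], [50.0, 1.0]]
print(model.predict_proba(same_income)[:, 1])  # approval probability: A vs B
```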

Even in the political arena, AI's use of users' data can be a particularly thorny topic. As the Cambridge Analytica case testifies, the political debate can be influenced by the information we receive through the Internet and social networks, up to, in the worst-case scenario, a constantly monitored and surveilled society. 

For this reason, particular attention is required from policymakers, who will have to implement new forms of protection to safeguard the sensitive data of consumers and citizen-users.


Legislation on personal data protection

Since the issue of ensuring privacy protection for individuals has become increasingly relevant in recent decades, every country in Europe has already taken measures to tackle the problem.

At the EU level, for example, there are the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS). The former is an independent European body that contributes to the consistent application of data protection rules throughout the European Union and promotes cooperation between the EU’s data protection authorities. The EDPS is an independent supervisory authority set up to protect fundamental rights and freedoms in connection with the processing of personal data, and to ensure respect for individuals' dignity.

Given the need to manage, normalize, and regularize the flow of personal data that, among other things, feeds AI, and to regulate the whole complex issue of privacy protection, a code on personal data protection was drafted.

European Community legislation took up the challenge of keeping pace with rapid technological evolution, first through Directive 95/46/EC, which then evolved into a full-fledged regulation: EU Regulation 2016/679, the so-called General Data Protection Regulation (GDPR).

While there is no clear and explicit reference to Artificial Intelligence, the Regulation applies to anyone who acquires and processes individuals' personal data, thus including those who make use of AI systems. In fact, under the GDPR, we are required to:

  1. define the purposes of the processing;

  2. inform about the use made of artificial intelligence;

  3. collect consent to data processing and profiling;

  4. determine the legal basis;

  5. assess the impact that the use of AI has on individuals;

  6. give a prospectus of how the technology works, to identify the criteria for its operation;

  7. intervene when data subjects' rights are violated;

  8. communicate and inform in cases of data breaches.

However, machine learning introduces a new element into the already difficult relationship between artificial intelligence and personal data protection. In fact, it could come into conflict with Article 22 of the GDPR and Recital 71 thereof, which express a general prohibition on subjecting the data subject (the person to whom the data refers) to automated decision-making processes devoid of human intervention.

In this sense, an autonomous evolution of, or change in, the conditions of processing by the AI would appear to lack support in the contractual basis and would, therefore, be unlawful.

However, the latest trends in artificial intelligence support continuous interaction between humans and machines, since the latter are not (yet) intelligent enough to operate in complete autonomy. Moreover, according to the accountability principle, it is the data controller who has to put in place "appropriate technical and organizational measures" to ensure (and demonstrate) compliance with the GDPR.
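To give an idea of what such human intervention can look like in practice, here is a minimal, hypothetical sketch of a human-in-the-loop gate; the confidence threshold and field names are assumptions made for illustration, not anything prescribed by the GDPR.

```python
from dataclasses import dataclass

# Assumed policy value for this sketch, not a legal requirement.
CONFIDENCE_THRESHOLD = 0.9

@dataclass
class Decision:
    outcome: str
    confidence: float
    needs_human_review: bool

def decide(score: float) -> Decision:
    """Turn a model score in [0, 1] into a decision, escalating unclear cases."""
    outcome = "approve" if score >= 0.5 else "reject"
    confidence = max(score, 1.0 - score)
    # Low-confidence cases are routed to a person, so no individual is
    # subject to a purely automated decision.
    return Decision(outcome, confidence, confidence < CONFIDENCE_THRESHOLD)

print(decide(0.97))  # confident -> applied automatically
print(decide(0.60))  # uncertain -> flagged for human review
```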

The Regulation also introduces the principle of "Privacy by design", according to which data protection must be built in from the design stage of the technology or process through which data will be processed.

The recent study "The Impact of the General Data Protection Regulation on artificial intelligence", carried out in the context of the EPRS (European Parliamentary Research Service) in June 2020, is part of this approach.

We are therefore witnessing a reversal of perspective, in which the technology itself must, even before existing, comply with laws on sensitive data protection in order to respect the individual’s fundamental right to privacy.

This context also includes data pseudonymization, analyzed by the above-mentioned study as a mechanism to ensure the efficiency of AI: the degree to which the collected information identifies a person is substantially reduced, either in part or, as occurs with anonymization, entirely.
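As an illustration (a minimal sketch, not the specific mechanism described in the study), pseudonymization can be as simple as replacing direct identifiers with keyed hashes: records remain linkable for analysis and training, but no longer name the individual, as long as the secret key is stored separately from the data.

```python
import hashlib
import hmac

# Hypothetical key: in practice it must be managed and stored separately
# from the dataset, otherwise the pseudonyms could be re-identified.
SECRET_KEY = b"keep-this-key-out-of-the-dataset"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier (name, email, tax code) with a stable token."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

record = {"email": "mario.rossi@example.com", "purchases": 12}
record["email"] = pseudonymize(record["email"])
print(record)  # same token for the same person, but no direct identification
```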

The EU AI Act and new scenarios

The Artificial Intelligence Act is the first proposed European law on AI. It assigns applications of AI to three risk categories:

  • first level: applications and systems that create an unacceptable risk, such as government-run social scoring of the type used in China, are banned;

  • second level: high-risk applications, such as a CV-scanning tool that ranks job applicants, are subject to specific legal requirements;

  • third level: applications not explicitly banned or listed as high-risk are largely left unregulated.

This proposed Regulation was presented by the EU Commission on 21 April 2021 and will become law only when the European Council and Parliament agree on a common version of the text.

The goal is to find the right balance between technological development and personal data protection, between artificial intelligence and privacy, and to promote greater transparency of AI systems in the name of a genuine digital ethics.

(Source: Agenda Digitale – “Intelligenza artificiale, ecco come il GDPR regola la tecnologia di frontiera", 11/10/19, by R. Goretta)

 

Read also: Focus on Knowledge Management: how to automate knowledge management

 

Would you like to know more about Pigro and how it works? Don't hesitate to contact us!