The Artificial Intelligence Act: the new GDPR?

Thomas Di Martino
5 min read · Feb 10, 2022
Background vector created by liuzishan — www.freepik.com, Network icons created by Dreamstale

Learning from the past: a tale of Data Regulations in Europe

In 2016, the publication of the General Data Protection Regulation (GDPR) shed light on many issues related to the transfer, storage, and even auctioning of data. Almost six years later, new consequences are still emerging from this regulation, such as the recent rulings that the use of Google Analytics in Europe is illegal.

AI startups, which represented $38B of total funding worldwide in 2021, also had their fair share of struggles with the newly adopted regulation: a study led by researchers at Boston University and NYU revealed that 69% of surveyed startups created new positions within their company to handle GDPR-related issues, while 75% of them had to delete data to comply with the regulation.

Considering how valuable money and data are to AI startups, the impact of the GDPR on their activity cannot be taken lightly, especially when decisions have to be made on a tight schedule. In the case of the Google Analytics debate, for instance, the CNIL, the French authority tasked with monitoring good practice regarding data privacy, gave companies an ultimatum of one month to comply with the GDPR. To do so, they need to switch from Google Analytics to another solution, one where EU user data is not stored on US servers.

Fast-forward to 2021 and potential new problems: the Artificial Intelligence Act (AIA)

When the GDPR was presented back in 2016, the previous regulation dated back to 1995, when the internet was only just becoming a thing.

Regarding AI regulations, we find ourselves in a similar legal vacuum: AI applications have been developed with no regulation of their design whatsoever; only the GDPR really impacts their development, and it does so by putting constraints on the data rather than on the final product.

Thus, the shock of the GDPR on AI startups might happen again with the AIA, with potentially more dramatic outcomes this time, as business models fundamentally incompatible with the proposed regulation could need to start over from scratch.

Pyramid of risks, inspired by the European Commission website

The AIA is a risk-driven regulation. Artificial Intelligence systems will fall under one of these four risk categories:

  • Minimal Risk: AI-enabled video games, spam filters… The vast majority of currently used AI systems are considered “minimal risk”.
  • Limited Risk: AI systems such as chatbots are considered limited risk, due to the potential confusion that may arise for users who do not know that they are in fact interacting with a machine.
  • High Risk: a LOT of AI software falls under this category, including any software that could put the life and health of citizens at risk (e.g. transport), determine access to education (e.g. automatic exam scoring) or access to services (e.g. credit scoring denying citizens a loan), and many more (cf. the EU website).
  • Unacceptable Risk: the highest risk level concerns AI systems considered a “clear threat to the safety, livelihoods and rights of people” (European Commission).

To comply with the regulation, any software identified as falling into one of these four risk categories will be subject to specific obligations.

Regulations in place for each risk category

While the rules for minimal and unacceptable risks are quite straightforward (respectively, no regulation and an outright ban), systems falling under limited risk and high risk have their own sets of obligations to follow, ranging from informing users that the system they interact with is AI-driven, to strict logging procedures with rigorous standards for risk minimisation and documentation, as sketched below.
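To make the taxonomy more concrete, here is a minimal sketch of how each risk tier could map to its headline obligations. The `RiskTier` enum, the obligation summaries, and the `obligations_for` helper are illustrative assumptions paraphrasing the points above, not the regulation’s actual wording or structure.

```python
from enum import Enum

# Illustrative sketch: the four AIA risk tiers and their headline
# obligations. Names and summaries are paraphrased, not legal text.
class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

OBLIGATIONS = {
    RiskTier.MINIMAL: ["no additional obligation"],
    RiskTier.LIMITED: ["transparency: tell users they are interacting with an AI"],
    RiskTier.HIGH: [
        "risk management and minimisation",
        "logging and traceability",
        "technical documentation",
        "human oversight",
        "conformity assessment before market entry",
    ],
    RiskTier.UNACCEPTABLE: ["banned from the EU market"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the headline obligations for a given risk tier."""
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    for tier in RiskTier:
        print(tier.value, "->", obligations_for(tier))
```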

A typical example of a high-risk system, provided by the European Commission as an illustration, is remote biometric identification.

Technology photo created by rawpixel.com — www.freepik.com

The procedure for complying with these new high-risk regulations is well detailed in the following infographic, taken from the European Commission website.

Process for high risk AI systems (source: European Commission)

Once the system is deployed on the market (Step 4), it is the authorities’ role to monitor its compliance. As stated in the figure above, if substantial changes happen, the certification process has to restart from Step 2. The exact definition of which changes count as substantial is yet to be decided; it will likely be assessed on a case-by-case basis. A rough sketch of this lifecycle follows.
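The following sketch mimics that loop, assuming the four steps from the Commission’s infographic (development, conformity assessment, registration in the EU database, declaration of conformity and market entry). The `is_substantial` predicate is a placeholder of my own, since the regulation has not yet defined which changes count as substantial.

```python
# Illustrative sketch of the high-risk compliance lifecycle described
# above. Step names follow the Commission's infographic; the
# `is_substantial` predicate is a placeholder, since the regulation
# has not yet defined which changes count as substantial.

STEPS = [
    "1. Develop the high-risk AI system",
    "2. Undergo the conformity assessment",
    "3. Register the system in the EU database",
    "4. Sign the declaration of conformity, apply CE marking, enter the market",
]

def lifecycle(changes, is_substantial):
    """Walk the four steps, looping back to Step 2 on substantial changes."""
    for step in STEPS:
        print(step)
    for change in changes:
        if is_substantial(change):
            print(f"Substantial change ({change}): restart at {STEPS[1]}")
        else:
            print(f"Minor change ({change}): authorities keep monitoring")

# Hypothetical usage: treat a retrained model as a substantial change.
lifecycle(
    changes=["UI tweak", "model retrained on new data"],
    is_substantial=lambda c: "retrained" in c,
)
```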

The difference in impact between the GDPR and the AIA for AI startups

While data, which the GDPR is designed around, is merely a resource for most AI startups, algorithms can be their core product. Seeing the development of your product declared illegal would nip many startup projects in the bud, as little to no option for compliance would be left on the table.

On a side note, most tech (and even non-tech) companies have been able to comply with the GDPR. Their versatility and capacity to adapt to new situations make the AIA just another obstacle to clear. The question now is: “How high will they have to jump this time?”

Sources

https://www.wired.com/story/google-analytics-europe-austria-privacy-shield/

Bessen, James & Impink, Stephen & Reichensperger, Lydia & Seamans, Robert. (2020). “GDPR and the Importance of Data to AI Startups”. SSRN Electronic Journal. doi: 10.2139/ssrn.3576714.

https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai


Thomas Di Martino

As a French PhD student, I am passionate about anything close to Artificial Intelligence and Earth Observation.