Artificial Intelligence Act – Finally There!

This article is also available in French.

This is it!

Following extensive negotiations, the EU Parliament adopted the
final text of the AI Act by way of final vote at its plenary
sitting on 13 March 2024. With final procedural and linguistic
checks currently being carried out, it is now only a matter of
weeks before the Act is published in the Official Journal of the
European Union.

Once in force, the AI Act is anticipated to be the world’s
first comprehensive regulation of AI. While the AI Act will enter
into force 20 days after its publication in the Official Journal,
its implementation and enforcement will follow a phased
approach running until the end of 2030.

The AI Act seeks to lay down a normative framework under which
risks associated with AI systems are managed and mitigated, in
order to build trust in such systems and protect the fundamental
rights of EU citizens.

A number of things have changed in the text of the Act since our
last article (see here). Below is a summary of the main points
to note regarding the Act in its final version:

1. Definition of AI System

An “AI system” is now defined as “a machine-based
system designed to operate with varying levels of autonomy and that
may exhibit adaptiveness after deployment and that, for explicit or
implicit objectives, infers, from the input it receives, how to
generate outputs such as predictions, content, recommendations, or
decisions that can influence physical or virtual
environments.”

2. Classifying AI according to its risk

The Act retains its risk-based approach where:

a- AI systems presenting an unacceptable risk are prohibited
(e.g. social scoring systems; systems deploying manipulative or
deceptive techniques or exploiting vulnerabilities to distort an
individual’s behaviour; systems that infer emotions in
workplaces or educational institutions (except for medical or
safety reasons); systems that create or expand facial recognition
databases through untargeted scraping of images);

b- the bulk of the requirements under the Act applies to systems
that are considered “high risk”, as set out below;
and

c- AI systems presenting limited, minimal or no risk (e.g. most
video games; AI-enabled spam filters) are subject to limited
obligations, essentially transparency requirements to ensure end
users understand that they are interacting with an AI system
(e.g. chatbots and deepfakes).

3. New criteria for high risk AI systems

A notable change in the final version of the AI Act concerns the
classification of AI systems as “high risk”.

To summarise, high risk AI systems are those:

a- systems used as a safety component of a product, or themselves
constituting a product, covered by the EU laws listed in Annex II
to the Act and required to undergo a third-party conformity
assessment under those laws (e.g. medical devices; machinery;
toys); or

b- systems falling within the use cases set out in Annex III
(e.g. certain uses of AI for remote biometric identification, in a
country’s critical infrastructure (gas, electricity, water, etc.)
or in vehicles or medical devices).

An AI system falling within Annex III will nonetheless not be
considered high risk if it does not pose a significant risk of
harm to the health, safety or fundamental rights of natural
persons, including by not materially influencing the outcome of
decision-making. This derogation requires the fulfilment of one or
more of the following criteria (a simplified sketch of this logic
follows the list below):

a- the AI system is intended to perform a narrow procedural
task;

b- the AI system is intended to improve the result of a previously
completed human activity;

c- the AI system is intended to detect decision-making patterns or
deviations from prior decision-making patterns and is not meant to
replace or influence the previously completed human assessment
without proper human review; or

d- the AI system is intended to perform a preparatory task to an
assessment relevant for the purpose of the use cases listed in
Annex III.
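
Purely by way of illustration, the above derogation logic can be
reduced to a simple check. The Python sketch below is ours, not
the Act’s: the criterion names are informal shorthand, and it
folds in the profiling override noted under “Specifics to note”
below.

    def annex_iii_system_is_high_risk(
        performs_profiling: bool,
        narrow_procedural_task: bool,
        improves_prior_human_activity: bool,
        detects_patterns_with_human_review: bool,
        preparatory_task_only: bool,
    ) -> bool:
        # Illustrative only. Profiling of individuals always forces
        # the high-risk classification (see below).
        if performs_profiling:
            return True
        # The derogation applies if one or more criteria are met.
        derogation_applies = any([
            narrow_procedural_task,
            improves_prior_human_activity,
            detects_patterns_with_human_review,
            preparatory_task_only,
        ])
        return not derogation_applies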

Specifics to note:

  • An AI system shall always be considered high risk if it
    performs profiling of individuals (e.g. assessing or predicting an
    employee’s work performance or a person’s personal
    preferences, location or movements).

  • A provider of an AI system referred to in the Annex III list of
    high risk systems who considers that their system is not high risk
    will have to document their assessment before that system is placed
    on the market or put into service.

  • The provider of a high risk AI system referred to in Annex III
    (or, where applicable, their authorised representative) shall
    register themselves and their system on an EU-wide database, with
    the exception of high risk AI systems used in critical
    infrastructure (which will need to be registered at national
    level).

  • The above registration requirement also applies where a
    provider has determined that their Annex III system is not high
    risk. A level of assessment and documentation will therefore be
    expected from providers of such systems.

4. General Purpose AI Models (GPAI models)

These are a new addition to the AI Act. A whole chapter is now
dedicated to them, aiming to capture foundation models and other
technologies such as OpenAI’s ChatGPT and the like.

A “general purpose AI model” means
“an AI model, including when trained with a large amount of
data using self-supervision at scale, that displays significant
generality and is capable of competently performing a wide range of
distinct tasks regardless of the way the model is placed on the
market and that can be integrated into a variety of downstream
systems or applications.” This does not cover AI models that
are used before release on the market for research, development and
prototyping activities.

A “general purpose AI system” is
defined as “an AI system which is based on a general purpose
AI model, that has the capability to serve a variety of purposes,
both for direct use as well as for integration in other AI
systems”.

All providers of GPAI models will have to comply with EU copyright
law and put specific documentation in place, including technical
documentation covering the training and testing process and
evaluation results, as well as information and documentation for
downstream providers looking to integrate a GPAI model into their
own AI systems.

In addition to outlining the obligations that will apply to
providers of GPAI models, this new chapter classifies GPAI models
according to their systemic impact.

A GPAI model will be classified as presenting a systemic risk if
it has high impact capabilities or is identified as such by the
European Commission. A GPAI model will be presumed to have high
impact capabilities if the cumulative amount of computation used
for its training, measured in floating point operations (FLOPs),
is greater
than 10^25. For now, this would capture the largest LLMs (Large
Language Models).
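
By way of illustration only, this presumption amounts to a simple
threshold comparison. The Python sketch below is a minimal
illustration; the names used are ours, not the Act’s.

    # The Act's presumption threshold: 10^25 floating point
    # operations of cumulative training compute.
    SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

    def presumed_high_impact(training_flops: float) -> bool:
        # True where the presumption of high impact capabilities
        # is triggered.
        return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

    print(presumed_high_impact(5e25))  # True: presumed systemic
    print(presumed_high_impact(1e24))  # False: below the threshold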

Providers of GPAI models presenting a systemic risk will have to
notify the Commission within two weeks of the relevant criteria
being met. They will also need to perform model evaluations and
adversarial testing; track, document and report serious incidents
to the European AI Office; and ensure adequate cybersecurity
protection. The text allows providers of GPAI models to adhere to
codes of practice in order to demonstrate compliance with their
obligations.

Providers of GPAI models released under a free and open-source
licence will be exempt from certain transparency-related
requirements. This exception will not apply if the models present a
systemic risk.

The European AI Office will be solely competent for the
supervision and enforcement in respect of providers of GPAI
models.

Providers of GPAI models will be subject to potential fines of up
to 3% of their total worldwide annual turnover or EUR 15M,
whichever is higher.

5. Timeline for compliance with the AI Act

The main deadline to note is 24 months after the AI Act enters
into force, but there are some exceptions (a sketch projecting
these milestones onto dates follows the list below):

  • 6 months after entry into force: The
    prohibitions on unacceptable risk AI practices will apply.

  • 9 months after entry into force: Codes of
    practice for GPAI models must be finalised.

  • 12 months after entry into force:

– The rules on GPAI models will apply, with one exception:
providers of GPAI models already placed on the market before that
12-month date shall take the necessary steps to come into
compliance by 36 months after entry into force;

– Member States shall appoint their respective competent
authorities; and

– The EU Commission shall carry out an annual review and
assess the need for amendments to the list of prohibited and high
risk AI practices.

  • 24 months after entry into force:

– The obligations on high risk
AI systems listed in Annex III will apply;

– Member States shall have implemented rules on penalties,
including administrative fines; and

– The EU Commission shall carry out an annual review and
assess the need for amendments to the list of high risk AI
practices.

  • 36 months after entry into force:

– The obligations on high risk AI systems used as a safety
component of a product, or themselves constituting a product,
covered by the EU laws listed in Annex II to the Act and required
to undergo a third-party conformity assessment under those laws
will apply; and

– High-risk AI systems that were already on the market or in use
before the entry into force of the AI Act have to comply only if
they undergo significant changes in their design. However, those
used by public authorities will have 6 years to comply.
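
For planning purposes, and purely by way of illustration, these
milestones can be projected onto concrete dates once the
entry-into-force date is known. The Python sketch below assumes a
hypothetical entry-into-force date, as the Act had not yet been
published at the time of writing.

    from datetime import date

    def add_months(d: date, months: int) -> date:
        # Naive month arithmetic; day-of-month clamped to 28 to
        # keep the sketch simple.
        years, month_index = divmod(d.month - 1 + months, 12)
        return date(d.year + years, month_index + 1, min(d.day, 28))

    # Hypothetical entry-into-force date (actual date not yet known).
    entry_into_force = date(2024, 8, 1)

    milestones = [
        (6, "prohibitions on unacceptable risk AI practices apply"),
        (9, "codes of practice for GPAI models finalised"),
        (12, "GPAI rules apply; competent authorities appointed"),
        (24, "Annex III high risk obligations and penalty rules apply"),
        (36, "Annex II high risk obligations apply"),
    ]

    for months, label in milestones:
        print(f"{add_months(entry_into_force, months)}: {label}")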

6. Penalties

The penalties under the AI Act will essentially apply to three
distinct parties:

a- operators of AI systems;

b- providers of GPAI models; and

c- Union institutions, agencies, and bodies.

In respect of fines, the Act follows a three-tier approach:

a. A fine of up to €35,000,000 or, if the offender is a company,
up to 7% of its total worldwide annual turnover for the preceding
financial year, whichever is higher, for using
prohibited/unacceptable risk systems (or placing such systems on
the market);

b. A fine of up to €15,000,000 or up to 3% of the offender’s total
worldwide annual turnover, whichever is higher, for failure to
comply with the requirements applicable to high-risk systems. This
tier also applies to providers of GPAI models, in respect of which
the Commission is the authority competent to impose a fine; and

c. A fine of up to €7,500,000 or up to 1% of the offender’s total
worldwide annual turnover, whichever is higher, for supplying
incorrect, incomplete or misleading information in response to a
request from notified bodies/national competent authorities.

The Act also emphasises proportionality for SMEs and start-ups,
in respect of which each fine is capped at the lower of the
percentage or fixed amount in each of the above tiers.
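
By way of illustration only, the fine caps described above reduce
to a small calculation. In the Python sketch below, the tier
figures come from the Act, while the function and tier names are
our own shorthand.

    # Fine caps per tier: (fixed amount in EUR, share of worldwide
    # annual turnover). Figures as described above.
    TIER_CAPS = {
        "prohibited_practices": (35_000_000, 0.07),   # tier a
        "high_risk_requirements": (15_000_000, 0.03), # tier b
        "misleading_information": (7_500_000, 0.01),  # tier c
    }

    def max_fine(tier: str, worldwide_turnover: float,
                 is_sme: bool = False) -> float:
        # Higher of the fixed amount and the turnover percentage;
        # for SMEs and start-ups, the lower of the two applies.
        fixed_amount, pct = TIER_CAPS[tier]
        candidates = (fixed_amount, pct * worldwide_turnover)
        return min(candidates) if is_sme else max(candidates)

    # Example: a company with EUR 2bn turnover using a prohibited
    # system faces a fine of up to EUR 140m (7% of turnover).
    print(max_fine("prohibited_practices", 2_000_000_000))  # 140000000.0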

In respect of Union institutions, agencies, and bodies, the
European Data Protection Supervisor can impose administrative fines
of up to €1,500,000.

7. What to do now?

As a first step, any organisation using a form of AI solution
should consider whether that solution falls within the category of
prohibited / unacceptable risk AI systems, especially given the
prohibitions will be the first provisions of the Act to become
applicable. Prohibited / unacceptable risk AI systems include, for
instance, biometric categorisation, behavioural manipulation,
predictive policing and emotion recognition.

Organisations should also start identifying and inventorying the
AI systems they rely on, the risks such systems represent (low,
medium, high) and the role they will play under the Act, in order
to familiarise themselves with the applicable regime and
requirements. The aim is to put some form of AI governance
framework in place.
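
As a purely illustrative starting point, such an inventory could
be as simple as one structured record per AI system. The field
names in the Python sketch below are our own suggestion, not a
requirement of the Act.

    from dataclasses import dataclass
    from enum import Enum

    class Risk(Enum):
        UNACCEPTABLE = "unacceptable"
        HIGH = "high"
        LIMITED = "limited"
        MINIMAL = "minimal"

    @dataclass
    class AISystemRecord:
        # One entry per AI system the organisation relies on.
        name: str
        vendor: str
        purpose: str
        risk: Risk
        role_under_act: str  # e.g. "provider", "deployer", "importer"

    inventory = [
        AISystemRecord(
            name="CV screening tool",
            vendor="ExampleVendor",  # hypothetical
            purpose="recruitment shortlisting",
            risk=Risk.HIGH,
            role_under_act="deployer",
        ),
    ]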

Proper staff training, governance and oversight should also be
put in place for monitoring specific matters relating to AI systems
such as privacy, data quality/integrity, transparency,
explainability and cybersecurity.

From a data protection perspective, a Data Protection Impact
Assessment will most likely be required in respect of AI systems
already being used and/or the introduction of any such systems
within the organisation.

Finally, the extent of the requirements under the Act means that
significant compliance costs will likely have to be incurred and
organisations should consider what budget will need to be allocated
for compliance purposes.

There is no doubt this piece of legislation will bring about its
own challenges and opportunities and will require organisations to
adapt and tread carefully when using AI.

This article was prepared with the assistance of Zoe Dunne and
Kellie McDermott.

The content of this article is intended to provide a general
guide to the subject matter. Specialist advice should be sought
about your specific circumstances.
