Operation Of AI Instruments As A Challenge Just Around The Corner



Now and in the foreseeable future, one of the most significant challenges facing policymakers in many countries will be the adoption of sound policies governing the operation of AI instruments. To be politically and socially acceptable, these policies should, on the one hand, enable research and development in this area and the efficient uptake of the technology in general, but, on the other hand, they should regulate the use of AI instruments so as to eliminate, or at least limit, the weight, scope and scale of the unfavorable side effects they may bring about.

In the EU, the first step in this direction was the Commission’s White Paper (White Paper on Artificial Intelligence – A European Approach to Excellence and Trust, COM(2020) 65 final, of February 19, 2020), which indicated the general framework for follow-up legislative steps embodying a politically acceptable AI regulation at the EU level. At present, the legislative process is at its final stage, marked by the European Parliament legislative resolution of 13 March 2024 on the proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (COM(2021)0206 – C9-0146/2021 – 2021/0106(COD)) [“AIR”].

Given the AIR’s broad and comprehensive approach to AI legislation (the text runs to over 450 pages), this publication is limited to key issues and selected aspects of the proposed regulation, including the protection of IP rightsholders.

Basic purpose of the AIR

As its main purpose, the AIR lays down rules which, on the one hand, aim at harnessing dynamically developing AI models and systems but, on the other, also try to leave room for EU companies to keep pace with their leading rivals in the AI field.

As a key concept, Art. 3(1) of the AIR defines “AI system” as “a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”. The draft also provides definitions of other basic terms, such as ‘general purpose AI model’, ‘high-impact capabilities’, ‘systemic risk’, etc.

Main areas of the AIR

Following the basic philosophy of the White Paper, the draft makes use of the idea of a so-called ‘traffic light’ regulation. Based on this idea, the draft differentiates the rules according to the potential risks of harm created in regulated areas by particular models/systems.

As a result, we have: a) a ‘red light’ area, where certain model/system practices are banned altogether, b) a ‘yellow light’ area, where some models and systems are not banned altogether but their use is restricted to varying degrees, and c) a ‘green light’ area, with almost no restrictions in place.

As to the ‘red light’ area, the AIR lays down a list of prohibited artificial intelligence practices which are especially harmful from the fundamental rights perspective. The list includes: a) subliminal, or deceptive and manipulative, techniques meant to influence addressees’ behavior, b) other techniques which ‘exploit any of the vulnerabilities of a person or a specific group of persons due to their age, disability or a specific social or economic situation…’, c) ‘social scoring’ practices, d) real-time biometric identification by law enforcement authorities in publicly accessible spaces (with some exceptions), and e) untargeted scraping of facial images for the purpose of creating or expanding facial recognition databases, as well as some other potentially harmful activities.

As far as the ‘yellow light’ area is concerned, various requirements are imposed on different models and systems, depending on their potential to bring about certain detrimental effects at some specific level of risk.

Main types of models/systems

Based on the above-mentioned approach, the AIR singles out the following main types of models/systems that are of special concern in the AI field:

1) high-risk AI systems, whose use potentially carries a high risk of harm to the health, safety or fundamental rights of natural persons and for which the AI Act specifies classification rules and certain operational requirements. The main areas listed are: biometrics (esp. remote biometric identification systems), critical infrastructure, education and vocational training, employment, workers’ management and access to self-employment, access to and enjoyment of essential private services and essential public services and benefits, law enforcement (in so far as its use is permitted under relevant Union or national law), migration, asylum and border control management (in so far as its use is permitted under relevant Union or national law), and the administration of justice and democratic processes.

Providers of such systems, in addition to the usual registration requirements, are obliged to make sure that their models comply with EU harmonization legislation. Moreover, they should: a) establish a risk management system, b) conduct data governance, c) keep technical documentation and design their systems for record-keeping, d) design their systems to enable human oversight and to ensure their robustness, accuracy and cybersecurity, and e) provide instructions for use for deployers and establish a quality management system.

2) General-Purpose AI models (GPAI), which are described as AI models that, including when trained with a large amount of data using self-supervision at scale, display significant generality and are capable of competently performing a wide range of distinct tasks, regardless of the way they are placed on the market, and that can be integrated into a variety of downstream systems or applications. A good example of general-purpose AI models are large generative AI models, which make possible the creation of content in various forms (text, audio, images or video).

Obligations for providers of General-Purpose AI models

Providers of these models have the following specific
obligations:

a) “… to draw up and keep up-to-date the technical documentation of the model (including its training and testing process and the results of its evaluation)”;

b) “… to draw up and keep up-to-date and make available information and documentation to providers of AI systems who intend to integrate the general-purpose AI model in their AI system”;

This obligation is important bearing in mind that an AI system operator needs to know and understand the model’s specifics in order to safely comply with his or her own obligations. The AIR confirms the need to respect and protect intellectual property rights and confidential business information or trade secrets in accordance with Union and national law. The information and documentation need to be clear and understandable, on the one hand, and must not violate IP rights, on the other;

c) “… to put in place a policy to respect Union copyright law, and in particular to identify and comply with, including through state of the art technologies, a reservation of rights expressed pursuant to Article 4(3) of Directive (EU) 2019/790”; and

d) “… to draw up and make publicly available a sufficiently detailed summary about the content used for training of the general-purpose AI model, according to a template provided by the AI Office”.

For their development and training, the models in question, and esp. large generative AI models, usually need large amounts of text, images, audio, video and other data as input. Some of the content ingested by various data mining techniques may be covered by copyright, so making use of such texts and data requires the copyright holders’ consent.

However, the application of this general rule is more complicated, because Directive (EU) 2019/790, to which the AIR makes direct reference, provides some liberalizing exceptions as far as data mining consent is concerned. Despite these exceptions, however, copyright holders still retain an opt-out option allowing them to reserve their rights. In the latter case, any use of text and data for mining purposes requires their authorization. The ‘reservation of rights’ must be expressed by rightsholders in an appropriate manner, such as by machine-readable means in the case of content made publicly available online. Hence, where a rightsholder has expressly reserved the opt-out rights in an appropriate manner, providers of general-purpose AI models need to obtain an authorization from the rightsholder if they want to carry out text and data mining over such works.
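
What counts as an ‘appropriate’, machine-readable reservation is still evolving in practice. Two conventions often mentioned in this context are the long-standing robots.txt exclusion file and the W3C Community Group’s TDM Reservation Protocol (TDMRep), which signals a reservation through a ‘tdm-reservation’ HTTP header or a site-wide /.well-known/tdmrep.json file. The Python sketch below is purely illustrative and by no means a legally sufficient compliance check; it shows how a mining crawler might look for such signals before processing a page, with a hypothetical URL and user-agent name.

```python
# Illustrative sketch only: checks common machine-readable
# rights-reservation signals before text-and-data mining a URL.
# Assumes the TDM Reservation Protocol (TDMRep) conventions:
# a "tdm-reservation" HTTP header and /.well-known/tdmrep.json.
# This is NOT a complete or legally sufficient compliance check.

import urllib.parse
import urllib.robotparser

import requests


def tdm_reservation_signalled(url: str, user_agent: str = "example-tdm-bot") -> bool:
    """Return True if a machine-readable rights reservation is detected."""
    parsed = urllib.parse.urlparse(url)
    origin = f"{parsed.scheme}://{parsed.netloc}"

    # 1) robots.txt: a conservative reading treats a disallow rule for
    #    our agent as a signal not to mine the page.
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url(origin + "/robots.txt")
    try:
        rp.read()
        if not rp.can_fetch(user_agent, url):
            return True
    except OSError:
        pass  # no robots.txt reachable; fall through to other signals

    # 2) TDMRep HTTP header: "tdm-reservation: 1" reserves rights.
    resp = requests.head(url, allow_redirects=True, timeout=10)
    if resp.headers.get("tdm-reservation") == "1":
        return True

    # 3) Site-wide TDMRep file at a well-known location (an array of
    #    rules; "tdm-reservation": 1 indicates a reservation).
    tdm = requests.get(origin + "/.well-known/tdmrep.json", timeout=10)
    if tdm.ok:
        try:
            for rule in tdm.json():
                if isinstance(rule, dict) and rule.get("tdm-reservation") == 1:
                    return True
        except ValueError:
            pass  # malformed JSON; treat as no signal

    return False


if __name__ == "__main__":
    url = "https://example.com/some-article"  # hypothetical URL
    if tdm_reservation_signalled(url):
        print("Rights reservation detected: obtain authorization before mining.")
    else:
        print("No machine-readable reservation found (verify by other means).")
```

Even where no signal is found, the absence of a machine-readable marker does not by itself establish that rights have not been reserved by other appropriate means.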

That is why all providers of GPAI models are subject to the obligation indicated in point 2c above, regardless of the jurisdiction in which the copyright-relevant acts underpinning the training of these general-purpose AI models take place. What is more, this obligation is complemented by the one indicated in point 2d, whereby providers of the models in question should make it easier for copyright holders to find out whether their legally protected texts or other data were used as input in the relevant mining operations.

Given the possible complexity of the above requirements and the potential financial costs involved, SME providers, including start-ups, are offered simplified ways of complying with these obligations.

A similar rationale is easily seen in the case of the rules governing GPAI models released under free and open-source licenses. In principle, these models are assumed to offer high levels of transparency and openness as long as their “weights, the information on model architecture, and the information on model usage, are made publicly available”.

The AIR stipulates that in the case of free and open-source licenses, the provider of such a GPAI model is not obliged to draw up and keep up-to-date the technical documentation of the model, or to draw up, keep up-to-date and make available information and documentation to providers of AI systems who intend to integrate the general-purpose AI model into their AI systems.

However, this exemption does not apply to GPAI models with systemic risks.

The AIR’s approach to the regulation of these models will be analyzed in my upcoming publication.

The content of this article is intended to provide a general
guide to the subject matter. Specialist advice should be sought
about your specific circumstances.

