Building Your AI Model The Right Way – Part III: A Guide For Boards

Artificial Intelligence (AI) has become a focal point in the
corporate world. While technical teams grapple with the feasibility
of AI creation, and executives strategize on budget allocation for
its development and deployment, it is imperative for the board of
directors to deliberate on their role in this context. Beyond
considering the competitive advantage and return on investment,
board members must also contemplate the new obligations and risks
associated with AI adoption.

The development and utilization of AI in a company’s
offerings could entail several obligations. First, the company must ascertain whether the proposed AI is permissible in the jurisdictions where it will be deployed or made accessible to customers.
For example, the recently approved EU AI Act (see our previous post) prohibits certain AI use cases,
such as social credit scoring, certain instances of facial
recognition data scraping, and select biometric data processing.
Privacy laws also grant consumers rights that companies utilizing
AI will have to account for, such as the right to have their data deleted, to be free from automated processing that may significantly affect them, and to be informed transparently about the AI being used, including how the model works.
Compliance with these rights and other legal requirements
necessitates the incorporation of specific features into the AI
model.

In October 2023, President Biden issued Executive Order 14110
(EO) on the safe, secure, and trustworthy development and use of AI. This EO mandates rigorous testing (including the
use of dedicated “red teams” to identify AI flaws and
vulnerabilities), enhanced security, compliance with forthcoming standards from the National Institute of Standards and Technology (NIST), and the protection of civil rights by
avoiding AI-based discrimination. Companies in the federal supply
chain should pay particular attention to the obligations outlined
in this EO; they will feel its impact almost immediately through increased scrutiny in the procurement process as federal agencies hasten to implement these changes.

The adoption of this relatively new technology, along with its
associated laws, executive orders, and obligations, brings
corresponding risks. From an operational perspective, AI is a
complex technology, and its development or implementation can
introduce additional cybersecurity risks, either directly or
through a vendor. These risks are heightened when large volumes of personal data used to train the AI are exposed or compromised, which can lead to data breach liability and consequent reputational damage. Despite the limited
number of AI-specific laws currently in place, legal risk should be
a primary concern, whether arising indirectly from a privacy law or
directly from laws specifically addressing AI. Under the EU AI Act, for example, violations can result in fines of up to 7% of global annual turnover (or €35 million, roughly $37 million, if greater) and a ban on the offending AI system in one or more countries, both of which would negatively impact a company’s bottom line. In certain cases, a company may also face lawsuits or class actions brought by consumers affected by the AI where the alleged injury arises from rights granted under certain privacy laws.

Boards must carefully weigh these additional obligations and
risks against the benefits offered by AI, especially in an era
where board members are increasingly facing personal liability for
decisions that may not align with their duties to the company.
Staying informed is often the best defense for a board, and Brown
Rudnick is prepared to guide board members through the risks,
obligations, and best practices associated with using AI.

As the EO itself emphasizes, the use of new technologies, such as AI, does not excuse
organizations from their legal obligations, and hard-won consumer
protections are more important than ever in moments of
technological change. The Federal Government will enforce existing
consumer protection laws and principles and enact appropriate
safeguards against fraud, unintended bias, discrimination,
infringements on privacy, and other harms from AI.

The content of this article is intended to provide a general
guide to the subject matter. Specialist advice should be sought
about your specific circumstances.
