Interpretable Algorithms As A Potential Solution To CFPB’s Guidance On AI-Driven Credit Denials



In September 2023, the Consumer Financial Protection Bureau (CFPB) issued guidance on the use of artificial intelligence in issuing credit denials, a practice that has become prevalent among lenders. The CFPB explained that when denying a credit application, lenders must provide a substantive reason for the denial. The guidance further explains
that the Equal Credit Opportunity Act (ECOA)—a law that outlaws discrimination
against credit applicants based on protected
classifications—and its implementing regulation, Regulation
B, prevents lenders and creditors from relying on a checklist of
reasons to deny a credit application if the reasons do not
“specifically and accurately” indicate the principal
reason(s) for the adverse action. Sample reasons are provided in Regulation B's sample forms. According to the
CFPB, “creditors cannot state the reasons for adverse actions
by pointing to a broad bucket.” As one example, the CFPB
explained that “if a creditor decides to lower the limit on a
consumer’s credit line based on behavioral spending data, the
explanation would likely need to provide more details about the
specific negative behaviors that led to the reduction beyond a
general reason like ‘purchasing history.'” Doubling
down, the CFPB further commented that “creditors that simply
select the closest factors from the checklist of sample reasons are
not in compliance with the law if those reasons do not sufficiently
reflect the actual reason for the action taken.”

This guidance could pose a challenge for certain creditors
because the sophisticated algorithms typically used by
creditors—sometimes referred to as “black box”
algorithms—may not reveal the substantive reason that was the
basis for the denial. Black-box algorithms, like other types of
artificial intelligence, apply statistical transformations to
convert input data into an actionable output (in this case, a
credit denial). However, there is no visibility into the determinative factors that
led the algorithm to transform the input data into the output.
Thus, with a black-box credit algorithm, the credit provider has visibility into the input factors that fed the decision (credit score, income, and so on), but not necessarily into which of those factors was determinative.
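
To make the point concrete, below is a minimal Python sketch, using entirely hypothetical data, feature names, and model choices (nothing here reflects any actual creditor's system), of a black-box credit model: it returns an approve-or-deny outcome, but the output itself does not identify the determinative factor.

```python
# A minimal sketch with hypothetical data and feature names: a "black box"
# credit model returns an approve/deny outcome, but nothing in that output
# identifies which factor drove the decision.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical applicant features: [credit_score, income, debt_to_income]
X_train = np.array([
    [720.0, 65000.0, 0.25],
    [640.0, 50000.0, 0.45],
    [700.0, 80000.0, 0.30],
    [610.0, 42000.0, 0.50],
])
y_train = np.array([1, 0, 1, 0])  # 1 = approved, 0 = denied

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

applicant = np.array([[655.0, 50000.0, 0.40]])
print(model.predict(applicant))  # e.g. [0] -> denial, with no stated reason
```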

One potential consideration for creditors who rely on black-box
lending algorithms is to incorporate certain
algorithm-interpretation techniques into their business practices.
Indeed, CFPB Director Rohit Chopra issued a statement in June 2023 in which he commented
that “automated models can make bias harder to
eradicate…because the algorithms used cloak the biased inputs and
design in a false mantle of objectivity. … [I]nstitutions…have
to take steps to boost confidence in valuation estimates and
protect against data manipulation.” Examples of such steps
include using Local Interpretable Model-agnostic Explanations (LIME) or SHapley Additive exPlanations (SHAP) values to convey the rationale behind credit denial decisions. LIME and SHAP are interpretability-enhancing methods that use statistical techniques to increase the transparency of black-box AI models. In effect, these methods can isolate and rank the most determinative factors in a credit decision, regardless of the underlying AI model, though each has limitations. Both work by perturbing a model's inputs and observing how its output changes. For example, say Applicant 1 and Applicant 2 submit identical applications, except that Applicant 1 has an income of $50,000 per year and Applicant 2 has an income of $65,000 per year. Applicant 2 is accepted and Applicant 1 is rejected. LIME fits a simple, interpretable surrogate model around Applicant 1's individual decision and assigns income a weight reflecting how strongly it pushed the underlying black-box credit model toward denial. Similarly, SHAP assigns each input factor a contribution value, allowing income to be ranked among the potential factors by its determinative impact on the output.
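
The following is a minimal Python sketch of how these methods could be applied, using the open-source lime and shap packages and entirely hypothetical data, feature names, and model; a creditor's production pipeline would look different, but the outputs illustrate the kind of decision-specific ranking these tools produce.

```python
# A minimal sketch, not production code: hypothetical features, data, and a
# stand-in model for a creditor's proprietary black-box algorithm.
# Assumes the open-source scikit-learn, lime, and shap packages are installed.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from lime.lime_tabular import LimeTabularExplainer
import shap

feature_names = ["credit_score", "income", "debt_to_income"]
X_train = np.array([
    [720.0, 65000.0, 0.25],
    [640.0, 50000.0, 0.45],
    [700.0, 80000.0, 0.30],
    [610.0, 42000.0, 0.50],
    [680.0, 58000.0, 0.35],
    [600.0, 40000.0, 0.55],
])
y_train = np.array([1, 0, 1, 0, 1, 0])  # 1 = approved, 0 = denied

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

applicant = np.array([655.0, 50000.0, 0.40])  # the denied applicant

# LIME: fit a simple surrogate model around this one prediction and report
# a signed weight for each feature's contribution to the local decision.
lime_explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["denied", "approved"],
    mode="classification",
)
lime_exp = lime_explainer.explain_instance(
    applicant, model.predict_proba, num_features=len(feature_names)
)
print(lime_exp.as_list())  # e.g. [("income <= 54000.00", -0.31), ...]

# SHAP: attribute the prediction to each feature as an additive contribution,
# then rank the features by the size of their impact on this output.
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(applicant.reshape(1, -1))
ranked = sorted(
    zip(feature_names, shap_values[0]), key=lambda fv: abs(fv[1]), reverse=True
)
print(ranked)  # features ordered by determinative impact on this decision
```

In both cases the result is a per-applicant ranking of contributing factors, which could help a creditor map a specific denial to reasons that "specifically and accurately" reflect the basis for the adverse action.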

In sum, by implementing interpretability-enhancing methods such as LIME and SHAP, creditors may be able to better identify, and disclose to consumers, the rationale behind decisions made by black-box algorithms, and thereby reduce their risk of an ECOA violation.

The content of this article is intended to provide a general
guide to the subject matter. Specialist advice should be sought
about your specific circumstances.
