
Independent Monitor Ends Oversight of Upstart Network Amid Impasse and Offers Guidance for Fair Lending Oversight in Financial Services; Parties Warn of Risk of AI-Driven Bias in Consumer Lending 

March 27, 2024 | WASHINGTON, D.C. — Today, the civil rights law firm Relman Colfax announced that it had concluded its independent oversight of Upstart (NASDAQ: UPST), releasing its fourth and final public report outlining the findings and recommendations from its years-long fair lending monitorship of the financial technology firm. While the Monitor did not find evidence that variables in Upstart’s AI model operated as close proxies for protected classes and did not find pricing disparities, it did identify approval disparities for Black applicants.

Upstart adopted nearly all of the Monitor’s recommendations and made enhancements to its fair lending program. However, Upstart disagreed with “a key recommendation,” a less discriminatory alternative (LDA) model proposed by the Monitor, arguing that it would unacceptably compromise the accuracy of its models. Upstart instead offered its own LDA, claiming that its approach could substantially reduce approval disparities, but the Monitor did not validate these claims. This led to an impasse between the parties.

A copy of the fourth and final report from Relman Colfax is available here:
https://www.relmanlaw.com/media/cases/1511_Upstart%20Final%20Report.pdf

As more companies experiment with and deploy alternative data and machine learning algorithms in underwriting, the work accomplished under this monitorship is intended to serve as a guidepost for aligning industry underwriting and fair lending testing practices with the goals of equity and financial inclusion. To that end, the final report includes recommendations from the parties and the Monitor to lenders, regulators, and policymakers on model testing, regulatory guidance for fair lending, and transparency in credit markets.

In response to the conclusion of Relman Colfax’s independent monitorship of Upstart, Ashok Chandran, Assistant Counsel at the NAACP Legal Defense and Educational Fund (LDF), said:

“The use of artificial intelligence/machine learning models in the lending marketplace can result in troubling racial disparities. As companies look to provide expanded credit options for customers, it is more critical than ever that they proactively recognize and address the technological biases that only further exacerbate existing racial disparities. At the same time, the results of the independent oversight report of Upstart and the conclusion of this testing process make clear the need for additional regulatory guidance.”

Kat Welbeck, Director of Advocacy and Civil Rights Counsel at the Student Borrower Protection Center (SBPC), said:

“While the Independent Monitorship provided an unprecedented view of the inner workings of one company’s algorithm, it should not be evaluated in isolation—every company that employs machine learning when underwriting consumer credit can learn a lesson from this monitorship.

“The expansion of consumer credit options should not come at the expense of fair and equitable access to credit. The risks of further entrenching bias and systemic disparities cannot be overlooked in this pursuit. As more companies use alternative data and machine learning algorithms, regulators must use all tools available to prevent the risks that algorithmic bias poses to consumers.”

Nat Hoopes, Head of Government and Regulatory Affairs at Upstart, said:

“We share the advocates’ concern about the risk of bias in AI lending and are pleased that the Monitor confirmed our model does not lead to pricing disparities or include variables that implicate race, national origin, sex, or age. We believe that the measures incorporated into our fair lending program to remove bias achieve the same fairness objectives as those recommended by the Monitor. We’re grateful for the efforts and perspectives of LDF, SBPC, and Relman, and will continue our work to be a leader in fair lending.”

BACKGROUND

In 2020, SBPC published the results of an investigation presenting evidence of “educational redlining,” in which lenders price credit or make credit decisions based on information about applicants’ degree attainment, program of study, or other information tied to their education. Certain education data are closely correlated with race and gender and, when used in underwriting or marketing, can lead lenders to violate federal fair lending laws. In the aftermath of this report, the Senate Banking Committee launched its own investigation, scrutinizing the practices of a dozen companies across the financial services marketplace.

Since 2020, the SBPC, LDF, Upstart, and Relman Colfax have worked collaboratively to establish a new, higher standard for fair lending transparency, testing, and less discriminatory alternative (LDA) exploration. The work centered on the fairness of approval rates and pricing outputs of algorithmic risk models, and on the testing of individual variables and alternative model options. The process and associated reports broke new ground and established modernized expectations for compliance with the Equal Credit Opportunity Act (ECOA) when artificial intelligence is used in consumer lending.
