At CBA Live, hosted by the Consumer Bankers Association, Dodd Frank Update sat in on a discussion with experts about the next frontier in fair lending: artificial intelligence (AI). The panel of experts discussed the ways AI is being used in fair lending and what regulators are doing in response to this new tool.
The panel featured Senior Vice President and Associate General Counsel for Public Affairs at Truist, Ebony Sunala Johnson; Vice President, Policy and Government Affairs for Zest AI, Yolanda McGill; Managing Partner at Mitchell Sandler LLC, Andrea Mitchell; and Co-leader, Consumer Financial Services Regulatory Practice at Troutman Pepper, Chris Willis.
The utilization of AI involves two major components: the model and the data, McGill explained. The model is the algorithm used to establish the creditworthiness of potential borrowers. The data is all of the information fed into that model.
“The use of algorithms really isn't new,” McGill said. “Bias in algorithms also isn’t new. There are a few things that are really important here. First is that if you’re going to have a model built or are building a model, make sure you are taking it through fair lending review.”
What McGill means by “fair lending review” is taking the model through a disparate treatment assessment: understanding the variables going into the model and ensuring there are no protected class variables, or recognizable proxies for those variables, among the data inputs. This review, according to McGill, should also include a disparate impact analysis to assess the degree to which the model treats applicants differently based on membership in a protected class.
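As a rough illustration of the disparate impact analysis McGill describes, the sketch below compares approval rates across borrower groups and flags gaps relative to a reference group. The group labels, the synthetic decisions, and the use of a simple approval-rate ratio are assumptions made for illustration, not a methodology the panel endorsed.

```python
# Hypothetical sketch of a disparate impact check on model outcomes.
# Group labels and the synthetic decision data are illustrative assumptions.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group_label, approved_bool) pairs."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def adverse_impact_ratios(decisions, reference_group):
    """Compare each group's approval rate to the reference group's rate.
    Ratios well below 1.0 flag a disparity worth investigating."""
    rates = approval_rates(decisions)
    ref = rates[reference_group]
    return {g: rate / ref for g, rate in rates.items()}

# Synthetic decisions, not real lending data.
decisions = (
    [("group_a", True)] * 80 + [("group_a", False)] * 20
    + [("group_b", True)] * 55 + [("group_b", False)] * 45
)
print(adverse_impact_ratios(decisions, reference_group="group_a"))
# {'group_a': 1.0, 'group_b': 0.6875} -- a gap that would prompt further review
```

In practice, this kind of outcome comparison would sit alongside the disparate treatment review of inputs that McGill describes, not replace it.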
Regulators are approaching the usage of these models with extreme skepticism, Willis explained. There is a presumption by regulators that because it’s AI and machine learning (ML), “there must be something wrong with it.” Willis suggested a cinematic understanding of AI/ML has influenced the public dialogue in a way that casts anything involving AI/ML as inherently bad.
“So, what we see with regulators when they’re doing exams is a lot of insistence on the model development process,” Willis said.
This investigation into the model development process will ask questions like:
- Did you look at the variables and remove the problem variables?
- Did you thoroughly document the business justification of the model?
- Did you “fair lending test” the model?
- What did you do to look for less discriminatory alternatives? (A sketch of one such comparison follows below.)
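For the last two questions, here is a hedged sketch of how a lender might weigh a candidate model against a less discriminatory alternative: keep models whose predictive performance (here, AUC) is within a small tolerance of the best candidate, then prefer the one with the smallest approval-rate disparity. The metric choices, the tolerance, and the candidate figures are hypothetical assumptions for illustration; actual fair lending testing methodologies vary by institution.

```python
# Hypothetical sketch of a search for a less discriminatory alternative (LDA).
# Metrics, tolerance, and candidate models are illustrative assumptions.

def disparity(approvals_by_group, reference_group):
    """Ratio of the lowest group approval rate to the reference group's rate."""
    ref = approvals_by_group[reference_group]
    return min(rate / ref for rate in approvals_by_group.values())

def pick_less_discriminatory_alternative(candidates, reference_group,
                                         max_performance_drop=0.01):
    """candidates: dict of name -> (auc, approvals_by_group).
    Keep models within max_performance_drop of the best AUC,
    then prefer the one with the smallest outcome disparity."""
    best_auc = max(auc for auc, _ in candidates.values())
    eligible = {name: (auc, groups) for name, (auc, groups) in candidates.items()
                if best_auc - auc <= max_performance_drop}
    return max(eligible,
               key=lambda name: disparity(eligible[name][1], reference_group))

# Synthetic example: two candidate models with similar predictive power.
candidates = {
    "baseline":    (0.742, {"group_a": 0.80, "group_b": 0.55}),
    "alternative": (0.738, {"group_a": 0.78, "group_b": 0.66}),
}
print(pick_less_discriminatory_alternative(candidates, reference_group="group_a"))
# 'alternative' -- nearly the same AUC, but a smaller approval-rate gap
```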
“I think what we’re seeing right now is that regulators are looking to financial institutions for proof that they considered fair lending throughout the model development process,” Willis said. “And that’s really what we see [regulators] looking at so far. We have some regulators taking the position that a particular training data set was biased, and therefore the model built from it was biased.”
Regulatory reaction to this new technology is still in its infancy, Mitchell explained. Regulators are taking a slow approach to addressing AI/ML in the financial services sector.
“I think it’s really illuminating that the first, and currently only, guidance we’ve gotten from the [Consumer Financial Protection Bureau] (CFPB) on this was the circular that came out saying that [lenders] have to issue adverse action notices that are correct,” Mitchell noted. “That’s really where they’re starting… from the premise that no matter how complex and sophisticated your models are, you still need to understand why you’re declining people and give the specific and accurate reasons why you decline.”
Lenders should expect AI/ML to become more deeply integrated into regulations, Mitchell added. Regulators are considering AI/ML from myriad perspectives, so lenders should be mindful of issues relating to AI beyond the basic usage of the tool.
The CFPB engaged in a hiring push last summer, Johnson noted, a likely indication the bureau is building capacity to better understand AI/ML and to better support supervisory exams and enforcement actions.
Mitchell posed the questions, “How are you managing your vendors who are probably building some of these models and using algorithms? How are you managing and constructing your model governance?”
“There are lots of aspects of this practice that are very similar to what’s happened before,” McGill added. “And so, to the extent that there is a regime or way of operating that the institutions can bring to this, there’s no point in using an AI model as an excuse to throw your hands up and say, ‘Oh, this is brand new stuff.’ You still should embed your model governance in a compliance team that has been trained on fair lending.”
It’s important to remind regulators “early and often” that algorithms built with AI may not result in consistent outcomes among different borrower groups, Mitchell said. Even with the cleanest, most unbiased model, differential outcomes may occur. Creditors should be prepared to explain how their data and algorithms are appropriate and compliant.
“Just because an algorithm or some model returns outcomes that are different doesn’t mean there is an [Equal Credit Opportunity Act] violation,” Mitchell said. “It doesn’t mean it shouldn’t be used, and it doesn’t mean that we need to go destroy the predictability of our attributes in the model.”