Bank regulators must eliminate biases in AI-based lending

The fundamental question financial regulators face when considering the use of artificial intelligence and machine learning in banking is: Do the rewards outweigh the risks? This is doubly true when these tools are used to underwrite consumer loans.

Last month, the chairs of the House Financial Services Committee and its Task Force on Artificial Intelligence sent a letter to the heads of five federal regulatory agencies, urging them to keep pace with advances in AI technology and to ensure they are monitoring the industry for biased algorithms that could harm consumers and businesses.

It’s clear that the shift to AI and machine learning is well underway in financial services. A 2020 survey of lenders found that 88% of them plan to increase their investment in AI in the coming years, specifically to assess credit risk. In a recent survey, credit union executives ranked AI lending technology as a top investment priority for 2022.

Research is also underway. FinRegLab, for example, just published an in-depth report on the data science behind these emerging tools and technologies. Because AI and machine learning models can process more data, they tend to assess risk better than humans alone, and better than the older technologies they replace. This generally means less risk for financial institutions and better access to credit for consumers.

But, without adequate guardrails, AI and machine learning can cause problems. The two big issues are, first, whether AI models introduce safety and soundness risk into the financial system, and second, whether they exacerbate bias in lending.

Both problems can be solved if regulators put the right regulatory approaches in place, enabling AI and machine learning not only to overcome these risks but also to make financial services fairer and more secure. In our view, those approaches include updated regulatory guidance as well as regulators’ own adoption of AI-powered tools for monitoring and enforcement. Lenders who choose to adopt AI models are also finding ways to put these safeguards in place for themselves.

As financial institutions increasingly use AI and machine learning in their operations, they must be prepared to use these technologies to strengthen their compliance and legal functions as well. For example, if a lender deploys an AI-based underwriting model, its legal, compliance, and risk teams should use AI tools, combined with human oversight, to assess the model’s fairness, stability, and accuracy.
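To make this concrete, here is a minimal sketch, in Python, of the kind of checks such a review tool might automate. The data, column names, and thresholds are hypothetical, and a real fair lending review involves far more than summary statistics (including, in the U.S., proxy methods such as BISG for inferring protected-class status where it is not collected). But it illustrates the three dimensions named above: fairness (adverse impact ratio), stability (population stability index), and accuracy (holdout performance).

```python
import numpy as np
import pandas as pd

def approval_rates(df, group_col, approved_col):
    """Approval rate for each demographic group."""
    return df.groupby(group_col)[approved_col].mean()

def adverse_impact_ratio(df, group_col, approved_col, control_group):
    """Ratio of each group's approval rate to the control group's.
    Ratios below ~0.8 (the 'four-fifths rule') are a common red flag."""
    rates = approval_rates(df, group_col, approved_col)
    return rates / rates[control_group]

def population_stability_index(expected_scores, actual_scores, bins=10):
    """PSI between development-time and recent score distributions,
    a rough gauge of model stability (rule of thumb: >0.25 is unstable)."""
    edges = np.histogram_bin_edges(expected_scores, bins=bins)
    e_pct = np.histogram(expected_scores, bins=edges)[0] / len(expected_scores)
    a_pct = np.histogram(actual_scores, bins=edges)[0] / len(actual_scores)
    # Avoid log(0) when a bin is empty in one distribution.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

def holdout_accuracy(y_true, y_pred):
    """Share of holdout decisions the model got right."""
    return float(np.mean(np.asarray(y_true) == np.asarray(y_pred)))

# Hypothetical validation set: model decisions plus an inferred group label.
df = pd.DataFrame({
    "group":    ["control", "control", "control",
                 "protected", "protected", "protected"],
    "approved": [1, 1, 0, 0, 1, 0],
})
print(adverse_impact_ratio(df, "group", "approved", control_group="control"))
# protected -> 0.5, well below the 0.8 benchmark: a prompt for human review.
```

The point of automating these checks is not to replace judgment but to flag, at scale, the cases that compliance staff should examine by hand.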

Regulators are working hard on these challenges, having issued at least two requests for information to industry in the past year. They could dramatically increase the effectiveness of their own fair lending reviews by using artificial intelligence and machine learning models that can better detect patterns of unintentional bias in lending, and by guiding the industry toward adoption of a new generation of self-assessment tools. Regtech solutions are coming online to help financial institutions and regulators assess and mitigate the risks of this new technology while strengthening fair lending compliance.

What’s exciting about this moment is that there are win-win strategies that can make lending more inclusive and more accurate at the same time.

As Rep. Maxine Waters, D-Calif., chair of the House Financial Services Committee, and Rep. Bill Foster, D-Ill., chair of its Task Force on Artificial Intelligence, stated in their letter: “Financial institutions using AI have the potential to play a role in offerings to communities that have been overlooked in the past,” particularly affordable credit for low- and moderate-income communities of color. The result will be a safer and healthier banking system with greater access to credit for all.