Algorithmic Transparency Requirements for Lending Platforms Using Automated Decision Systems
Now is the time to ensure that lending models offered by private companies are fair and transparent. Access to affordable credit strongly shapes quality of life and can determine housing choice. Over the past decade, algorithmic decision-making has increasingly affected the lives of American consumers. All forms of algorithmic underwriting should therefore be open to review for fairness and transparency, since inequities may appear either in access to funding or in credit terms. A recent report released by the U.S. Treasury Department speaks to the need for more oversight of the FinTech market.
Challenge and Opportunity
The financial services sector, a historically non-technical industry, has recently and widely adopted automated platforms. Financial technology (“FinTech”) refers to financial products and services offered directly to consumers by private companies, either independently or in partnership with banks and credit unions. These platforms use algorithms that are non-transparent yet directly affect Americans’ ability to obtain affordable financing. Financial institutions (FIs) and mortgage brokers use predictive analytics and artificial intelligence to evaluate candidates for mortgage products, small business loans, and unsecured consumer products, and some lenders use artificial intelligence to underwrite auto loans, personal unsecured loans, credit cards, and lines of credit. Although loans that are not government-securitized receive less scrutiny, access to credit for personal purposes affects the debt-to-income ratios and credit scores necessary to qualify for homeownership, as well as the global cash flow of a small business owner. Historical Home Mortgage Disclosure Act (HMDA) data and studies on small business lending demonstrate that disparate access to mortgages and small business loans occurs. This scenario will not be improved by unaudited automated decision variables, which can create feedback loops that scale inequities.
Forms of discrimination appear in credit approval software and can hinder access to housing. Lorena Rodriguez writes extensively about the current effect of technology on lending laws regulated by the Fair Housing Act of 1968, pointing out that algorithms have incorporated alternative credit scoring models into their decision trees. These newly selected variables, which include factors like social media activity, retail spending habits, bank account balances, and college of attendance, have no place in determining someone’s creditworthiness.
Traditional credit scoring models, although cumbersome, are intelligible to the typical consumer who takes the time to learn how their score is calculated. Lending platforms, by contrast, can incorporate any data variable with no requirement to disclose the models that drive decisioning. In other words, a consumer may never understand why their loan was approved or denied, because the models are not disclosed. At the same time, it may be unclear which consumers are being solicited for financing opportunities, and lenders may target financially vulnerable consumers with profitable but predatory loans.
Transparency around lending decision models is more necessary now than ever. The COVID-19 pandemic created financial hardship for millions of Americans, and the Federal Reserve Bank of New York recently reported all-time highs in American household debt. In a rising interest rate environment, affordable and fair credit access will become even more critical to helping households stabilize. Although artificial intelligence has been in use for decades, the general public is only recently beginning to realize the ethical impacts of its uses on daily life. Researchers have noted that algorithmic decision-making can have bias baked in, which has the potential to exacerbate racial wealth gaps and resegregate communities by race and class. While various agencies, such as the Consumer Financial Protection Bureau (CFPB), Federal Trade Commission (FTC), Financial Crimes Enforcement Network, Securities and Exchange Commission, and state regulators, have some level of authority over FinTech companies, there are oversight gaps. Although FinTechs are subject to fair lending laws, not enough is known about disparate impact or treatment, and regulation of digital financial service providers is still evolving. Modernizing policy and regulation is necessary to keep up with the current digital environment, and new legislation can address gaps in the market that existing policies do not cover.
Plan of Action
Three principles should guide policy implementation around FinTech: (1) research, (2) enforcement, and (3) incentives. These principles balance oversight and transparency while encouraging responsible innovation by community development financial institutions (CDFIs) and charitable lenders, which may lead to greater access to affordable credit. Interagency cooperation and the development of a new oversight body are critical because FinTech sits at the intersection of technology, trade, and financial services.
Recommendation 1: Research. The FTC should commission a comprehensive, independent research study to understand the scope and impact of disparate treatment in FinTech lending.
To ensure equity, the study should be jointly conducted by a minimum of six research universities, of which at least two must be Historically Black Colleges and Universities. A $3.5 million appropriation will ensure a well-designed, multiyear study. A strong understanding of the FinTech landscape and its potential for disparate impact is necessary. Many consumers are not adequately equipped to articulate their challenges, except through complaints to agencies such as the Office of the Comptroller of the Currency (OCC) and the CFPB; even in these cases, the burden is on the individual to be aware of channels of appeal. Anecdotal evidence suggests BIPOC borrowers and low-to-moderate income (LMI) consumers may be the target of predatory loans. For example, an LMI zip code may be targeted with FinTech advertisements for products carrying higher interest rates. Feedback loops in algorithms will continue to identify marginalized communities as higher risk, and a consumer with lesser means who also pays triple the comparable interest rate will remain financially vulnerable under these extractive conditions.
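To make the feedback-loop concern concrete, consider the following minimal sketch. It is a hypothetical illustration, not any lender’s actual model: every number and variable name is invented. It shows how a scoring rule that penalizes past denials can turn a small initial gap between two applicant groups into a persistent, widening one without any new information about the applicants.

```python
# Hypothetical illustration of an algorithmic feedback loop in lending.
# All figures are invented for demonstration; no real model is depicted.

APPROVAL_CUTOFF = 600
DENIAL_PENALTY = 15   # points subtracted after each denial (the feedback)
ROUNDS = 5

# Two applicant groups that start with a modest score gap.
scores = {"group_a": 610, "group_b": 595}

for round_num in range(1, ROUNDS + 1):
    for group, score in scores.items():
        approved = score >= APPROVAL_CUTOFF
        if not approved:
            # A denial lowers the score used in the next round, making
            # future denials more likely for the same group.
            scores[group] = score - DENIAL_PENALTY
        print(f"round {round_num}: {group} score={score} "
              f"{'approved' if approved else 'denied'}")

# The output shows group_b falling further below the cutoff each round
# while group_a is approved every time: a 15-point initial disparity
# compounds into a 90-point gap purely through the model's own feedback.
```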
Recommendation 2: Enforcement. A suite of enforcement mechanisms should be implemented.
- FinTechs engaged in mortgage lending should be subject to Home Mortgage Disclosure Act (HMDA) reporting on lending activity and Community Reinvestment Act (CRA) examination. When a bank utilizes a FinTech, a vendor CRA assessment should be incorporated into the bank’s own examination process. Credit unions should also be required to produce FinTech vendor CRA exams during their examination process. CRA and HMDA requirements would encourage FinTechs to ensure they are lending broadly.
- Congress should codify FinTechs’ role as the “true lender” whenever a FinTech’s underwriting model is used by an FI partner, clarifying FinTechs’ responsibility to comply with all applicable state, local, and federal interest rate caps, fair lending laws, and related requirements, as well as their liability when they do not meet existing standards. Federal regulatory agency guidelines must also be updated to clarify the shared responsibility of a bank’s or credit union’s FinTech partner when a FinTech underwriting model violates UDAAP or fair lending guidelines.
- A previously proposed OCC FinTech charter should be adopted but made optional. When a FinTech chooses to adopt the OCC charter, the charter should grant it the interstate privileges covered under the Riegle-Neal Interstate Banking and Branching Efficiency Act of 1994. This provision should also require FinTechs to fulfill state licensing requirements in each state in which they operate, eliminating their current ability to bypass licensing by partnering with regulated FIs.
- Companies engaged in any financing activity or providing automated lending software to regulated FIs must be required to disclose decision models to the FI’s examiner upon request. FinTech data disclosure must not be limited to federally secured loans such as small business or mortgage loans but should also include secured and unsecured loan products made to consumers, such as auto, personal, and small-dollar loans. When consumers obtain a predatory product in these categories, the loan can severely impact their debt-to-income (back-end) ratios and credit scores, preventing them from obtaining homeownership or causing them to receive less favorable terms, as the arithmetic sketch below illustrates.
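The back-end ratio arithmetic referenced in the last bullet can be shown in a few lines. All figures below are hypothetical, and the 43% threshold is used only as an illustrative qualifying cutoff, not a statement of any specific program’s requirement:

```python
# Hypothetical back-end debt-to-income (DTI) calculation.
# Figures are invented; 43% is an illustrative qualifying threshold.

def back_end_dti(monthly_debt_payments: float, gross_monthly_income: float) -> float:
    """Back-end DTI: total monthly debt obligations / gross monthly income."""
    return monthly_debt_payments / gross_monthly_income

income = 5_000.00           # gross monthly income
existing_debts = 1_900.00   # car note, student loan, credit cards
predatory_payment = 450.00  # new high-interest installment loan payment

before = back_end_dti(existing_debts, income)
after = back_end_dti(existing_debts + predatory_payment, income)

print(f"DTI before: {before:.0%}")  # 38% -- under a 43% threshold
print(f"DTI after:  {after:.0%}")   # 47% -- now over the threshold
```

One high-cost loan payment pushes this hypothetical borrower from qualifying to non-qualifying, which is why disclosure cannot stop at mortgage products.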
Recommendation 3: Incentives. Develop an ethical FinTech certification that designates a FinTech as a responsible lender, modeled on the U.S. Treasury’s CDFI certification.
The certification can sit with the U.S. Treasury and should create incentives, such as grant funding, procurement opportunities, or tax credits, for FinTechs demonstrated to be responsible lenders. To create this certification, FI regulatory agencies, with input from the FTC and the National Telecommunications and Information Administration, should jointly develop an interagency menu of guidelines that dictates acceptable parameters for what criteria may be input into an automated decision model for consumer lending. Guidelines should also dictate what may not be used in a lending model (for example, college of attendance). Exceptions to the guidelines must be documented, reviewed, and approved by the oversight body after being determined to reflect a legitimate business necessity.
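As a minimal sketch of how such guidelines might be operationalized in software, the example below assumes a hypothetical prohibited-input list and exception register. The specific variable names and categories are illustrative, not drawn from any existing agency guidance:

```python
# Hypothetical enforcement of interagency input guidelines for an
# automated underwriting model. Names and categories are illustrative.

PROHIBITED_INPUTS = {
    "college_of_attendance",
    "social_media_activity",
    "retail_spending_habits",
}

# Exceptions would be granted only after documented review and approval
# by the oversight body; modeled here as an approved-exception register.
APPROVED_EXCEPTIONS: set[str] = set()

def validate_model_inputs(feature_names: list[str]) -> list[str]:
    """Return prohibited features the model uses without an approved exception."""
    return [
        f for f in feature_names
        if f in PROHIBITED_INPUTS and f not in APPROVED_EXCEPTIONS
    ]

model_features = ["income", "payment_history", "college_of_attendance"]
violations = validate_model_inputs(model_features)
if violations:
    raise ValueError(f"Model uses prohibited inputs: {violations}")
```

A check of this kind could run during examination, flagging any model whose inputs fall outside the interagency menu unless a documented exception has been approved.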
Conclusion
Now is the time to provide policy guidance that will prevent disparate impact and harm to minority, BIPOC, and other traditionally marginalized communities resulting from algorithmically informed, biased lending practices.
Frequently Asked Questions

Doesn’t the CFPB already have the authority to regulate FinTech lending?
Yes, but the CFPB’s general authority to do so is regularly challenged as a result of its independent structure. It is not clear whether its authority extends to all forms of algorithmic harm, as its stated authority to regulate FinTech consumer lending is limited to mortgage and payday lending. UDAAP oversight is also less clear as it pertains to nonregulated lenders. Additionally, the CFPB has supervisory authority over institutions with more than $10 billion in assets, and many FinTechs operate below this threshold, leaving oversight gaps. Fair lending guidance for financial technology must therefore be codified apart from the CFPB, although some oversight may continue to rest with the CFPB.
How would the lending data needed for oversight be collected?
Precedent is currently being set for the regulation of small business lending data through the CFPB’s enforcement of Section 1071 of the Dodd-Frank Act, which will require disclosure of small business lending data. Other government programs, such as the CDFI Fund, already require transaction-level reporting for lending data attached to federal funding. Over time, private vendors are likely to develop tools to support lending-data reporting requirements. Data collection can also be incentivized through mechanisms like certifications or tax credits for responsible lenders willing to submit data.
Which agency should oversee FinTechs?
The OCC has proposed a charter for FinTechs that would subject them to regulatory oversight (see Recommendation 2), and other FI regulators have adopted various versions of FinTech oversight. Oversight of partnerships between FinTechs and insured depositories should remain with the depository’s primary regulator, supported by overarching interagency guidance.
Ultimately, a new regulatory body with enforcement authority and congressional appropriations would be ideal, since FinTech is a unique form of lending that touches issues spanning consumer lending, the regulation of private business, and data privacy and security.
Doesn’t regulation risk cutting off access to credit for underserved borrowers?
This argument is often used by payday lenders that offer products with egregious, predatory interest rates. Not all forms of access to credit are responsible forms of credit. Unless a FinTech operates as a charitable lender, its goal is profit maximization, which does not align well with consumer protection. In fact, research indicates that FinTech’s financial inclusion promises fall short.
Why single out FinTechs when other private lenders operate with little oversight?
Many private lenders are in fact regulated: payday lenders are regulated by the CFPB once they reach a certain threshold, and pawn shops and mortgage brokers are subject to state departments of financial regulation. FinTechs also have the potential to cause harm of a different degree, because their automation and algorithmic evaluation techniques allow for scalability and can create reinforcing feedback loops of disparate impact.