Regulatory Guidance · April 2026
AI governance without the jargon.
There is no single AI banking law. What exists is a set of extensions: five existing regulatory frameworks that have been stretched to cover artificial intelligence. The US Government Accountability Office confirmed this in its May 2025 report (GAO-25-107197): no comprehensive AI-specific banking regulation exists yet. What exists is a patchwork, and that patchwork is what governs your institution today. Here is what each piece actually means for the people doing the work.
The five frameworks, in plain language.
Most regulatory guidance is written for lawyers and examiners, not for the compliance officer who needs to know whether their institution can use Microsoft Copilot to help draft a loan memo, or the loan officer who wants to know if running a borrower summary through an AI tool creates ECOA exposure. The translations below are practical — they are not legal advice, and they do not substitute for your institution’s counsel. But they are a starting point for having the right conversations.
SR 11-7 — Model Risk Management.
SR 11-7 was published by the Federal Reserve and OCC in 2011 as guidance on managing the risk of quantitative models used in credit underwriting, risk scoring, and investment analysis. The guidance requires that models be validated, documented, and monitored through their full lifecycle.
When AI arrived, regulators applied SR 11-7 by extension: any AI system that produces outputs used in credit, risk, or compliance decisions is a “model” under this framework, and the full set of SR 11-7 requirements applies to it.
What this means for your daily work: If an AI tool touches a credit decision — at any point in the process, even just to summarize a loan file — you must be able to explain what the tool does, what its limitations are, and how its outputs were validated. “We used ChatGPT to help review the application” is not documentation. A record of what was reviewed, what the AI produced, and how it was checked before being used in the decision — that is documentation.
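A record like that can be sketched as a small data structure. This is a minimal illustration, not a regulatory template: the field names and the completeness check are assumptions, not language from SR 11-7.

```python
from dataclasses import dataclass

# Hypothetical minimal record of one AI-assisted step in a credit file.
# All field names are illustrative assumptions, not drawn from SR 11-7.
@dataclass
class AIReviewRecord:
    loan_file_id: str        # which file the AI touched
    tool_name: str           # which tool produced the output
    task: str                # what the tool was asked to do
    ai_output_summary: str   # what the tool produced
    human_reviewer: str      # who checked the output
    verification_note: str   # how the output was checked before use

    def is_documented(self) -> bool:
        """Every field must be filled in. An empty verification note is
        'we used the AI to help', which is not documentation."""
        return all(v.strip() for v in vars(self).values())
```

The point of the sketch is the last field: a record without a verification note fails the check, which mirrors the distinction the guidance draws.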
SR 11-7 also requires conceptual soundness and transparency. Black-box AI outputs that cannot be traced to specific input factors fail this test for credit use cases. The model must be explainable — a word the AIEOG Lexicon defines precisely as “the capacity of an AI system to provide human-understandable reasons for its outputs.”
TPRM — Third-Party Risk Management.
Interagency TPRM guidance requires that institutions assess and manage risk from outsourced services and third-party vendors. The framework was extended to AI when it became apparent that most community banks would deploy AI primarily through vendor tools — core banking AI features, Microsoft Copilot, document processing platforms, loan origination software with AI scoring — rather than building models in-house.
What this means for your daily work: Every AI tool from a vendor requires a TPRM assessment before deployment. This includes AI features that activate automatically in tools already licensed by your institution. It includes the “free tier” tools staff are using without IT’s knowledge. It includes the AI-assisted document feature your core banking vendor quietly enabled in last quarter’s update.
The practical test is simple: if an AI tool processes any institutional data — even non-customer operational data — IT and compliance need to have reviewed it before widespread staff use begins. The review does not need to take months. It needs to happen.
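That practical test can be expressed as a short triage over a tool list. The tool names and fields below are made-up examples for illustration, not a real inventory.

```python
# Hypothetical vendor-AI triage implementing the practical test above:
# any tool that processes institutional data and has not been reviewed
# by IT and compliance goes on the pending list. All entries are examples.
def pending_review(inventory):
    return [t["name"] for t in inventory
            if t["touches_institutional_data"] and not t["reviewed"]]

tools = [
    {"name": "core-banking AI feature", "touches_institutional_data": True,  "reviewed": False},
    {"name": "free-tier chatbot",       "touches_institutional_data": True,  "reviewed": False},
    {"name": "licensed doc processor",  "touches_institutional_data": True,  "reviewed": True},
]
```

Note that the quietly enabled core-banking feature and the unsanctioned free-tier tool both land on the pending list: the test keys on what the tool touches, not on how it arrived.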
ECOA / Reg B — Equal Credit Opportunity.
ECOA and its implementing regulation, Reg B, prohibit discrimination in credit decisions and require that applicants who are denied credit receive specific, written reasons for the denial. This was designed for human underwriters. Applied to AI, it creates a strict explainability requirement.
What this means for your daily work: If an AI system influences a credit decision — whether it scores an applicant, flags a risk, or suggests terms — the adverse action notice must provide reasons that trace to specific, explainable factors from the application. “AI score too low” is not a legally adequate reason. “Insufficient collateral relative to loan amount” or “income-to-debt ratio exceeds policy threshold” — those are legally adequate reasons, and they must be traceable to actual input variables, not just AI outputs.
The CFPB has been explicit about this. Algorithmic lending decisions are subject to the same adverse action requirements as manual ones. The institution, not the AI vendor, is responsible for satisfying them.
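One way to operationalize the traceability test is a pre-issuance check on each denial reason. The allowed factor names and the list of “opaque” phrases below are assumptions for illustration, not Reg B language.

```python
from typing import Optional

# Hypothetical pre-issuance check on adverse action reasons. The allowed
# factor names and the opaque-term list are illustrative assumptions.
ALLOWED_FACTORS = {"collateral_value", "income_to_debt_ratio", "credit_history"}
OPAQUE_TERMS = ("ai score", "model score", "algorithm")

def reason_is_adequate(reason: str, traced_factor: Optional[str]) -> bool:
    """Adequate = traces to a named application factor and does not
    cite the model itself as the reason."""
    if traced_factor not in ALLOWED_FACTORS:
        return False
    return not any(term in reason.lower() for term in OPAQUE_TERMS)
```

Under this sketch, “AI score too low” fails twice: it cites the model, and it traces to no application factor.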
BSA / AML — Bank Secrecy Act.
BSA and its Anti-Money Laundering requirements exist to ensure that financial institutions maintain records and file reports that support law enforcement in identifying financial crime. Suspicious Activity Reports are the primary instrument. AI is increasingly used in transaction monitoring to flag potential SAR candidates.
What this means for your daily work: AI-generated SAR recommendations require human review before filing. This is not optional. The legal accountability for a filed SAR rests with the institution and the staff member who approves it — not with the AI that flagged the transaction. An examiner reviewing your SAR files will ask how the determination was made. “The AI said so” is not a satisfactory answer.
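A filing gate along these lines might look like the sketch below. The function and its inputs are hypothetical, not a FinCEN requirement.

```python
# Hypothetical SAR filing gate: an AI-flagged candidate may proceed only
# when a named staff member has recorded their own determination.
def may_file_sar(human_approver: str, determination_note: str) -> bool:
    """Blocks filings with no named approver or an empty rationale:
    'the AI said so' does not pass this gate."""
    return bool(human_approver.strip()) and bool(determination_note.strip())
```

The gate does not evaluate the AI's flag at all; it only verifies that a human made and documented the determination, which is where the legal accountability sits.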
AI transaction monitoring systems must also meet the same documentation and auditability standards as manual monitoring processes. Model performance must be validated. If your institution is using an AI-enabled monitoring system, the system itself is a model under SR 11-7 and requires appropriate documentation.
The AIEOG AI Lexicon — the new vocabulary.
In February 2026, the US Treasury, FBIIC, and FSSCC published the AIEOG AI Lexicon — the first cross-regulator glossary for AI in financial services. It is not a regulation. But it is the vocabulary regulators will use in examinations. Institutions whose AI policies use different terminology for the same concepts create unnecessary examination risk.
Six terms from the AIEOG Lexicon are directly relevant to daily operations.
- Hallucination: an AI output that is factually incorrect, fabricated, or misleading, presented with apparent confidence.
- AI Governance: the policies, processes, and structures that define how an institution develops, deploys, monitors, and retires AI systems.
- AI Use Case Inventory: a structured record of all AI systems in active use, treated by the Lexicon as a baseline governance requirement.
- HITL: human-in-the-loop, required for any AI that influences material decisions.
- Third-Party AI Risk: risks from vendor AI systems, assessed under TPRM.
- Explainability: the capacity to provide human-understandable reasons for AI outputs.
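An AI use case inventory can start as something as simple as the record below. The schema is an illustrative assumption that borrows the Lexicon's vocabulary for field names; it is not a published template.

```python
from dataclasses import dataclass

# Hypothetical inventory row using AIEOG Lexicon vocabulary as field names.
@dataclass
class UseCaseEntry:
    system_name: str
    vendor: str                          # source of third-party AI risk
    use_case: str
    influences_material_decisions: bool
    hitl_in_place: bool                  # human-in-the-loop
    explainability_note: str             # how outputs trace to inputs

def hitl_gaps(entries):
    """Flag systems that influence material decisions without HITL."""
    return [e.system_name for e in entries
            if e.influences_material_decisions and not e.hitl_in_place]
```

Even this minimal schema lets operations staff answer the Lexicon's baseline question, what AI is in use, and surface the governance gap the HITL definition implies.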
What this means for your daily work: When reviewing or drafting AI governance documents, use the AIEOG Lexicon definitions verbatim. It is not about compliance theater — it is about alignment. Examiners who know the Lexicon will look for the Lexicon’s terms. Policies that use equivalent but different language create avoidable friction.
The five frameworks, in one table.
| Framework | Full name | Regulator | Priority | Bottom line |
|---|---|---|---|---|
| SR 11-7 | Model Risk Management Guidance | Federal Reserve / OCC | Critical | “If the AI touches a credit decision, you must be able to explain it.” |
| TPRM | Third-Party Risk Management | Interagency | High | “Every AI vendor requires a risk review before deployment — including tools already running.” |
| ECOA / Reg B | Equal Credit Opportunity Act | CFPB | Critical | “‘Score too low’ from a black-box model is not a legal adverse action reason.” |
| BSA / AML | Bank Secrecy Act / Anti-Money Laundering | FinCEN | Critical | “AI-generated SAR recommendations require human review before filing. The AI is not accountable. You are.” |
| AIEOG Lexicon | AI in Financial Services Vocabulary | US Treasury / FBIIC / FSSCC | High | “Regulators will use these definitions in examinations. Align your policy language now.” |
The governance gap is not a knowledge gap.
The Gartner Peer Community survey (via Jack Henry & Associates, 2025) found that 55% of financial institutions have no AI governance framework yet. This is not primarily a knowledge problem — the frameworks described above are publicly available and free to read. It is a translation problem: the distance between a 40-page regulatory guidance document and a practical policy a loan officer can follow is not bridged by publishing more guidance.
The most effective governance frameworks at community banks are the ones written for the people who do the work. A three-tier data classification that a teller can apply in 20 seconds. An adverse action checklist that a loan officer can complete before issuing a denial. An AI use case inventory that operations staff can maintain without a law degree. That is the translation work.
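As one concrete instance of that translation work, a three-tier data classification can be reduced to two yes/no questions. The tier names and rules below are illustrative assumptions, not any institution's actual policy.

```python
# Hypothetical two-question, three-tier classification a teller could
# apply in seconds. Tier names and rules are illustrative assumptions.
def classify_for_ai_use(contains_customer_data: bool,
                        contains_internal_data: bool) -> str:
    """Tier 3: customer data, never enters external AI tools.
    Tier 2: internal operational data, approved tools only.
    Tier 1: public data, generally safe for approved tools."""
    if contains_customer_data:
        return "tier-3-restricted"
    if contains_internal_data:
        return "tier-2-internal"
    return "tier-1-public"
```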
The 55% of institutions without a governance framework are not waiting for more regulation. They are waiting for someone to make the existing frameworks usable.