Building trustworthy AI governance
Change is coming at us fast — especially for those managing the legal, operational, compliance, and reputational risks that accompany artificial intelligence (AI) in banking. It can be overwhelming, even risky, if your governance program isn’t evolving at the same pace.
AI’s rapid advancement is reshaping governance expectations. Staying current with new capabilities is only part of the challenge — governance teams must also continually refine the policies, processes, and controls that keep AI use responsible and effective. Those that adapt quickly will be best positioned to capture AI’s benefits while minimizing risk.
Existing federal guidance
Fortunately, financial institutions don’t have to start from scratch. Several established federal frameworks — including the Federal Reserve Board’s (FRB) Supervisory Letter SR 11-7 guidance on model risk management, the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework, and Executive Order (EO) 14179 — already provide a strong foundation banks can adapt. These frameworks help provide structure and alignment with the spirit of recent federal directives, particularly the Office of Management and Budget’s (OMB) Memoranda M-25-21 and M-25-22 and the Government Accountability Office’s (GAO) Report GAO-25-107933.
AI governance matters
AI governance is more than a regulatory checkbox. Poorly managed AI, such as models trained on flawed or biased data, can lead to biased lending, privacy violations, cybersecurity breaches, reputational damage, and regulatory penalties. Federal agencies are now required to implement risk management practices for “high-impact AI,” defined as systems that significantly affect rights, safety, or access to critical services. Analogous high-impact uses in banking may include AI used in credit scoring, fraud detection, customer service automation, and underwriting.
The federal government’s emphasis on transparency, accountability, and human oversight reflects growing public and regulatory expectations. Banks that align with these principles will be better positioned to earn trust, avoid costly missteps, and scale AI responsibly. And while there is no comprehensive AI-specific federal law, banks should manage AI-related risks under existing federal regulations like the Gramm-Leach-Bliley Act (e.g., data protection), the Dodd-Frank Wall Street Reform and Consumer Protection Act (e.g., systemic risk and consumer protections), and the Bank Secrecy and USA PATRIOT acts (e.g., transaction monitoring). The Federal Financial Institutions Examination Council’s (FFIEC) Information Security Handbook also emphasizes board-level oversight, continuous threat intelligence, and third-party risk management.
State-level laws and regulations also influence bank AI governance. Some address AI directly, while others reach it through privacy, cybersecurity, consumer-protection, or other provisions that apply to banks.
Defining AI governance in financial institutions
AI governance in financial institutions follows the same principles as other governance disciplines—establishing clear policies, roles, and controls to manage risk. Unlike areas such as financial reporting under the Sarbanes–Oxley Act, however, there is currently no explicit federal framework governing the use of artificial intelligence. In the absence of comprehensive regulation, banks must proactively adapt existing federal guidance—such as model-risk, cybersecurity, and privacy frameworks—to manage the attendant risks of rapidly evolving AI technologies.
State developments in AI regulation
With no direct federal framework, states are stepping in. States have introduced AI-related bills ranging from attempts at comprehensive regulation to narrowly drafted laws targeting specific use cases, such as preventing employment discrimination, governing use in public schools, and regulating political advertising. While many proposals remain narrow in scope, several touch on transparency and consumer awareness relevant to banking; the most directly banking-relevant theme is the requirement in California and Utah that consumers be told when they are interacting with AI.
Review the potential application of state-specific regulations to your bank when creating policies or designing AI products and tools. A useful resource for U.S. state-level regulations and other information on the use of AI is available on the International Association of Privacy Professionals (IAPP) website.i
Colorado’s AI Accountability Act, set to take effect on June 30, 2026, is currently the most comprehensive state-level attempt to regulate AI. The law focuses on transparency, requiring that consumers be informed when they interact with AI or when AI is used in decision-making. Legislators are already debating changes that could reduce its scope or add greater consumer protections.
For banks, the takeaway is that preexisting laws still apply. Two key truths stand out when considering how banks should approach AI governance. First, laws and regulations that applied prior to the advent of AI continue to apply (e.g., ECOA, FCRA, CFPA, GLBA). Second, banks should update existing policies, or create new ones, to control for the added risks of generative and agentic AI; quality control continues to matter, and risks must still be recognized, assessed, monitored, and controlled.
Existing consumer protection laws still apply
Regulators have been clear that existing laws remain fully applicable, regardless of the technology used. In a May 2022 circular (2022-03), the CFPB stressed the importance of providing applicants the specific reasons for denying a credit application.
The CFPB later issued additional guidance addressing adverse action notice requirements when complex algorithms are used in credit decisioning, but that circular (2023-03) was among the items rescinded in 2025.ii However, the bureau has continued to emphasize that existing consumer protection laws apply to all technologies. Its 2024 report on chatbots in consumer finance—guidance that remains active—highlights ongoing regulatory concern about fairness, transparency, and accuracy in AI-enabled consumer interactions. In short, the CFPB has maintained that financial institutions are fully responsible for ensuring compliance, regardless of whether decisions or communications are made by humans or algorithms.iii
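To make the adverse action point concrete, here is a minimal Python sketch of how a bank might translate a credit model’s feature contributions into the specific, plain-language reasons a notice requires. The feature names, reason descriptions, and contribution scores are hypothetical, and the scores are assumed to come from whatever explainability method the bank’s model risk function has approved.

```python
# Illustrative sketch only: mapping model feature contributions to specific
# adverse action reasons. Feature names, reason text, and the source of the
# contribution scores are hypothetical assumptions.

# Contributions toward denial (more negative = pushed the score down),
# e.g., produced by an approved model-explainability method.
contributions = {
    "credit_utilization": -0.42,
    "months_since_delinquency": -0.31,
    "income_to_debt_ratio": -0.05,
    "length_of_credit_history": -0.18,
}

# Bank-maintained mapping from model features to consumer-facing reasons.
REASON_TEXT = {
    "credit_utilization": "Proportion of balances to credit limits is too high",
    "months_since_delinquency": "Delinquency on accounts",
    "income_to_debt_ratio": "Income insufficient for amount of credit requested",
    "length_of_credit_history": "Length of credit history",
}

def adverse_action_reasons(contribs: dict, top_n: int = 4) -> list:
    """Return the specific reasons that contributed most to a denial."""
    ranked = sorted(contribs.items(), key=lambda kv: kv[1])  # most negative first
    return [REASON_TEXT[name] for name, value in ranked[:top_n] if value < 0]

print(adverse_action_reasons(contributions))
```

The point is not the particular scoring method; it is that, however complex the model, the institution can still surface specific and accurate reasons for the applicant.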
AI-related risks vary widely across industries. For banks, the greatest concerns are reputational, operational, and compliance-related, not physical harm.
Practical steps to mitigate risk
Banks can reduce risks by maintaining an inventory of every active or proposed AI use case, documenting associated risks, and implementing appropriate controls. These are the fundamentals of good compliance governance, but identifying AI-specific risks often demands more creativity and collaboration than usual.
Common AI use cases for banks, their associated risks, and suggested controls are provided in Figure 1.
Figure 1: AI use cases, risks, and controls

| Use Case | Associated Risks | Risk Mitigation Controls |
| --- | --- | --- |
| Generative AI for Customer Service (e.g., chatbots, virtual assistants) |  |  |
| Agentic AI for Fraud Detection and Prevention |  |  |
| Generative AI for Marketing Content Creation |  |  |
| Agentic AI for Personalized Financial Advice |  |  |
| Generative AI for Document Drafting (e.g., loan agreements, disclosures) |  |  |
| Agentic AI for Operations Automation (e.g., loan processing, KYC) |  |  |
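One way to operationalize the inventory described above is a simple, queryable register. The sketch below is a minimal Python illustration using assumed field names and example values; it is not a prescribed schema.

```python
from dataclasses import dataclass, field
from typing import Optional

# Illustrative sketch of an AI use-case inventory record; the fields and
# example values are assumptions, not a prescribed schema.
@dataclass
class AIUseCase:
    name: str
    business_owner: str
    ai_type: str                                  # e.g., "generative" or "agentic"
    vendor: Optional[str] = None                  # third-party provider, if any
    status: str = "proposed"                      # proposed, pilot, or production
    risks: list = field(default_factory=list)
    controls: list = field(default_factory=list)

inventory = [
    AIUseCase(
        name="Customer service chatbot",
        business_owner="Retail Banking",
        ai_type="generative",
        vendor="Example Vendor, Inc.",
        status="pilot",
        risks=["inaccurate responses", "privacy of customer data"],
        controls=["human escalation path", "response logging and review"],
    ),
]

# Simple governance query: flag production use cases missing documented controls.
gaps = [u.name for u in inventory if u.status == "production" and not u.controls]
print(gaps)
```

Keeping records in this form makes it easier to answer examiner questions such as which production use cases lack documented controls or which vendors appear more than once.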
Oversight and accountability
As artificial intelligence becomes more deeply embedded in organizational processes, oversight and accountability take center stage, and the role of an AI Governance Officer (or equivalent) becomes increasingly critical. This individual helps ensure that AI systems and their use are not only effective but also strategically, ethically, and legally sound, steering the bank through the complex landscape of AI governance while balancing innovation with responsibility.
Federal guidance already points to a clear model for oversight. The federal policies we noted earlier require federal agencies to assign a Chief AI Officer (CAIO) to oversee AI governance, risk management, and strategy. In the banking sector, the Chief Compliance Officer (CCO) stands out as uniquely positioned to oversee a bank’s AI governance efforts. This is not only due to their deep understanding of regulatory frameworks and risk management, but also because AI governance intersects with core compliance responsibilities like data privacy, ethical use, transparency, accountability, and third-party risk.
CCOs already manage regulatory risk, coordinate across departments, interact with boards, and understand the legal implications of data-driven decisions. The GAO report, for example, highlights the importance of cross-functional leadership and interagency coordination — skills compliance professionals routinely exercise. By taking a leadership role in AI governance, compliance officers can ensure that AI systems meet ethical standards, regulatory requirements, and internal controls. Their involvement also helps bridge the gap between technical teams and executive leadership, fostering a culture of responsible innovation. In short, compliance officers can help banks adopt AI responsibly — shaping innovation while safeguarding institutional integrity.
Responsibilities of the AI governance function
The governance function in a U.S. bank, typically headed by the board of directors, is responsible for setting the strategic direction of the institution and ensuring compliance with laws, rules and regulations that apply to its business model. This includes overseeing the ethical, secure, and compliant deployment of artificial intelligence technologies across the organization. That work begins with knowing where, how and with whom the bank uses or wants to use AI and ensuring those initiatives align with the bank’s strategic goals, risk appetite, and ethical standards.
Key responsibilities include:
1. Maintaining regulatory compliance
A key responsibility is maintaining regulatory compliance by keeping current with federal and state laws, including those related to topics like privacy, algorithmic fairness, cybersecurity, and consumer protection.
2. Setting strategy and policies
A bank’s governance function is also responsible for developing and enforcing policies that reflect applicable regulations and ethical guidelines, ensuring consistent implementation across the institution. This also means aligning AI initiatives with broader enterprise risk management and model-risk frameworks.
3. Cross-functional collaboration
AI governance requires deep cross-functional collaboration across all areas of the bank, including risk, compliance, IT, legal, operations, and, most importantly, senior leadership and the board; no single area can fully assess all of the associated risks on its own.
4. Risk assessment and mitigation
Risk assessments are another key function of AI governance: identifying potential compliance issues such as bias, lack of transparency, or misuse of data, and recommending strategies to mitigate those risks.
5. Training and awareness
Additionally, the AI governance function plays a vital role in educating employees about responsible AI practices and compliance obligations, and in fostering a culture of awareness and accountability. Because employees are already using AI tools in their daily work, training should be accompanied by a clear policy that restricts use to bank-approved, monitored systems. As with other high-risk compliance obligations, periodic attestations of understanding and adherence to the policy can reinforce accountability.
6. Auditing and reporting
Regular audits and reporting are also part of the AI governance team’s remit, providing oversight and transparency to the board and senior leadership and, as necessary, to regulatory bodies. This reinforces trust in the governance framework and demonstrates the institution’s commitment to responsible AI oversight.
Recommended actions
The following steps draw from leading federal resources that can help financial institutions build effective AI governance programs. Some, like the Federal Reserve’s SR 11-7, are binding for specific entities; others, like the NIST AI Risk Management Framework and the GAO report, are voluntary but influential and often help shape internal policies and risk frameworks. Together, they provide a strong foundation for structuring internal policies and controls.
1. Designate a Senior AI Leader
First, designate a CAIO or similarly titled position. Assign this position to a senior compliance or risk leader to oversee AI governance, consistent with federal agency guidelines and applicable state standards. In many banks, the Chief Compliance Officer (CCO) is best positioned for this role given their cross-departmental oversight and deep understanding of regulatory and ethical frameworks.
2. Create and Maintain an AI Use-Case Inventory
Next, create a comprehensive inventory of the bank’s current AI use cases. Engage each department to identify how AI is currently being used or tested — including lesser-known areas such as procurement, vendor management, or HR — to ensure full visibility. You may find, for example, that the bank is paying for multiple products or solutions that do essentially the same or very similar things using AI. Look for overlapping tools or redundant vendors to minimize the number of third parties providing AI-related services, and keep only those relationships with dependable, well-known vendors that have transparent risk controls and a proven track record and strong reputation with your bank and/or peers.
3. Implement Ongoing Controls and Risk Assessments
Once the inventory is established, implement ongoing controls to maintain and update it regularly. This is especially important for use cases with high impacts on customer rights, safety, or access to financial services. Conduct risk assessments to determine what levels of ongoing control and oversight may be needed. Once assessed, apply strong risk management practices, including pre-deployment testing, impact assessments, and ongoing risk-based monitoring.
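As an illustration of how such risk-based tiering might be encoded, the following Python sketch assigns an oversight tier and review cadence to a use case. The criteria, tier names, and cadences are illustrative assumptions, not regulatory thresholds.

```python
# Illustrative risk-tiering sketch: criteria, tiers, and review cadences
# are assumptions a bank would calibrate to its own risk appetite.

def risk_tier(affects_customer_rights: bool, affects_access_to_services: bool,
              uses_sensitive_data: bool, autonomous_actions: bool) -> str:
    """Assign a governance tier to an AI use case based on its potential impact."""
    if affects_customer_rights or affects_access_to_services:
        return "high"          # e.g., credit scoring, underwriting
    if uses_sensitive_data or autonomous_actions:
        return "medium"        # e.g., fraud detection, KYC automation
    return "low"               # e.g., internal drafting assistance

REVIEW_CADENCE = {"high": "monthly", "medium": "quarterly", "low": "annually"}

tier = risk_tier(affects_customer_rights=True, affects_access_to_services=False,
                 uses_sensitive_data=True, autonomous_actions=False)
print(tier, REVIEW_CADENCE[tier])   # -> high monthly
```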
4. Align Existing Policies with AI-Specific Risks
Ensure internal policies relevant to the use of AI, such as data governance, IT infrastructure, privacy, and cybersecurity, reflect AI-specific risks and controls. Establish oversight mechanisms, such as an internal AI governance board or cross-functional team, to coordinate decision-making and ensure accountability. Regularly solicit feedback from stakeholders such as customers, employees, and external experts.
5. Publish an AI Strategy to Promote Transparency
OMB M-25-21 is emerging as a possible de facto standard for responsible AI in the private sector as banks and other financial institutions use it as a benchmark to help shape their own AI governance programs. Banks can elect to align with this approach by publishing an AI strategy, as OMB M-25-21 directs federal agencies to do. Such a strategy could include current and planned AI use cases, governance structures, risk protocols, and plans for workforce development. Making your AI strategy publicly accessible promotes greater transparency and builds trust with stakeholders by demonstrating a commitment to responsible and ethical AI practices.
Key elements of an AI governance program
An effective AI governance program is anchored in oversight and accountability, with clearly defined structures that ensure responsible decision-making and risk management. At the highest level, the board of directors must fully approve the governance framework, signaling organizational commitment and ensuring alignment with strategic, ethical, and regulatory priorities. Senior management plays a critical role in operationalizing this framework, translating board-level directives into actionable policies and procedures.
Cross-functional oversight
Oversight should be carried out by a cross-functional governance committee that includes representatives from across the organization, such as compliance, legal, data science, IT, risk, and business operations. Consider charging this committee with evaluating AI systems across their lifecycle, from design and training to testing, deployment, and monitoring. This ensures that decisions are made collaboratively and transparently.
Comprehensive documentation is essential to each stage of this process, capturing model development, data sources, decision logic, risk assessments, and mitigation strategies. This documentation supports traceability, facilitates audits, and enables meaningful engagement with regulators and stakeholders.
Phased implementation and measurement
Effective governance programs are built in phases, beginning with a risk assessment and stakeholder mapping, followed by policy development, employee training, and system integration. Key performance indicators (KPIs) — such as model accuracy, fairness metrics, incident response times, and audit completion rates — should be established to measure effectiveness and drive continuous improvement. Case studies from similar organizations or industries can provide valuable insights into successful governance models and common pitfalls.
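As a concrete example of one such KPI, the sketch below computes a simple adverse impact ratio comparing approval rates between two applicant groups. The group data and the 0.8 benchmark are illustrative assumptions; any fairness metric a bank adopts should be selected with legal and model risk input.

```python
# Illustrative fairness KPI: adverse impact ratio between two groups.
# Group labels, outcomes, and the 0.8 benchmark are assumptions for the sketch.

def approval_rate(outcomes: list) -> float:
    """Share of decisions in the list that were approvals (True)."""
    return sum(outcomes) / len(outcomes)

group_a = [True, True, False, True, True, False, True, True]    # reference group decisions
group_b = [True, False, False, True, False, True, False, True]  # comparison group decisions

ratio = approval_rate(group_b) / approval_rate(group_a)
print(f"Adverse impact ratio: {ratio:.2f}")

# A common rule of thumb flags ratios below 0.8 for further review.
if ratio < 0.8:
    print("Flag for fair-lending review")
```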
Embedding ethical and technical safeguards
To ensure responsible AI usage, governance programs should incorporate best practices that reflect both technical and ethical standards. These include maintaining a human-in-the-loop for critical decision-making processes, especially for high-risk use cases. Subject matter expert oversight should be embedded throughout the AI lifecycle to validate assumptions, interpret outputs, and guide ethical use. Where appropriate, organizations may opt for closed-system large language model (LLM) installations to enhance data security, reduce exposure to external risks, and maintain tighter control over model behavior.
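A minimal sketch of a human-in-the-loop gate appears below; the decision categories and confidence threshold are hypothetical and would be set by the bank’s own policies.

```python
# Illustrative human-in-the-loop gate: route high-risk or low-confidence
# AI outputs to a reviewer instead of acting on them automatically.
# Thresholds and decision categories are assumptions for the sketch.

HIGH_RISK_DECISIONS = {"credit_denial", "account_closure", "fraud_hold"}

def route_decision(decision_type: str, model_confidence: float,
                   confidence_floor: float = 0.90) -> str:
    """Return 'human_review' or 'auto' for a proposed AI-driven action."""
    if decision_type in HIGH_RISK_DECISIONS:
        return "human_review"                 # always reviewed, regardless of confidence
    if model_confidence < confidence_floor:
        return "human_review"                 # low confidence defers to a person
    return "auto"

print(route_decision("credit_denial", 0.97))   # -> human_review
print(route_decision("marketing_copy", 0.95))  # -> auto
```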
Transparency and customer rights
Consider providing customers with the ability to opt out of AI-driven decisions and request human review. Ensure appeal processes are accessible, fair, and timely (OMB M-25-21, OECD AI Principles).
Building a living framework
Ultimately, a governance program should be a dynamic, living framework — approved by the board, championed by leadership, and embedded across the organization. It must evolve alongside technological advancements and regulatory shifts, ensuring that AI systems remain trustworthy, transparent, and aligned with the organization’s mission and values.
Conclusion
AI governance is not optional — it’s foundational to your bank’s future. Federal agencies have laid out a roadmap that banks can adapt to their own operations, starting with leadership, strategy, risk management, and transparency. Banks should take the lead in translating these federal expectations into banking best practices. By doing so, they will not only protect their institutions from risk but also unlock the full potential of AI to serve customers, improve efficiency, and drive innovation. The tools and guidance already exist — what remains is for institutions to act with intention, ensuring their AI use is responsible, explainable, and aligned with regulatory and ethical standards.