Banks and payments companies have spent the past few years embedding artificial intelligence into their core operations. They’ve been running fraud detection, credit underwriting, customer service and anti-money laundering programs through systems that in many cases were built faster than the governance structures around them. Regulators are now catching up, and what they are asking for goes well beyond a policy statement.
The U.S. Treasury Department earlier this year published two resources, the Artificial Intelligence Lexicon and the Financial Services AI Risk Management Framework (FS AI RMF), that give financial institutions a concrete structure for documenting how their AI systems are governed.
The FS AI RMF was developed through a public-private body that included more than 100 financial institutions and government agencies, translating high-level AI principles into something examiners can actually test against.
The framework includes 230 control objectives organized across four functions: govern, map, measure and manage. According to law firm Lowenstein Sandler, in financial services this directly affects underwriting, fraud detection, anti-money laundering monitoring and personalization systems, and regulators will not simply ask whether a policy exists; they will ask to see logs and dashboards.
A bank running an AI-powered loan approval system, for example, would use the framework to document who owns that model, what bias testing has been performed, how performance is tracked over time and what escalation path exists when the system produces an unexpected output.
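As a rough illustration only, and not anything prescribed by the FS AI RMF itself, that documentation could be captured in a structured record along the following lines; every field name here is hypothetical, and the framework's 230 control objectives are far more granular than this sketch.

```python
# Hypothetical sketch of the governance metadata a bank might keep for an
# AI-powered loan approval model. Field names are illustrative and are not
# drawn from the FS AI RMF; comments note which of the framework's four
# functions each field loosely corresponds to.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelGovernanceRecord:
    model_id: str
    business_owner: str                  # accountable owner ("govern")
    intended_use: str                    # documented scope and context ("map")
    bias_tests: list[str] = field(default_factory=list)              # fairness checks run ("measure")
    performance_metrics: dict[str, float] = field(default_factory=dict)  # tracked over time ("measure")
    escalation_path: str = ""            # who is called on unexpected output ("manage")
    last_reviewed: date | None = None

loan_model = ModelGovernanceRecord(
    model_id="loan-approval-v3",
    business_owner="Head of Consumer Credit Risk",
    intended_use="Approve/decline decisions for unsecured personal loans",
    bias_tests=["disparate impact ratio by protected class", "adverse action reason-code audit"],
    performance_metrics={"auc": 0.81, "approval_rate_drift_pct": 1.7},
    escalation_path="Model Risk Management committee, then Chief Risk Officer",
    last_reviewed=date(2025, 11, 1),
)
```

In practice a record like this would live in a model inventory or model risk management system rather than in application code; the point is that each question the framework asks maps to a field an examiner can inspect.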
As covered by ZwillGen, the resources do not create new legal obligations, but for companies using AI in regulated financial services they are likely to become an important reference in examinations, internal audit expectations, third-party oversight and contract negotiations, even where no regulator expressly incorporates them.
For payments companies and FinTechs that route customer interactions through third-party language models or source credit scoring systems from outside vendors, the guidance makes clear that procuring an AI system does not transfer the governance obligation.
A traditional software vendor delivers deterministic functionality, where the same input produces the same output. An AI vendor delivers probabilistic functionality, where outputs can vary, drift or degrade over time, as noted by Swept AI. Existing vendor risk frameworks were not built for that dynamic, and guidance published the same day by the Organization for Economic Co-operation and Development (OECD) formalizes the expectation that they need to be.
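To make the contrast concrete, here is a toy sketch, with entirely hypothetical function names and numbers, of why a one-time acceptance test can verify the first kind of vendor deliverable but not the second:

```python
import random

# Traditional vendor component: the same input always yields the same output,
# so the buyer can verify it once at acceptance testing.
def interchange_fee(amount_cents: int) -> int:
    return int(amount_cents * 0.029) + 30  # fixed, deterministic pricing rule

# Mock AI vendor component: the score for the same transaction can shift as the
# hosted model, its training data or its version changes, so the buyer has to
# keep measuring outputs in production, not just at procurement.
def vendor_fraud_score(amount_cents: int, model_drift: float) -> float:
    base = min(amount_cents / 1_000_000, 1.0)
    return max(0.0, min(1.0, base + random.gauss(model_drift, 0.05)))

if __name__ == "__main__":
    assert interchange_fee(10_000) == interchange_fee(10_000)  # always identical
    print(vendor_fraud_score(10_000, model_drift=0.0))   # score at procurement time
    print(vendor_fraud_score(10_000, model_drift=0.2))   # same transaction months later, drifted model
```

The point is not the arithmetic but the monitoring posture: the deterministic call can be signed off once, while the probabilistic one has to be re-measured for drift throughout the life of the contract.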
Treasury's companion AI Lexicon, for its part, standardizes terminology across institutions, regulators and vendors, addressing the fragmentation that has made cross-functional AI conversations difficult to govern consistently.
Vendor Relationships Come Under Review
The OECD, a global policy forum, on the same day published its Due Diligence Guidance for Responsible AI, extending the governance conversation well past a firm's own walls. The guidance emphasizes a whole-of-value-chain approach, covering data suppliers, infrastructure providers, financiers and end users, and is designed to support AI supply chains that are resilient and resistant to disruption.
The OECD guidance outlines a six-step due diligence framework that includes embedding responsible business conduct into policies and management systems, identifying and assessing actual and potential adverse impacts, and ceasing, preventing and mitigating those impacts.
According to law firm Hunton Andrews Kurth, the guidance promotes interoperability between OECD AI risk management standards and other national and international frameworks, including the EU AI Act, meaning enterprises already compliant with those frameworks may find substantial overlap.
Obligations Reach the Model Itself
The European Union is taking the question of accountability further upstream still.
Under the EU AI Act, providers of general-purpose AI models with systemic risk must perform model evaluations in accordance with standardized protocols, assess and mitigate systemic risks, report serious incidents to the AI Office and national authorities without undue delay, and ensure adequate cybersecurity protections for the model and its physical infrastructure. The obligations apply whether the model is open source or closed source.
That puts a new category of scrutiny on the foundation model providers that financial institutions increasingly rely on to power their customer-facing and back-office AI products. A FinTech building a payments fraud tool on top of a large external AI model now has a direct compliance interest in whether the model’s provider is meeting its own regulatory obligations.
The regulatory picture across the U.S., Europe and international standard-setters might not be uniform, but the direction is consistent.