It’s not controversial to state that AI is shaping up to be the most disruptive technology of our lifetimes. In financial services, it is already redefining how we transact, manage risk and extend financial access. But AI’s pace of change has already outrun the frameworks designed to govern it — risk models, compliance structures and legal doctrines built for a human-driven economy are being asked to manage an increasingly autonomous one. The guardrails were never designed for this, and the gap between the governance we need and the governance we have is rapidly widening.
That gap shows up most visibly in the question no one has fully answered: when an AI agent goes rogue (completing the wrong transaction, breaching a spending limit or initiating a payment it was never authorized to make), who is responsible? We have established liability frameworks for human fraud and misuse, but agent failures don’t neatly fit those boxes — an agent acting outside its mandate is not necessarily at fault if it is merely following its code. Yet an agent acting “wrong” in real time, across millions of transactions globally, creates a category of risk that has outgrown existing governance frameworks.
Many organizations are responding to this risk by investing in better models and automation, but they are collectively ignoring the most critical link in the governance chain: humans. While AI use may be commonplace in today’s financial institutions, the majority of professionals are only trained to a superficial level and barely know how to prompt an LLM, much less explain how it arrives at its results. Put bluntly, if the humans responsible for governing artificial intelligence do not know how it works, then governance becomes performative rather than effective. At Trulioo, we see this shift firsthand as organizations scale identity verification and transaction monitoring globally.
Closing the governance gap requires more than cursory prompt training; users need comprehensive AI literacy across risk, compliance and operations. That means going beyond knowing how to use the tools to knowing how to read and de-risk their output. It is therefore incumbent upon the organizations building verification, authentication and fraud-prevention systems to also act as stewards of these frameworks, meeting the challenge with the right levels of training for the human overseers.
As agentic transactions scale, trust architectures will need to function at both the issuer and consumer level, ensuring every agent acts within the boundaries its principal has set. This will be no trivial task, but in financial services and payments, where a single authorization can move money, extend credit or deny access to thousands of people, a robust trust framework for AI will be imperative.
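To make the idea of a principal-set boundary concrete, a minimal sketch of such a check might enforce a mandate before any agent-initiated payment executes. The class and field names below (`Mandate`, `AgentGuard`, `per_txn_limit`) are illustrative assumptions, not drawn from any real system or product.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Mandate:
    """Boundaries a principal sets for an agent (illustrative)."""
    per_txn_limit: float          # max amount for a single transaction
    daily_limit: float            # max cumulative spend per day
    allowed_merchants: frozenset  # merchants the principal has approved


class AgentGuard:
    """Checks every agent-initiated payment against the mandate."""

    def __init__(self, mandate: Mandate):
        self.mandate = mandate
        self.spent_today = 0.0

    def authorize(self, merchant: str, amount: float) -> bool:
        """Return True only if the transaction stays inside the mandate."""
        if merchant not in self.mandate.allowed_merchants:
            return False
        if amount > self.mandate.per_txn_limit:
            return False
        if self.spent_today + amount > self.mandate.daily_limit:
            return False
        self.spent_today += amount
        return True


guard = AgentGuard(Mandate(per_txn_limit=100.0,
                           daily_limit=250.0,
                           allowed_merchants=frozenset({"acme"})))
print(guard.authorize("acme", 80.0))    # within all limits
print(guard.authorize("acme", 200.0))   # exceeds per-transaction limit
print(guard.authorize("unknown", 10.0)) # merchant not approved
```

In a production system these checks would sit outside the agent itself, so that a misbehaving model cannot bypass them; the point of the sketch is simply that the mandate is explicit, auditable data rather than behavior buried in a prompt.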
Meeting these demands will require not just humans in the loop, but the right humans in the loop. With so much attention being paid to AI and its awe-inspiring potential, it’s vital that we don’t overlook the operators who provide the actual intelligence.