Regula Warns Digital Identity Has a Human Problem
Identity and authentication in financial services are having a moment.
The marketplace must move beyond the single moment when a human proves who they are, gains access and proceeds.
“Every identity and authentication system in financial services was built on a foundational assumption: One human initiates an action, one system verifies that person, and it happens at one specific moment in time,” Henry Patishman, executive vice president, identity verification solutions, at Regula, told PYMNTS.
While that paradigm has held for decades, Patishman said that “unfortunately, today, it no longer does.”
Under the pressure of autonomous systems, the assumptions underpinning identity, authentication and trust increasingly appear to be fundamentally broken. The disruption is not theoretical. Artificial intelligence agents are already reshaping the threat landscape in ways that legacy systems were never designed to handle.
“The next chapter of fraud prevention is not just identity verification,” Patishman said. “It’s identity governance in a world where not every actor is human.”
The End of Static Identity as Firms Know It
AI systems can assemble synthetic identities, submit applications en masse and probe onboarding systems at machine speed. More troubling, AI agents don’t just execute attacks; they learn from them. Fraud, once constrained by human effort and coordination, is now accelerating in scale and sophistication.
“What used to be a one-time fraud attempt can become an automated feedback loop,” Patishman said, describing systems that iterate continuously, refining their approach based on which attempts succeed or fail.
“What once required organized fraud rings can now be fully automated,” he added.
The result is a dramatic compression of cost and time for attackers, and a corresponding expansion of risk for institutions.
Authentication, too, is undergoing a fundamental shift. In a world of autonomous agents, a single login or credential check is no longer sufficient. The focus moves from access to assurance: the identity behind an action must remain valid, current and uncompromised throughout the interaction.
“Authentication can no longer rely on a single login,” Patishman said. “In an agent-led environment, it also needs to focus on identity assurance throughout the whole journey.”
Higher-risk actions may require fresh proof of identity, such as re-confirming a face match, validating a document again or checking whether trusted identity signals suddenly changed.
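To make the idea concrete, the sketch below shows what a step-up check of this kind might look like in Python. It is illustrative only: the threshold, action names and verification callbacks are assumptions for the example, not Regula's product logic or any institution's actual rules.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ActionContext:
    action: str          # e.g. "pay_bill", "open_account", "transfer_funds"
    risk_score: float    # 0.0 (benign) to 1.0 (high risk), from upstream signals

# Hypothetical threshold and action list; a real system would tune these.
STEP_UP_THRESHOLD = 0.6
HIGH_RISK_ACTIONS = {"open_account", "transfer_funds", "change_payee"}

def requires_step_up(ctx: ActionContext) -> bool:
    """Higher-risk actions demand fresh proof of identity."""
    return ctx.action in HIGH_RISK_ACTIONS or ctx.risk_score >= STEP_UP_THRESHOLD

def handle_action(ctx: ActionContext,
                  face_match: Callable[[], bool],
                  document_check: Callable[[], bool]) -> bool:
    """Allow the action only if identity assurance is current."""
    if requires_step_up(ctx):
        # Re-confirm the face match and re-validate the document before proceeding.
        if not (face_match() and document_check()):
            return False  # deny: fresh identity proof could not be obtained
    return True

# Example: a routine bill payment passes without step-up; a transfer with a
# failed face re-check does not.
print(handle_action(ActionContext("pay_bill", 0.1), lambda: True, lambda: True))        # True
print(handle_action(ActionContext("transfer_funds", 0.2), lambda: False, lambda: True))  # False
```

The point of the pattern is that assurance is re-established at the moment of risk, rather than assumed from the original login.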
The scale and speed of AI-driven activity are placing unprecedented strain on verification systems. Surges in suspicious activity can overwhelm manual review teams, slowing down legitimate customers and increasing friction. Document fraud is becoming more sophisticated, and biometric attacks are more persistent. As a result, institutions will likely favor providers with deep expertise in document and biometric verification, particularly where higher-risk actions require stronger assurance.
To keep pace, institutions can move toward more integrated approaches that combine document analysis, biometrics, device intelligence and behavioral signals in real time.
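One simple way to picture that integration is a weighted fusion of per-signal risk scores into a single decision input. The weights, signal names and review threshold below are assumptions for illustration, not a description of any vendor's scoring model.

```python
def fuse_risk_signals(document_score: float,
                      biometric_score: float,
                      device_score: float,
                      behavior_score: float) -> float:
    """Combine per-signal risk scores (each 0.0 = benign, 1.0 = high risk)."""
    weights = {"document": 0.35, "biometric": 0.30, "device": 0.20, "behavior": 0.15}
    return (weights["document"] * document_score
            + weights["biometric"] * biometric_score
            + weights["device"] * device_score
            + weights["behavior"] * behavior_score)

# A suspect document plus a weak face match push the combined score above a
# hypothetical review threshold of 0.5.
combined = fuse_risk_signals(document_score=0.9, biometric_score=0.6,
                             device_score=0.1, behavior_score=0.3)
print(round(combined, 2))   # 0.56
print(combined >= 0.5)      # True -> route to step-up verification or manual review
```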
The Next Chapter of Identity as Governance
At the same time, the concept of authority itself is becoming more complex. If an AI agent initiates a transaction, establishing who acted becomes a layered inquiry.
“The question is no longer just who the customer is; it becomes who’s acting under what permission,” Patishman said. “Permission is the key word here, and it must be explicit.”
That evolution is most visible in how financial platforms must rethink delegation and consent. It is no longer sufficient to confirm that a user logged in or approved a transaction. Platforms must now prove that an AI agent was explicitly authorized, that the user provided informed consent, and that the agent operated strictly within defined limits. For example, an AI assistant may be allowed to pay recurring bills, but not open a new account or move funds. If permission is vague, assumed authority can easily be mistaken for legitimate authority.
“Platforms will need permission models that are precise, time-limited and easy to revoke,” Patishman said. “Trust will depend on whether companies can clearly show why a decision was made and prove the process was accurate.”
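A minimal sketch of such a permission model might look like the following, assuming a mandate record with an explicit scope, an expiry and a revocation flag. The field names and 30-day window are hypothetical choices for the example, not a standard or Regula's design.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class AgentMandate:
    principal: str                   # the human customer granting authority
    agent_id: str                    # the AI agent acting on their behalf
    allowed_actions: frozenset[str]  # explicit scope, e.g. {"pay_recurring_bill"}
    expires_at: datetime             # time-limited by construction
    revoked: bool = False            # easy to revoke at any time

    def permits(self, action: str, now: datetime | None = None) -> bool:
        """Explicit, unexpired, unrevoked permission; everything else is denied."""
        now = now or datetime.now(timezone.utc)
        return (not self.revoked
                and now < self.expires_at
                and action in self.allowed_actions)

# The assistant may pay recurring bills for 30 days, but may not open an account.
mandate = AgentMandate(
    principal="customer-123",
    agent_id="assistant-7",
    allowed_actions=frozenset({"pay_recurring_bill"}),
    expires_at=datetime.now(timezone.utc) + timedelta(days=30),
)
print(mandate.permits("pay_recurring_bill"))  # True
print(mandate.permits("open_account"))        # False
mandate.revoked = True
print(mandate.permits("pay_recurring_bill"))  # False after revocation
```

Because every decision reduces to an explicit grant that can be inspected, expired or revoked, a platform can also show after the fact why a given agent action was allowed.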
What emerges is a new blueprint for identity in financial services, where the question is no longer whether institutions can verify who a user is, but whether they can govern what autonomous systems are allowed to do in their name.