AI-Powered Scams Force Banks Into Real-Time Fraud Defense
Watch more: The Digital Shift: Block’s Brian Boates
Fraud has a new face every day.
Even a highly informed customer can fall victim if an attacker gains access through compromised credentials or weak authentication elsewhere in the system.
“Scammers are having a lot of success trying to take advantage of individuals,” Block Chief Risk Officer Brian Boates told PYMNTS. “…But just as often, they’re trying to gain access to legitimate customer accounts on their own by stealing identity credentials.”
This dual strategy of manipulating people while infiltrating infrastructure has raised the stakes for financial institutions. It also complicates the traditional division of responsibility between consumer vigilance and institutional security.
A growing body of evidence suggests that today’s most effective scams are succeeding not by overpowering technology, but by quietly subverting trust: the trust between consumers and institutions, and the trust embedded in the digital rails that move money at unprecedented speed.
That makes scam detection, prevention and recovery critical exercises for financial institutions and FinTechs.
When Trust Becomes the Attack Surface
The persistence and growth of scams raise a deeper question. Why do so many fraud attempts still succeed, even as defenses grow more sophisticated? The answer is, frequently, the human element of trust. One of the most revealing insights shaping cyber defense today is that scams are less about technical exploits and more about exploiting expectations.
Fraudsters impersonate banks, payment apps, employers and family members, using the credibility of familiar brands and channels.
“It’s really coming from all angles, which just makes it even that much harder for customers to keep their accounts safe because it might not be just up to them not falling victim to a scam, but it’s also the institutions that they’re working with and how secure those are as well,” Boates said.
This erosion of trust requires a fundamentally different defensive posture, he said.
See also: 2025 State of Fraud and Financial Crime in the United States
Historically, many fraud programs have relied on retrospective analysis. Transactions are reviewed after funds move, patterns are flagged days or weeks later, and investigations determine whether reimbursements are warranted. While this approach may satisfy regulatory requirements, it does little to prevent the initial harm.
“It’s one thing to find the bad actors after the fact and then make sure that you take the appropriate action,” Boates said. “But what’s much more effective is investing in more real-time technology. It requires a big investment in a lot of technology to keep things secure and to keep customer accounts safe.”
At Block, that shift has meant building machine learning systems capable of evaluating transactions as they happen, drawing on thousands of behavioral signals tied to the customer and the payment itself. The goal is not simply to identify fraud, but to intervene before it succeeds, before money leaves the account, and before trust is irreparably broken.
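Block has not published the internals of those systems. As a rough sketch of the idea, the hypothetical Python below scores a payment while it is still in flight using a handful of behavioral signals; every feature name, weight and threshold here is an illustrative assumption, not Block’s actual model.

```python
# Minimal sketch of real-time transaction scoring. All feature names,
# weights and thresholds are hypothetical illustrations, not Block's model.
from dataclasses import dataclass


@dataclass
class Transaction:
    amount_zscore: float   # how unusual the amount is for this customer
    new_recipient: bool    # first payment to this counterparty?
    device_mismatch: bool  # device differs from the customer's usual one
    velocity_1h: int       # payments initiated in the past hour


def risk_score(tx: Transaction) -> float:
    """Combine behavioral signals into a 0-1 risk estimate.

    A production system would use a trained model over thousands of
    signals; this weighted sum only illustrates scoring a payment
    while it is still in flight.
    """
    score = 0.15 * max(tx.amount_zscore, 0.0)
    score += 0.30 if tx.new_recipient else 0.0
    score += 0.35 if tx.device_mismatch else 0.0
    score += 0.05 * tx.velocity_1h
    return min(score, 1.0)


def decide(tx: Transaction, block_threshold: float = 0.8) -> str:
    """Intervene before the money moves, not after."""
    s = risk_score(tx)
    if s >= block_threshold:
        return "hold_for_review"  # pause the payment in real time
    if s >= 0.5:
        return "warn_customer"    # surface an in-the-moment warning
    return "approve"
```

The point of the sketch is the timing: the decision happens before the payment settles, so a hold or a warning can land while the money is still in the customer’s account.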
AI, Identity and the Authenticity Problem
Artificial intelligence is becoming perhaps the biggest throughline for scammers. AI-powered voice cloning and chat tools have made impersonation cheaper, faster and more convincing, blurring the line between legitimate and fraudulent interactions.
The innovation is forcing a rethink of identity verification itself. On one hand, consumers are gradually becoming more aware of sophisticated scam tactics. On the other, the raw materials of identity, such as Social Security numbers, government IDs and photos, have been widely exposed through years of data breaches.
“This is a really difficult part of the problem,” Boates said, adding that, as a result, institutions are moving beyond static identifiers toward techniques like selfie verification and liveness detection, which require real-time proof that a human being is present and interacting.
“It basically forces us to invest a lot in evolving our authentication systems,” Boates said, adding that identity is no longer a one-time checkpoint but an ongoing assessment informed by behavior, device signals and contextual risk.
It is also emblematic of the broader transformation underway in financial risk management, where static rules are giving way to probabilistic, continuously learning systems.
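To make that concrete, here is a minimal, hypothetical sketch of identity treated as a continuous assessment rather than a one-time gate. The signal names, weights and step-up tiers are assumptions for illustration only.

```python
# Sketch of identity as an ongoing assessment rather than a one-time
# checkpoint. Signal names and weights are hypothetical illustrations.
SIGNAL_WEIGHTS = {
    "new_device": 0.4,
    "impossible_travel": 0.5,    # geography inconsistent with last session
    "typing_rhythm_drift": 0.2,  # behavior differs from the account baseline
    "high_risk_action": 0.3,     # e.g., changing the payout account
}


def session_risk(observed_signals: set[str]) -> float:
    """Accumulate risk from contextual signals observed during a session."""
    return min(sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in observed_signals), 1.0)


def required_step_up(risk: float) -> str:
    """Map continuous risk to an authentication requirement."""
    if risk >= 0.7:
        return "liveness_check"  # real-time proof a human is present
    if risk >= 0.4:
        return "passkey_reauth"
    return "none"
```

In this framing, a routine session sails through, while a new device plus a payout change pushes the session over the threshold that demands live, real-time proof of presence.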
Read also: Banks Face a Fraud Spike as Attacks Outpace Legacy Systems
Resilience Through Smart Frictions, Not Blanket Barriers
One of the most distinctive elements of Block’s own approach is its use of what the industry increasingly calls smart friction. Rather than slowing every transaction or layering on indiscriminate warnings, the company aims to intervene only at moments of elevated risk, when a customer is most likely to benefit from a pause.
“You don’t want to catch the customer afterwards with an email reminding them of things to look out for, but in the moment to issue that real-time warning,” Boates said.
The effectiveness of this strategy depends on precision. Overuse of friction can quickly backfire, training users to ignore alerts or abandon platforms altogether. Block’s models, trained on historical data and continuously refined, are designed to be selective.
“If you warn everybody all the time, people become numb to it, and they just click through,” Boates said.
By intervening “surgically,” Block sees a meaningful rate of abandoned or canceled payments, transactions that might otherwise have resulted in losses, he said.
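One way to picture that selectivity is threshold calibration: choose a warning cutoff from historical scores so that friction touches only the riskiest sliver of payments. The sketch below uses simulated scores and a made-up 1% target; neither reflects Block’s actual figures.

```python
# Sketch of calibrating "smart friction": pick a score threshold so
# warnings reach only the riskiest slice of payments, keeping alerts
# rare enough that customers still take them seriously. The target
# rate and the scores are illustrative, not Block's numbers.
import random


def calibrate_threshold(historical_scores: list[float],
                        target_warn_rate: float = 0.01) -> float:
    """Return the score above which roughly target_warn_rate of
    historical payments would have triggered a warning."""
    ranked = sorted(historical_scores, reverse=True)
    cutoff_index = max(int(len(ranked) * target_warn_rate) - 1, 0)
    return ranked[cutoff_index]


# Example: simulated risk scores for past payments.
random.seed(7)
scores = [random.betavariate(1, 20) for _ in range(100_000)]
threshold = calibrate_threshold(scores, target_warn_rate=0.01)
print(f"Warn only above {threshold:.3f}; "
      f"{sum(s > threshold for s in scores) / len(scores):.2%} "
      f"of payments see friction")
```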
The future of scam prevention will not be won after the fact, Boates said. It will be decided in milliseconds, at the precise moment when a system recognizes risk, introduces friction and gives trust a fighting chance.