Fake Normal Is Becoming the New Fraud Threat
The “I” in “AI” doesn’t always just stand for intelligence. Increasingly, across the security sector, it can also stand for “Identity.” As in, Artificial Identity.
With AI continuing to blur the line between human and machine behavior, identity verification is entering a new era defined less by passwords and biometrics and more by patterns, context and programmable trust.
“We see the troubles that banks and FinTechs are having with how they enforce identity,” Tim Joslyn, chief technology officer at Paymentology, told PYMNTS during a discussion for the March edition of the “What’s Next In Payments” series, “How Will AI Change Identity?”
“There are lots of examples with banks all across the world who may have previously relied on voice checks and are now having to step back from that,” he said.
Digital security systems have traditionally relied on a set of familiar signals to determine whether a user is legitimate. But as AI models become capable of mimicking not just human speech or appearance but human behavior itself, those signals are eroding.
“I think voice has already failed,” Joslyn said. “If you’re a bank and you’re relying on voice, then potentially you’re already in trouble.”
“Selfies and videos are not going to be far behind,” he added. “Even with holding up a copy of today’s newspaper or timestamping a video, all of this now can be easily faked.”
AI Is Forcing Banks to Rethink Security
Fraud detection systems have spent the last decade leaning heavily on behavioral analytics. Instead of asking users to prove who they are directly, platforms observe how they behave: how they move a mouse, how quickly they type, how they navigate an application and more.
That data underpins much of modern fraud detection infrastructure. But AI-driven bots are now beginning to replicate those signals.
“If you start to have bots that don’t look like bots and actually do look like humans — they’re typing with human-like cadence, maintaining long-lived sessions — that’s where the real fraud starts to come in,” Joslyn said. “Fake normal behavior worries me the most.”
Detecting those threats requires a different mindset. Instead of examining isolated signals, platforms must look for patterns across massive datasets.
“You try to work out if you’ve seen identical data across multiple journeys. Something might look human, but if you see it a thousand times, it’s probably not human,” he said.
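The detection logic Joslyn describes can be sketched in a few lines: summarize each session as a coarse behavioral signature and flag any signature that recurs across an implausible number of independent journeys. The feature names and threshold below are hypothetical illustrations, not anything Paymentology has disclosed.

```python
from collections import Counter

def flag_fake_normal(sessions, threshold=1000):
    """Flag behavioral signatures seen across too many journeys.

    Each session carries a "signature": a tuple of coarse behavioral
    features (hypothetical here: a typing-cadence bucket and a
    mouse-path hash). One signature repeated across a thousand
    sessions is probably not a human.
    """
    counts = Counter(s["signature"] for s in sessions)
    return {sig for sig, n in counts.items() if n >= threshold}

# 1,500 sessions share one signature; three sessions are unique.
sessions = [{"signature": ("cadence:42ms", "path:a9f3")}] * 1500
sessions += [{"signature": ("cadence:%dms" % i, "path:x")} for i in (55, 61, 73)]
suspicious = flag_fake_normal(sessions)
```

Any single signal here might look perfectly human; it is only the repetition across the dataset that gives the bot away.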
From Binary Identity to Continuous Trust
If AI is undermining traditional authentication checkpoints, the next phase of identity security may rely less on single moments of verification and more on continuous trust scoring.
For example, a payment network like Visa or Mastercard might gradually accumulate behavioral signals about how a cardholder spends, what devices they use, and where transactions typically originate.
“Over time, it builds up an authentication profile of you,” Joslyn said. “So, when you make payments, it knows that actually is you because it’s building real-time trust scoring.”
That can be easier said than done. Today’s systems largely operate on a binary model: a user is either authenticated or not.
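The shift from that binary check to the real-time trust scoring Joslyn describes can be pictured as a running score nudged by each observation rather than a yes/no gate. The signals and weighting below are an illustrative sketch, not a description of any network's actual model.

```python
def update_trust(score, signals, alpha=0.2):
    """Exponentially weighted trust score in [0, 1].

    `signals` is a dict of hypothetical boolean checks (known device,
    usual geography, typical merchant type). Each transaction nudges
    the running score toward the fraction of checks that match, so
    trust accumulates gradually instead of flipping between
    "authenticated" and "not."
    """
    match_rate = sum(signals.values()) / len(signals)
    return (1 - alpha) * score + alpha * match_rate

score = 0.5  # neutral starting point for a new cardholder
for txn in [{"known_device": True, "usual_geo": True, "typical_mcc": True}] * 10:
    score = update_trust(score, txn)
# consistent behavior drifts the score upward toward 1.0
```

A sustained run of in-pattern transactions builds trust slowly; a single out-of-pattern one dents it without zeroing it out, which is the point of scoring over gating.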
But Joslyn stressed that, for banks and merchants still relying on older authentication systems, the transition to this new model doesn’t necessarily require a complete overhaul. Many organizations already possess the necessary tools — from in-app authentication to 3-D Secure payment verification. The challenge is using them intelligently.
“The stuff is there,” he said, advocating a gradual, risk-based approach that adjusts authentication requirements depending on the transaction. “Low-risk transactions should flow. High-risk ones should require step-up authentication.”
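That risk-based flow is simple to express in code: score the transaction, let low-risk ones pass, and challenge the rest. The risk factors and thresholds here are illustrative assumptions only.

```python
def authentication_action(amount, trust_score, new_device):
    """Hypothetical risk tiers for a card transaction.

    Low-risk transactions flow frictionlessly; riskier ones trigger
    step-up authentication (e.g. an in-app prompt or 3-D Secure);
    the riskiest are declined or sent to review.
    """
    risk = 0.0
    if amount > 500:
        risk += 0.4                      # large ticket
    if new_device:
        risk += 0.4                      # unfamiliar device
    risk += (1.0 - trust_score) * 0.4    # weak behavioral profile
    if risk < 0.3:
        return "frictionless"
    if risk < 0.6:
        return "step_up"
    return "decline_or_review"

authentication_action(20, 0.9, False)    # small, trusted: flows
authentication_action(900, 0.9, False)   # large but trusted: step-up
authentication_action(900, 0.4, True)    # large, untrusted, new device
```

The building blocks already exist, as Joslyn notes; the work is in the routing logic that decides when to invoke them.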
Tokenization Becomes a Security Backbone
While behavioral analytics evolves, another technology may quietly become a key backbone of AI-era payments security: tokenization. Tokenization replaces sensitive payment information, such as card numbers, with unique digital tokens. Merchants process the token instead of the original card data, dramatically reducing exposure to breaches.
“I think payment tokenization is probably one of the most underappreciated fraud controls,” Joslyn said.
A token isn’t just a substitute for a card number. It can carry rules and attributes that govern how it can be used, and those programmable controls may make tokens particularly valuable in a future where AI agents conduct transactions on behalf of consumers. Rather than granting an AI assistant full access to a bank account, for example, users could issue tightly restricted tokens with predefined spending limits.
“Using a token is like giving someone a five-dollar bill. Not using tokenization is like giving them your card, your PIN number, and access to your entire bank account,” Joslyn said. “When agents are tightly scoped and using tokens with controls on what they can do, it becomes much easier to trust them.”
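The "five-dollar bill" idea translates directly into a data structure: a random surrogate value carrying its own usage rules, so an AI agent can spend within scope and nothing more. This is a minimal sketch of the concept, not any network's actual token format.

```python
import secrets
from dataclasses import dataclass, field

@dataclass
class ScopedToken:
    """A programmable payment token: a random stand-in for the card
    number that enforces its own spending limit and merchant scope."""
    spend_limit: float        # total the holder may spend
    allowed_merchants: set    # merchant IDs the token works at
    value: str = field(default_factory=lambda: secrets.token_hex(16))
    spent: float = 0.0

    def authorize(self, merchant_id, amount):
        if merchant_id not in self.allowed_merchants:
            return False  # out of scope
        if self.spent + amount > self.spend_limit:
            return False  # limit exhausted
        self.spent += amount
        return True

# A five-dollar token an agent can only use at one merchant.
token = ScopedToken(spend_limit=5.00, allowed_merchants={"coffee_shop"})
token.authorize("coffee_shop", 3.50)   # within scope and limit
token.authorize("electronics", 1.00)   # wrong merchant: refused
```

Even if the token leaks, the attacker holds a capped, single-purpose credential rather than the card, the PIN and the account behind them.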
As AI remakes commerce and payments through automated shopping and conversational banking, the rules of identity are being rewritten in turn.