Insurance Faces a New Threat That Looks Like a Customer
The never-ending battle against payments fraud is entering a more complicated phase as artificial intelligence (AI) allows criminals to replicate human activity with unsettling accuracy.
Voices can be synthesized, identities can be fabricated from fragments of real data and automated programs increasingly behave like legitimate customers moving through digital channels.
Those developments are creating new vulnerabilities across financial services, and especially in insurance payments, according to Kevin Ostrander, chief revenue officer at digital insurance platform One Inc.
Insurance payment systems face particular exposure because they manage both premium payments and claims disbursements. Fraud attempts may involve automated scripts testing card numbers, bots probing payment systems or criminals attempting to access policyholder accounts.
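To make the card-testing pattern concrete, the sketch below shows one common countermeasure: a rolling velocity check that flags a device trying many distinct card numbers in a short window. The thresholds and field names are illustrative assumptions, not a description of One Inc's systems.

```python
from collections import defaultdict, deque
import time

# Illustrative thresholds; production systems tune these empirically.
WINDOW_SECONDS = 60
MAX_DISTINCT_CARDS = 5

# device_id -> deque of (timestamp, card_fingerprint) attempts
_attempts = defaultdict(deque)

def record_attempt(device_id: str, card_fingerprint: str) -> bool:
    """Record a payment attempt; return True if the device looks like a card-testing script."""
    now = time.time()
    window = _attempts[device_id]
    window.append((now, card_fingerprint))
    # Discard attempts that have aged out of the rolling window.
    while window and now - window[0][0] > WINDOW_SECONDS:
        window.popleft()
    # Many distinct cards from one device in one minute is a classic card-testing signature.
    distinct_cards = {fp for _, fp in window}
    return len(distinct_cards) > MAX_DISTINCT_CARDS
```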
“AI is causing a great challenge around traditional identity checks by mimicking human behavior at unprecedented scale,” Ostrander said during a “What’s Next in Payments” interview with PYMNTS. “We’re seeing fraudsters that are using AI to create synthetic identities that pass basic verification processes.”
Synthetic identities often combine stolen financial information with fabricated personal details. When those identities are convincing enough to pass traditional verification checks, fraudsters can open accounts, access sensitive information or initiate payments that appear legitimate.
“There’s a hyper focus in the industry right now on measuring and detecting anomalies in behavior and data patterns that really ensure even the most sophisticated synthetic identities are flagged and checked before any basic verification process takes place,” Ostrander told PYMNTS.
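One way to picture that kind of pattern checking: synthetic identities tend to be internally inconsistent in ways real identities are not. The sketch below flags a few classic inconsistencies; the field names and thresholds are hypothetical, chosen for illustration rather than drawn from any vendor's verification logic.

```python
from datetime import date

def synthetic_identity_flags(applicant: dict) -> list[str]:
    """Flag data-pattern inconsistencies that often betray synthetic identities.

    Expected keys (illustrative): dob, credit_file_opened, email_first_seen (dates),
    ssn_shared_count (int: distinct identities seen using the same SSN).
    """
    today = date.today()
    flags = []
    age_years = (today - applicant["dob"]).days // 365
    file_age_years = (today - applicant["credit_file_opened"]).days // 365

    # An established adult whose entire credit history appeared recently is a classic signal.
    if age_years >= 25 and file_age_years < 1:
        flags.append("thin, newly created credit file for an established adult")
    # A contact address with no observable history often accompanies a fabricated persona.
    if (today - applicant["email_first_seen"]).days < 30:
        flags.append("email address first seen within the last 30 days")
    # The same SSN attached to multiple identities suggests stolen data being recombined.
    if applicant["ssn_shared_count"] > 1:
        flags.append("SSN associated with multiple distinct identities")
    return flags
```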
Fraud That Looks Like Normal Behavior
While fabricated faces and cloned voices draw attention, Ostrander said the most troubling development is the rise of automated behavior designed to imitate legitimate customers.
“Fake faces and fake voices are alarming, but the fake normal behavior is the most concerning and rapidly growing threat,” he said.
AI systems can now reproduce browsing habits, transaction patterns and conversational responses that pass for ordinary customer activity. Because those interactions so closely mirror the real thing, the telltale signals of fraud are far less obvious.
To detect those activities, payment providers increasingly analyze how users interact with digital interfaces.
Spotting bots often requires monitoring factors such as transaction speed, mouse movements, navigation patterns and device characteristics. Combined with systems that detect automation or device spoofing, those signals can help distinguish a legitimate policyholder from an automated program attempting to impersonate one.
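A stripped-down version of that kind of signal scoring appears below. The specific signals, weights and threshold are assumptions chosen to illustrate the approach, not a production model.

```python
from dataclasses import dataclass

@dataclass
class SessionSignals:
    # Client-side signals; names and semantics are illustrative.
    seconds_to_checkout: float   # humans rarely reach checkout within a few seconds
    mouse_path_points: int       # scripted sessions often report no pointer movement
    pages_visited: int           # bots tend to jump straight to the payment endpoint
    headless_browser: bool       # result of device/automation fingerprinting

def bot_risk_score(s: SessionSignals) -> float:
    """Combine behavioral signals into a 0-to-1 risk score (higher = more bot-like)."""
    score = 0.0
    if s.seconds_to_checkout < 3:
        score += 0.35
    if s.mouse_path_points == 0:
        score += 0.25
    if s.pages_visited <= 1:
        score += 0.15
    if s.headless_browser:
        score += 0.25
    return min(score, 1.0)

# Sessions scoring above a tuned threshold are challenged or blocked.
BOT_THRESHOLD = 0.6
```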
Behavioral Signals Become the New Security Layer
This evolving threat landscape has made behavioral analytics and biometric indicators more important to identity verification.
Ostrander said financial services companies are increasingly moving beyond traditional authentication tools toward layered security models that combine several methods at once.
“If you look at voice checks, they’re the first to likely fail because there have been significant advancements in voice synthesis technology,” he said.
Instead, organizations are combining multifactor authentication with biometric verification, behavioral analytics and tokenization. Those tools allow payment systems to evaluate risk dynamically while protecting sensitive customer data.
The objective is to create authentication systems that adapt to a user’s profile and behavior. Transactions that appear routine can proceed quickly, while suspicious activity triggers additional verification steps.
Security Without Sacrificing Convenience
At the same time, companies cannot allow security procedures to create excessive friction for legitimate customers.
“It’s a balance,” Ostrander said. “It’s escalating the verification process without disrupting the user experience.”
Adaptive authentication has emerged as one method for striking that balance. Systems can request additional verification, such as one-time codes or biometric checks, only when a transaction appears suspicious.
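In code terms, adaptive authentication is essentially a routing decision on a risk score. The minimal sketch below assumes a 0-to-1 score produced by upstream analytics; the thresholds are illustrative and would be tuned to each carrier's risk appetite.

```python
from enum import Enum

class AuthDecision(Enum):
    ALLOW = "allow"      # routine transaction, no added friction
    STEP_UP = "step_up"  # request a one-time code or biometric check
    BLOCK = "block"      # too risky to proceed automatically

def route_transaction(risk_score: float) -> AuthDecision:
    """Map a 0-to-1 risk score to a friction level (thresholds illustrative)."""
    if risk_score < 0.3:
        return AuthDecision.ALLOW    # low risk: preserve the user experience
    if risk_score < 0.8:
        return AuthDecision.STEP_UP  # suspicious: escalate verification
    return AuthDecision.BLOCK
```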
Tokenization also plays an important role. By replacing sensitive financial data with unique identifiers, tokenization ensures that even intercepted data cannot be reused in fraudulent transactions. Combined with encryption, the technique helps create a more secure environment for payment processing and identity verification.
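The basic shape of tokenization fits in a few lines, as the sketch below shows: the real card number is stored once in a vault, and a random identifier circulates in its place, so an intercepted token reveals nothing and cannot be reused outside the vault. This is a teaching illustration; real vaults add encryption at rest, strict access controls and often format-preserving tokens.

```python
import secrets

class TokenVault:
    """Minimal token vault: sensitive values in, opaque tokens out."""

    def __init__(self) -> None:
        # token -> card number (encrypted at rest in a real system)
        self._store: dict[str, str] = {}

    def tokenize(self, card_number: str) -> str:
        token = "tok_" + secrets.token_urlsafe(16)  # random; carries no card data
        self._store[token] = card_number
        return token

    def detokenize(self, token: str) -> str:
        # Only the vault, behind its own access controls, can recover the original value.
        return self._store[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")
# Downstream systems store and transmit only the token, never the card number.
```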
Preparing for Agentic AI
AI may also reshape payments in other ways. The emergence of agentic AI systems capable of acting on behalf of consumers is prompting caution among insurers and other financial institutions.
Ostrander said many carriers are approaching those capabilities conservatively while evaluating how automated systems might interact with policyholders and payment systems.
For now, the most immediate applications of AI in insurance payments involve fraud analytics and data insights that improve detection and risk management.
Looking ahead, Ostrander believes behavioral biometrics powered by AI will become an essential component of digital identity verification.
“AI-driven behavioral biometrics are essential,” he said. “It’s going to analyze how users interact with devices, things like typing patterns and navigation habits, create unique behavioral profiles and allow us to identify who is on the other side of a financial transaction.”
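As one concrete instance of what Ostrander describes, keystroke dynamics compares the rhythm of a typing sample against a profile enrolled from a user's past sessions. The sketch below uses a deliberately simple summary statistic and an invented tolerance; production systems build far richer models over many signal types.

```python
from statistics import mean

def keystroke_profile(intervals_ms: list[float]) -> float:
    """Summarize a typing sample as its mean inter-key interval in milliseconds."""
    return mean(intervals_ms)

def matches_profile(enrolled_mean: float, sample_ms: list[float],
                    tolerance: float = 0.25) -> bool:
    """True if the sample's rhythm falls within a relative tolerance of the profile."""
    return abs(keystroke_profile(sample_ms) - enrolled_mean) / enrolled_mean <= tolerance

# Profile enrolled from past logins; a replay bot types with no human cadence.
enrolled = keystroke_profile([110, 95, 130, 120, 105])
print(matches_profile(enrolled, [112, 101, 125, 118, 99]))  # True: familiar rhythm
print(matches_profile(enrolled, [400, 5, 400, 5, 400]))     # False: mechanical cadence
```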