Fraudsters Hack the Signals Behind Identity Trust
The cybersecurity arms race, true to its name, is turning into a sprint.
Malware evolves by the hour, ransomware syndicates operate like multinational firms, and state-backed actors continually probe, and at times breach, the edges of critical infrastructure.
As a result, the front line for financial institutions is shifting. Attackers today are not breaking the identity systems powering financial services outright. Instead, they are working within them, manipulating the signals those systems rely on to establish trust.
“We’re seeing a clear move in fraud from artifact manipulation to now signal manipulation, and most worrying, it’s now going to system manipulation,” Henry Patishman, executive VP, Identity Verification Solutions at Regula, told PYMNTS.
Simply put, fraud is shifting from faking inputs to shaping outcomes. Attackers combine valid-looking identity signals — from IDs, biometrics, device and behavioral data across onboarding and authentication — and exploit how systems make decisions. The result: identities that look legitimate, but don’t actually correspond to a real person.
That emerging threat, one that industry insiders are beginning to call “identity signal manipulation,” is forcing a rethink of how digital identity is verified across payments, banking and online platforms.
“In a world where everything can look real, trust will depend on how well you understand the integrity of identity signals, not just their appearance,” Patishman said.
The question is no longer whether a user was verified at a moment in time, but whether their identity remains consistent and trustworthy across sessions, devices and behaviors.
Fraudsters Move From Spoofing to Orchestration
What distinguishes this new threat from earlier forms of fraud is not just sophistication, but orchestration. Attackers are no longer spoofing a single attribute; they are coordinating multiple signals simultaneously to construct a convincing digital identity.
A recent case in the Netherlands, cited by Patishman, illustrates the shift. A single attacker opened nearly 50 bank accounts using real stolen passports, a live participant in front of a camera, and subtly altered biometric inputs. Each component appeared legitimate on its own, yet together they told a false story that the bank's fraud systems failed to detect.
“The interesting part was that real documents, real selfies, and a real human were all part of the process, but manipulated just enough to pass the controls,” Patishman said.
And that is the emerging essence of signal manipulation: not falsifying identity outright, but bending reality just enough to slip through fragmented defenses.
What makes these attacks particularly effective is how identity verification systems are structured. Many still treat verification as a one-time event, like a checkpoint at onboarding or login. But in practice, identity is expressed across a sequence of interactions — from onboarding to login to transactions — each generating its own signals.
“One of the biggest shifts is that identity verification is no longer a single decision,” Patishman said. “It’s a chain of signals over time.”
Fraudsters have adapted accordingly. Rather than attempting a single high-risk breach, they probe for weak links across that chain, such as by injecting synthetic video into a camera feed, replaying biometric sessions, or subtly altering device data. Each signal may pass independently. The deception now lies in how they fit together.
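One common defense against the replayed sessions described above is to bind each capture to a fresh, single-use challenge. The sketch below is purely illustrative; the function names, storage, and timeout are assumptions, not any vendor's actual API, and real injection defenses involve far more than a nonce:

```python
# Hypothetical sketch: rejecting replayed capture sessions with single-use nonces.
# All names and the TTL value are illustrative assumptions.
import secrets
import time

ISSUED = {}          # nonce -> issue timestamp
NONCE_TTL = 120      # seconds a challenge stays valid

def issue_challenge() -> str:
    """Issue a fresh nonce the client must embed in its capture session."""
    nonce = secrets.token_hex(16)
    ISSUED[nonce] = time.time()
    return nonce

def verify_capture(nonce: str) -> bool:
    """Accept a capture only if it carries a fresh, unused nonce."""
    issued_at = ISSUED.pop(nonce, None)   # pop makes each nonce single-use
    if issued_at is None:
        return False                      # unknown or already used: likely a replay
    return time.time() - issued_at <= NONCE_TTL

nonce = issue_challenge()
print(verify_capture(nonce))   # first use is accepted
print(verify_capture(nonce))   # the same session replayed is rejected
```

The point of the sketch is the shape of the control, not the code: a recorded or injected session cannot answer a challenge that did not exist when it was made.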
Trust at the Source, Not the Outcome
The fraud realities facing financial institutions are also driving a deeper conceptual change in identity verification, one that is moving away from validating outcomes and toward validating origins.
The emerging model emphasizes what might be called “signal provenance”: understanding where data comes from, how it was captured, and whether it has been altered along the way. It’s an approach that borrows from legal frameworks, where evidence must be supported by a clear chain of custody.
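The chain-of-custody idea can be sketched with an ordinary hash chain over capture events: each record commits to the one before it, so any alteration upstream breaks every link downstream. This is a minimal illustration under assumed names and fields, not a description of how any particular verification product records provenance:

```python
# Hypothetical sketch: a hash chain recording where each identity signal came
# from, so later stages can detect tampering along the way. Stage names and
# payload fields are illustrative assumptions.
import hashlib
import json

def chain_append(chain: list, stage: str, payload: dict) -> None:
    """Append a capture record that commits to the previous record's digest."""
    prev = chain[-1]["digest"] if chain else "genesis"
    record = {"stage": stage, "payload": payload, "prev": prev}
    record["digest"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)

def chain_valid(chain: list) -> bool:
    """Re-derive every digest; any edit along the chain breaks validation."""
    prev = "genesis"
    for rec in chain:
        body = {"stage": rec["stage"], "payload": rec["payload"], "prev": rec["prev"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or digest != rec["digest"]:
            return False
        prev = rec["digest"]
    return True

chain = []
chain_append(chain, "document_scan", {"doc_hash": "abc123"})
chain_append(chain, "selfie_capture", {"frame_hash": "def456"})
print(chain_valid(chain))                      # intact chain of custody
chain[0]["payload"]["doc_hash"] = "tampered"   # an upstream signal is altered
print(chain_valid(chain))                      # the break is detected downstream
```

The analogy to legal evidence holds: what is verified is not only the artifact but the unbroken record of how it got here.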
“Trust can’t be inferred from outcomes anymore. It has to be proven at the point of capture,” Patishman said. “A biometric match proves similarity, not authenticity.”
Traditional systems often focus on results such as a document or face match. But attackers increasingly manipulate inputs before those checks even occur, producing outputs that appear valid but are fundamentally compromised.
“Biometrics don’t fail,” Patishman said. “But the systems around them can fail to verify their authenticity.”
In practice, systems may correctly match biometric data — but still fail to detect that the input itself has been manipulated or synthetically generated. The result is a valid match built on compromised data.
At the same time, while many organizations have invested in strong individual tools, those tools often operate in isolation.
“Fraud doesn’t happen in one signal, it happens in gaps between them,” Patishman said. “The next generation of identity verification isn’t a better check with a better tool. It’s a coordinated system that understands how signals relate to each other.”
This means a document, a biometric check, and device or behavioral data may each appear valid on their own, but still not belong to the same identity. When these signals are verified separately, those inconsistencies can go unnoticed.
Closing those gaps requires orchestration: systems that correlate signals across sources and over time, identify inconsistencies, and adapt dynamically to risk. Rather than a series of independent checks, identity verification becomes an evidence-based process.
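The gap between isolated checks and orchestration can be made concrete with a toy example. In the sketch below, every individual signal passes its own test, yet scoring the relationships between signals flags the combination as high risk. The fields, thresholds, and weights are invented for illustration and are not Regula's model or any production scoring logic:

```python
# Hypothetical sketch: signals that each pass in isolation but do not hang
# together as one identity. All fields, thresholds, and weights are assumptions.
from dataclasses import dataclass

@dataclass
class Signals:
    doc_valid: bool        # document check passed
    face_match: float      # selfie-vs-document similarity score
    device_country: str    # geolocation of the capture device
    doc_country: str       # issuing country on the document
    device_age_days: int   # how long this device has been observed

def independent_checks(s: Signals) -> bool:
    """The legacy model: each signal judged on its own."""
    return s.doc_valid and s.face_match >= 0.80

def correlated_risk(s: Signals) -> float:
    """The orchestrated model: score inconsistencies *between* signals."""
    risk = 0.0
    if s.device_country != s.doc_country:
        risk += 0.4                      # signals disagree on geography
    if s.device_age_days < 1:
        risk += 0.3                      # brand-new device at onboarding
    if 0.80 <= s.face_match < 0.90:
        risk += 0.3                      # a "just good enough" match is itself a signal
    return risk

s = Signals(doc_valid=True, face_match=0.81,
            device_country="NL", doc_country="ES", device_age_days=0)
print(independent_checks(s))   # every isolated check passes
print(correlated_risk(s))      # but the combination scores as high risk
```

Real orchestration layers correlate far richer signals over time, but the design choice is the same one Patishman describes: the decision is made over the relationships, not over each check alone.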
Ultimately, as Patishman said, “security is now much more reliant on system and process design, not just tool accuracy.”