FinCEN Warns U.S. Banks About AI Deepfake Frauds
Law enforcement is warning U.S. banks about the danger of AI-powered identity fraud schemes.
The Treasury Department’s Financial Crimes Enforcement Network (FinCEN) issued an alert Wednesday (Nov. 13) to help financial institutions spot scams involving deepfake media created with generative artificial intelligence (GenAI).
“While GenAI holds tremendous potential as a new technology, bad actors are seeking to exploit it to defraud American businesses and consumers, to include financial institutions and their customers,” said FinCEN Director Andrea Gacki. “Vigilance by financial institutions to the use of deepfakes, and reporting of related suspicious activity, will help safeguard the U.S. financial system and protect innocent Americans from the abuse of these tools.”
According to the alert, FinCEN has seen an increase in suspicious activity reports from financial institutions describing the suspected use of deepfake media, especially fraudulent identity documents used to evade identity verification and authentication methods.
FinCEN said its analysis of banking data suggests criminals have used GenAI to create falsified documents, photographs and videos to get around customer identification and verification controls.
For example, some financial institutions have reported that criminals used GenAI to alter or generate images used for identification documents, such as driver’s licenses or passports.
“Criminals can create these deepfake images by modifying an authentic source image or creating a synthetic image,” the agency said. “Criminals have also combined GenAI images with stolen personally identifiable information (PII) or entirely fake PII to create synthetic identities.”
Deepfake images aren’t the only way criminals are using AI to further their efforts. As PYMNTS wrote in September, AI chatbots, often praised for their productivity benefits, now threaten cybersecurity as criminals employ them to develop sophisticated malware.
According to that report, researchers at HP Wolf Security discovered one of the first known instances of attackers using generative AI to write malicious code for spreading a remote access Trojan.
“This trend marks a shift in cybersecurity, democratizing the ability to create complex malware and potentially leading to a surge in cybercrime,” PYMNTS wrote.
“If your company is like many others, hackers have infiltrated a tool your software development teams are using to write code. Not a comfortable place to be,” Lou Steinberg, founder and managing partner at CTM Insights and former CTO of TD Ameritrade, told PYMNTS in September.