Criminals are using artificial intelligence in combination with websites, social media and other tools to carry out scams and other malicious operations, OpenAI said Wednesday (Feb. 25) in its latest threat report.
One of the latest threats the company disrupted involved an individual connected to Chinese law enforcement who tried to use its AI model to plan covert influence operations (IO) against domestic and foreign adversaries, according to the report.
In at least one case, the OpenAI model refused to comply, so the individual and others connected with Chinese law enforcement used other platforms and AI models.
“The user described the operations as using dozens of tactics, ranging from abusive reporting of dissidents’ social media accounts, through mass online posting, to forging documents and impersonating US officials to intimidate critics,” the report said.
In another case highlighted in the report, a network of accounts in Cambodia used AI to pose as a fake dating agency and target young men in Indonesia in a romance scam.
“Unusually, this scam network combined manual ChatGPT prompting and an automated AI chatbot to try to entrap its targets,” the report said.
In a third example of malicious attempts to use AI models, a content farm linked to Russia translated and generated content to be posted on social media, according to the report.
This group sometimes used ChatGPT to generate social media comments which were then posted by accounts that appeared to be located in different parts of the world.
In these and other cases, AI-generated content did not appear to be the decisive factor in whether a campaign was successful, the report said. Targeted ads or popular social media accounts played a more important role.
“This underscores the importance of studying the nature of threat actors and the ways in which they behave, as well as the content they generate,” the report said.
The PYMNTS Intelligence report “2025 State of Fraud and Financial Crime in the United States” found that the rising sophistication of fraud is rewriting institutional risk. At the same time, financial institutions are deploying AI, machine learning and advanced behavioral analytics to close gaps in their defenses.