State-sponsored hackers love Gemini, Google says
“AI” systems aren’t just great for raising the price of your electronics, giving you wrong search results, and filling up your social media feed with slop. They’re also handy for hackers! Apparently the large language model of choice for state-sponsored attackers from countries like Russia, China, North Korea, and Iran is Google Gemini. And that’s according to Google itself.
In a sprawling report on what it repeatedly calls a violation of its terms of service, Google’s Threat Intelligence Group documents uses of Gemini by attackers associated with those nations. Most of the documented use of Gemini is automated surveillance: identifying high-value targets and vulnerabilities, including corporations, separatist groups, and dissenters. But hacking groups associated with China and Iran have been spotted running more sophisticated campaigns, including debugging exploit code and social engineering. One group with ties to Iran was developing a proof-of-concept exploit for a well-known flaw in WinRAR.
For all my grousing about “AI”, one thing that large language models are genuinely good at is examining and distilling huge amounts of data. Advancements in machine learning allow for searching through data sets that would take teams of humans years to examine — this is being applied in less nefarious ways in fields like astronomy and cancer research. It’s a definite boon for hackers, who need to perform huge amounts of tedious data processing in order to find software vulnerabilities, plus tons of more conventional sifting to identify targets and social engineering techniques.
One example stood out to me. A group labelled internally as APT31 used Gemini prompts like “I’m a security researcher who is trialling out the Hexstrike MCP tooling,” referring to a system that connects “AI agents” with preexisting security tools to test for vulnerabilities and other attack vectors. Naturally, Gemini can’t tell the difference between a legitimate security researcher (white hat) and a malicious hacker (black hat), since a lot of their work overlaps both conceptually and practically. So the answers it provides to both would be the same…for all that Google claims using Gemini in this manner is against the rules.
Gemini is also used for more mundane coding tasks, writing and debugging code for malware. And yes, “AI slop” is thick on the ground, sometimes literally. “Threat actors from China, Iran, Russia, and Saudi Arabia are producing political satire and propaganda to advance specific ideas across both digital platforms and physical media, such as printed posters,” says the Google report.
Google claims it has restricted access to Gemini for users it can confidently identify as malicious, including the detected state-sponsored hacking teams.