Google’s New Pentagon AI Deal Sparks Concern Over ‘Irreparable Damage’
Google’s uneasy truce with the Pentagon is getting harder to maintain.
The tech giant has signed a deal granting the US Department of Defense access to its Gemini AI models for classified military work, multiple sources confirmed Tuesday. The agreement, first reported by The Information, allows the Pentagon to deploy Google’s AI for “any lawful government purpose,” a phrase that has become a flashpoint across Silicon Valley’s increasingly uneasy relationship with Washington.
The classified access is an amendment to a $200 million contract Google had already signed with the Pentagon, under which Gemini was being used on unclassified government systems, including through the Pentagon’s GenAI.mil platform. The new layer extends that access deeper, into classified networks used for sensitive operations, including mission planning and weapons targeting.
“We are proud to be part of a broad consortium of leading AI labs and technology and cloud companies providing AI services and infrastructure in support of national security,” spokesperson Jenn Crider said in a statement. “We remain committed to the private and public sector consensus that AI should not be used for domestic mass surveillance or autonomous weaponry without appropriate human oversight.”
The Pentagon’s AI chief, Cameron Stanley, confirmed the expanded use of Gemini to CNBC, framing it in practical terms. “There’s a lot of different things that are saving thousands of man hours, literally thousands of man hours on a weekly basis,” he said.
Stanley also made clear this isn’t about locking in one provider. “Overreliance on one vendor is never a good thing,” he told CNBC. “We’re seeing that, especially in software.”
For Google, the deal is another step into the lucrative defense AI market. For critics inside and outside the company, it revives the same question that fueled the Project Maven revolt: where does “lawful government use” end, and where do Silicon Valley’s ethical lines begin?
The fine print and what it leaves out
The deal’s language is carefully worded, but notably tilted toward the government’s position.
According to The Information’s reporting, the contract states that “the parties agree that the AI System is not intended for, and should not be used for, domestic mass surveillance or autonomous weapons (including target selection) without appropriate human oversight and control.”
But the same document makes clear that Google does not hold any right to “control or veto lawful government operational decision-making,” a distinction that critics say renders those guardrails largely symbolic.
The agreement also requires Google to help adjust its AI safety settings and filters at the government's request, The Information reported, a clause likely to draw further scrutiny from civil liberties advocates and Google's own workforce.
A rebellion within the ranks
Google has defended the agreement, but some employees say the company is moving too quickly into classified military work. More than 600 employees from Google's AI and Cloud divisions, including many from the elite DeepMind lab, signed a letter to CEO Sundar Pichai urging him to cancel the deal.
The employees expressed deep concern that once the AI enters a classified environment, the company will lose the ability to monitor how it is actually used. In the letter, reported by The Washington Post, workers wrote: “We want to see AI benefit humanity; not to see it being used in inhumane or extremely harmful ways.”
They warned that making the "wrong call" could cause "irreparable damage" to Google's reputation. The internal tension echoes 2018, when a similar staff uprising forced Google to walk away from Project Maven, a Pentagon program that used AI to analyze drone footage.
The agreement comes amid a broader dispute between the Pentagon and AI startup Anthropic, which was barred from Defense Department work after refusing to relax safeguards on surveillance and autonomous weapons. That standoff has spilled into the courts, underscoring a growing divide between government demands and tech companies’ attempts to set ethical boundaries.
At the same time, the Pentagon is seeking billions in funding to expand AI-driven systems, signaling that AI will play an increasingly central role in military strategy.
The post Google’s New Pentagon AI Deal Sparks Concern Over ‘Irreparable Damage’ appeared first on eWEEK.