OpenAI shares its contract language and 'red lines' in agreement with the Department of Defense
- OpenAI shared some contract language from its agreement with the Department of Defense.
- Its tech can't be used for mass domestic surveillance or autonomous weapons, OpenAI said.
- OpenAI also said that the government should not label its rival, Anthropic, a supply chain risk.
OpenAI says its agreement with the Department of Defense is "better" and has more safety guardrails than the one Anthropic was blacklisted for refusing to comply with.
In a blog post published Saturday, OpenAI shared some contract language from its agreement with the Department of Defense, including clauses that indicate its tech cannot be used for mass domestic surveillance or to power autonomous weapons or high-stakes decision systems like "social credit" scores.
"We think our agreement has more guardrails than any previous agreement for classified AI deployments, including Anthropic's," OpenAI's post read. "In our agreement, we protect our red lines through a more expansive, multi-layered approach. We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections. This is all in addition to the strong existing protections in U.S. law."
OpenAI CEO Sam Altman took to social media shortly after the company's blog post was published, answering questions from users concerned about the nature of OpenAI's agreement with the government.
In Ask-Me-Anything-style responses, he doubled down on the claim that OpenAI's agreement is better than Anthropic's, not just for the broader AI landscape but also for the American people.
"Anthropic seemed more focused on specific prohibitions in the contract, rather than citing applicable laws, which we felt comfortable with," Altman wrote in response to a question about why OpenAI agreed to partner with the government when its rival would not. "I think Anthropic may have wanted more operational control than we did."
OpenAI's agreement with the federal government comes on the heels of Anthropic being blacklisted and declared a supply chain risk after refusing to comply with the military's terms of use for the company's frontier model, Claude.
Anthropic, in a Friday statement, said that "no amount of intimidation or punishment from the Department of Defense will change our position on mass domestic surveillance or fully autonomous weapons" and vowed to "challenge any supply chain risk designation in court."
OpenAI, in its Saturday post, argued that Anthropic should not be designated as a supply chain risk and said it had made its position "clear to the government." Its agreement with the Pentagon stemmed, in part, from a desire to "de-escalate things between DoW and the US AI labs."
"A good future is going to require real and deep collaboration between the government and the AI labs," OpenAI's post reads. "As part of our deal here, we asked that the same terms be made available to all AI labs, and specifically that the government would try to resolve things with Anthropic; the current state is a very bad way to kick off this next phase of collaboration between the government and AI labs."
Representatives for OpenAI and Anthropic did not immediately respond to requests for comment from Business Insider. It was not immediately clear whether Anthropic, or any other leading AI company, had been offered similar contractual terms to those that OpenAI said it had agreed to.
OpenAI said that, as part of its deal with the Department of Defense, it will maintain "full control" over the safety stack it deploys and keep robust "safety guardrails" in place to prevent misuse. Should the government violate the terms of the agreement, OpenAI said it "could" terminate the contract.
"We don't expect that to happen," OpenAI said in its post.
Altman, in his Ask Me Anything posts, wrote that OpenAI would not agree to allow the government to use its technology for mass domestic surveillance "because it violates the constitution."
He said he is prepared for a potential dispute over the legality of specific government requests in the future, but added that if the Constitution were amended to make such surveillance legal, "Maybe I would quit my job."
"I very deeply believe in the democratic process, and that our elected leaders have the power, and that we all have to uphold the constitution," Altman wrote. "I am terrified of a world where AI companies act like they have more power than the government. I would also be terrified of a world where our government decided mass domestic surveillance was ok. I don't know how I'd come to work every day if that were the state of the country/Constitution."
The dispute between the government and the AI giants has drawn widespread criticism, with critics raising concerns about the ethical implications of the Department of Defense's use of AI and OpenAI's agreement to give the government access to its technology.
OpenAI on Saturday said it believes AI will "introduce new risks in the world" and that allowing the government to use its models will give those defending national security "the best tools" to address those risks.
Business Insider previously reported that Anthropic's model, Claude, shot to the top of the app store on Saturday. Many people on social media, including celebrities like Katy Perry, have publicly posted about canceling their ChatGPT subscriptions in the wake of OpenAI's agreement with the government.