
Anthropic’s forced removal from the U.S. government is threatening critical AI nuclear safety research 

The sudden wind-down of Anthropic technology within the U.S. government is raising concerns about whether federal officials, without access to Claude, might fall behind in the quest to guard against the threat of AI-generated or AI-assisted nuclear and chemical weapons. 

Though the wind-down has been messy—and Claude remains in use in some parts of the government—the Trump administration’s anti-Anthropic posture could have a chilling effect on collaborations between AI companies and federal agencies, including partnerships focused on critical national security questions related to these kinds of emerging threats, several sources tell Fast Company. The worry is that severing ties with the company could both limit government researchers’ understanding of how bad actors could one day use AI to develop new types of nuclear and biological weapons, and hold back scientific progress more broadly.

Since at least February 2024, Anthropic has participated in a formal partnership with the National Nuclear Security Administration, the federal agency charged with monitoring the country’s nuclear stockpile. The point of that work, the company has previously said, is to “evaluate our AI models for potential nuclear and radiological risks.” The concern is that developing nuclear weapons requires specialized knowledge, and that AI, as it continues to advance, could become adept at developing this expertise on its own. Eventually, a large language model might be able to help someone figure out how to design an incredibly dangerous weapon—or even come up with a novel one itself.

Now, in the wake of President Donald Trump’s Truth Social post demanding that federal workers stop using Anthropic tech, it’s not clear what might happen to Anthropic’s efforts to guard against these future threats. Some federal agencies appear to still be weighing how to approach the Claude use cases they already have, while others are cutting off access to the tool entirely. 

“As directed by President Trump, the Department of Energy is reviewing all existing contracts and uses of Anthropic technology,” a spokesperson for the NNSA tells Fast Company. “The Department remains firmly committed to ensuring that the technology we employ serves the public interest, protects America’s energy and national security, and advances our mission.” Anthropic declined to comment. 

Safety concerns at the Energy Department

For the past few years, Anthropic has been collaborating with or providing technology to the myriad agencies and national labs that fall under the Department of Energy. For instance, the Lawrence Livermore National Laboratory began using Claude for Enterprise in 2025 and, at the time, made the tool available to about 10,000 scientists. The lab said last year that the technology was supposed to help accelerate its research efforts “in the domains of nuclear deterrence, energy security, materials science,” and other areas. 

Anthropic has also worked directly with the NNSA on evaluating potential AI-related nuclear safety risks. For example, the agency has provided Anthropic with “high-level” metrics and guidance that have helped the company analyze the threat posed by its own technology. The two have also collaborated on technology that can scan and categorize AI chatbot conversations, searching for signs that someone might be using an LLM to discuss building a nuclear weapon.
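Neither Anthropic nor the NNSA has published implementation details of that scanning system. Purely to illustrate the general shape of such a screening pass, here is a minimal, hypothetical sketch in Python; the indicator terms, weights, and flagging threshold are all invented for this example, and a real system would rely on a trained classifier rather than a keyword list:

```python
# Minimal, hypothetical sketch of a conversation-scanning pass.
# This is NOT Anthropic's or the NNSA's actual system: the indicator
# terms, weights, and threshold below are invented for illustration,
# and a production system would use a trained classifier, not keywords.

from dataclasses import dataclass, field

# Hypothetical weighted indicator phrases for a first-pass screen.
RISK_INDICATORS = {
    "uranium enrichment": 0.4,
    "fissile material": 0.5,
    "implosion lens": 0.6,
    "centrifuge cascade": 0.6,
}

FLAG_THRESHOLD = 0.8  # hypothetical cutoff for routing to human review


@dataclass
class ScanResult:
    score: float
    matched_terms: list[str] = field(default_factory=list)
    flagged: bool = False


def scan_conversation(turns: list[str]) -> ScanResult:
    """Score a chat transcript; flag it if weighted indicators accumulate."""
    text = " ".join(turns).lower()
    matched = [term for term in RISK_INDICATORS if term in text]
    score = min(1.0, sum(RISK_INDICATORS[t] for t in matched))
    return ScanResult(score=score, matched_terms=matched,
                      flagged=score >= FLAG_THRESHOLD)


if __name__ == "__main__":
    benign = ["How do nuclear power plants generate electricity?"]
    print(scan_conversation(benign))  # low score, not flagged
```

Even this toy version surfaces the core design problem: scoring has to separate benign discussion of nuclear energy from genuinely concerning requests, which is one reason a deployment would likely route flagged conversations to human review rather than acting on scores automatically.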

A 2025 inventory for the Department of Energy disclosed that the agency was piloting Claude at the Pacific Northwest National Laboratory, the Lawrence Livermore National Laboratory, and the Idaho National Laboratory. Anthropic is also one of several partners in the agency’s Genesis Mission, which aims to accelerate scientific development by leveraging artificial intelligence.

Those collaborations may now be in jeopardy. Claude is “everywhere” in the Energy Department’s labs, including at the NNSA, according to Ann Dunkin, the department’s former chief information officer. If labs, or the NNSA, “are working on projects using Anthropic as their AI tool, they are going to have to, at the very least, stop work and start with a new vendor,” Dunkin tells Fast Company. “This will cost time and money. More than likely, there will be [new] work as they will have to train a new model.” To conduct simulations that involve studying various AI risks, it’s important to understand how all AI models might behave, she adds.

When it comes to nuclear weapons, the worry is that AI could be used to gather enough information to build such a weapon—or could be jailbroken into providing that information directly, Dunkin says.

A former Department of Homeland Security official who focused on AI safety issues echoes those concerns. Anthropic, the person tells Fast Company, was a leader in evaluating how AI models, including its own, might create serious safety risks related to chemical and nuclear weapons. Pressure to remove Anthropic risks wasting people’s time and may not succeed anyway, they say. It also puts federal officials behind in trying to understand the full risks of artificial intelligence, and in fully benefiting from its efficiencies, given that Anthropic is still the leading provider of some AI capabilities. “There’s no ban on Claude for the bad guys,” the former official adds.

Overall, the government’s sudden turn against Anthropic risks scaring off other companies that might want to work on serious issues, including those related to nuclear security. “Anthropic learned that once you’re serving the U.S. government, you might not have the right to say no, at least not without retaliation. Naturally that will deter others from working for the government, especially on sensitive topics,” says Steven Adler, an ex-OpenAI employee who focuses on AI safety issues.

“There’s a bitter irony here: The administration is simultaneously demanding AI companies help with national security and making it harder for responsible actors to do exactly that,” Alex Bores, who is running for a House seat in New York on a platform focused on AI regulation, tells Fast Company in a statement. “AI companies working with NNSA to evaluate risk isn’t a liability—it’s a model. Punishing it sends exactly the wrong signal at exactly the wrong time.”

An incomplete exit plan

It’s not immediately clear how federal agencies are supposed to approach Anthropic technology right now. Trump used Truth Social to demand that federal agencies “immediately cease all use of Anthropic’s technology,” though such instructions are ordinarily communicated through the federal chief information officer. The Trump administration is reportedly working on an executive order related to Anthropic, while Anthropic has filed a lawsuit challenging its designation as a “supply chain risk.”

The General Services Administration, according to one post, seems to be interpreting Trump’s demand as a national security directive. The agency’s GitHub repository shows that Claude was recently removed from its interagency AI resource, and a person within the agency confirmed that employees could no longer access Claude internally. Still, another person at the agency tells Fast Company that no official instructions about how to enforce the removal of Claude from federal use cases have actually been sent to employees.

One major challenge with stripping Anthropic’s technology from the federal government is that the technology can be delivered in many ways. In Claude’s case, this includes products sold by Anthropic directly, but also integrations with popular—and controversial—government contractors like Palantir and Amazon Web Services. 

Notably, Claude for Government is still listed as one of the features offered within the Palantir Federal Cloud Service, and several agencies have authorized the use of this package, including Brookhaven National Laboratory and the Environmental Management Consolidated Business Center, as well as the State Department and the Treasury. The listing describes Claude as “purpose built” to meet high government security requirements. Palantir also has a long-standing relationship with the NNSA that predates LLMs.

The NNSA spokesperson declined to comment on how the agency was approaching the use of Claude on classified systems. At the time of this writing, Palantir had not responded to a request for comment.

On the military side of government, much has been made of the fact that only Claude, and not systems like ChatGPT, has been cleared to operate in classified environments. The Pentagon has since sent a memo to employees that prioritizes removing Claude from any systems that involve nuclear security. Classified environments are also important to civilian agencies: though Treasury Secretary Scott Bessent has said his agency will be “terminating” its use of Anthropic products and Claude, there was at least some grumbling at a recent meeting focused on AI use within the agency that other AI tools weren’t similarly available for classified information.

