
Harnessing AI to Move from Threat to Threat Reduction

AI is lowering barriers to CBRN threats—but with coordinated public-private partnerships, the same technologies can become tools for threat reduction.

Editor’s Note: The Red Cell series is published in collaboration with the Stimson Center. Drawing upon the legacy of the CIA’s Red Cell—established following the September 11 attacks to avoid similar analytic failures in the future—the project works to challenge assumptions, misperceptions, and groupthink with a view to encouraging alternative approaches to America’s foreign and national security policy challenges. For more information about the Stimson Center’s Red Cell Project, see here.

Red Cell

Artificial intelligence (AI) is redefining the boundaries of global security. As advanced AI models approach expert-level capability, there is a growing recognition that they could be misused to design, develop, and deploy chemical, biological, radiological, and nuclear (CBRN) weapons. Yet the AI sector has also become a new, unexpected, but essential partner in mitigating these weapons of mass destruction (WMD) risks, entering a space that has historically been dominated by governments, national laboratories, and nongovernmental and international organizations.

Together, these actors can not only gain a better understanding of AI-related risks but also explore how to harness AI capabilities to enhance the international community’s capacity to prevent, detect, and respond to CBRN threats. While the field of AI mitigation for CBRN threats is in its early stages, with the right public-private partnerships, AI could become a powerful tool for modern threat reduction.

AI and CBRN Risk Landscape

Advanced AI models operating at expert-level capability reduce the technical, financial, and logistical barriers that have historically limited which malicious actors can pursue CBRN weapons. By potentially reducing the expertise, time, materials, and coordination required to plan or support illicit CBRN activities, advanced AI models could broaden the pool of threat actors, shorten response timelines, and increase uncertainty about whether longstanding technical and logistical barriers will hold. While chemical, biological, and nuclear weapons existed before or at the dawn of the computer era, advanced AI systems can expand their accessibility, availability, and lethality while reducing accountability and detectability.

Large language models (LLMs), for example, can unlock specialized datasets at scale, which can also introduce new pathways for weapons development and acquisition. In one experiment, well-meaning researchers repurposed MegaSyn, a generative AI tool for pharmaceutical research and development (R&D), and the system identified 40,000 toxic molecules, including nerve agents, in under six hours. These models are also reducing the time and coordination needed for planning attacks. “LLMs,” as noted by a recent unpublished Europol report on AI and CBRN threats, “increased the ability of non-experts to brainstorm, refine and optimise attack scenarios.”

LLMs are also at the center of a rapidly expanding synthetic media ecosystem capable of deception and manipulation, creating highly convincing fake text, audio, video, and biometric data that could be exploited to falsify nuclear safeguards data, supply chain records, testing data, or certification documents. The complexity and opacity of AI systems also make their influence hard to trace and their manipulation difficult to detect, creating opportunities for malicious actors to disrupt CBRN supply chains or infrastructure through AI-enhanced cyberattacks, such as data poisoning, inserting unauthorized software, or altering training datasets to skew outputs.

Generative AI is also giving rise to a “Fraud-as-a-Service” industry that is democratizing sophisticated deception, offering platforms with instructions and ready-made tools that dramatically amplify the risks of tampering, data poisoning, and manipulation campaigns. This shift from organized groups to amateurs is dramatically increasing both the scale and sophistication of fraud attacks, posing a growing threat to CBRN sectors reliant on digital identity verification and Know Your Customer (KYC) compliance.

The cases above highlight how the same AI systems that drive scientific innovation can, when misused or insufficiently governed, lower the barriers to accessing CBRN materials and know-how at unprecedented speed and scale. With less than three percent of AI research directed towards AI safety, a lack of funding remains the biggest challenge to mitigating AI-related CBRN threats, leaving both governments and developers without sufficient mechanisms to anticipate or prevent misuse. 

The AI Industry and Frontier Mitigations

Big Tech firms have recently begun building frameworks to mitigate the misuse of their most capable systems, so-called “frontier” models, which are capable of advanced reasoning and problem-solving. These frameworks, such as OpenAI’s Preparedness Framework, Microsoft’s Frontier Governance Framework, and Anthropic’s Responsible Scaling Policy, are used to assess and identify thresholds of risk and trigger internal mechanisms before models are released, with CBRN-related misuse ranked as a top concern. Many companies are also running red-teaming exercises and hackathons focused on CBRN risk scenarios, stress-testing their models to identify vulnerabilities and strengthen mitigation measures. These frontier model frameworks represent the first serious attempt by the private sector to align AI development with global CBRN threat-reduction objectives.

In recognition that frontier AI and CBRN security represent a shared challenge, Anthropic, Microsoft, and OpenAI, along with Amazon, Google, and Meta, launched the Frontier Model Forum (FMF) in July 2023. Established as an industry non-profit, the FMF coordinates research to develop a collective understanding of AI-threat models and mitigation strategies for CBRN-related misuse and advanced cyber threats. With plans to expand its membership in the future, the FMF’s impact could further grow by including AI developers outside the United States and a broad mix of academic experts and public sector partners. Sharing research on frontier mitigations more widely and reflecting the range of approaches to CBRN risk reduction globally will support a more unified effort to align innovation with international security. 

At the same time, a parallel, limited but growing body of research is examining the reliability of current LLM mitigations for CBRN-related threats. Europol’s red-team exercise in July 2025 revealed significant weaknesses, noting the ease with which non-experts bypassed LLM guardrails with simple prompt-engineering techniques while improving their ability to brainstorm credible ideas for attack planning. Similarly, a recent comparative study of 10 leading commercial LLMs found an 87-percentage-point gap in performance between the most secure (Claude-Opus-4) and least secure (Mistral-Small-Latest) models, indicating that while effective mitigation measures are already technically achievable, their implementation varies widely across the AI industry. Some of these companies have since released updated models, but this early research underscores the urgency of collaboration between Big Tech, AI developers, and the wider threat reduction community, particularly in leveraging AI as a tool for strengthening CBRN threat reduction efforts.

Leveraging AI as a Tool for Threat Reduction

AI’s capacity to analyze complex data streams in real time can improve detection and early-warning systems. For the international law enforcement community, Interpol’s BioTracker combines machine learning with expert insight to track global infectious disease threats, improving investigative and outbreak response capabilities. Similar AI-powered surveillance tools could help other organizations, such as the World Organization for Animal Health (WOAH), the Food and Agriculture Organization (FAO), the World Health Organization (WHO), and national health authorities, detect and respond to irregularities in human, animal, and agricultural data before they escalate into crises. AI-driven detection could also aid the fight against illegal trafficking of CBRN materials by spotting irregular movements in trade, transport, and customs data.

AI is being used by the International Atomic Energy Agency (IAEA) in support of its verification mandate, specifically AI models that help organize open-source information, identify changes in satellite images, and review surveillance footage to monitor movements of spent nuclear fuel. These tools let inspectors focus on interpreting and resolving ambiguities, enabling the Agency to investigate and respond more swiftly to signals of noncompliance. Successful use cases such as these could also help modernize the Biological and Toxin Weapons Convention (BTWC), which, unlike its nuclear and chemical counterparts, still lacks a formal verification regime. With a new BTWC working group launched in 2022 to explore compliance options for the first time in two decades, AI-enabled monitoring, analysis, and data-sharing tools could form the backbone of next-generation verification and confidence-building measures.

In the chemical field, the US Army’s Aerosol Vapor and Chemical Agent Detector (AVCAD) utilizes AI to autonomously identify chemical agents through mass spectrometry, thereby enhancing readiness and defense against exposure in the field. AI models trained on spectroscopy data can also support chemical forensics, helping determine the origin or modifications of chemical substances. These AI-enhanced tools benefit both national authorities and international inspectors who are dispatched to investigate allegations of chemical or biological weapons attacks. 

LLMs can offer practical benefits for multilateral institutions, from faster, cheaper translation to greater linguistic accessibility, both of which could help reduce substantial translation costs. Early adopters such as the Internet Corporation for Assigned Names and Numbers (ICANN), which is using AI-driven translation tools to expand beyond its six official languages, could help inform other organizations in leveraging LLMs for translation, as well as technologies that enhance the experience (such as Translator Glasses).

AI could also be used to enhance emergency preparedness at organizations operating in highly volatile environments by transforming static disaster response plans into dynamic, real-time decision-support tools. Training simulations can become interactive dashboards during real incidents. Routing models can also optimize evacuation plans and adjust to changing supply, while predictive models can guide how first responders deploy protective gear or decontamination assets. When paired with cross-sector cooperation, AI has the potential to become a powerful enabler of modern CBRN threat reduction.

The Expanding Threat Reduction Ecosystem

In the age of advanced AI systems, CBRN threat reduction requires cooperation across sectors that often speak different technical and policy languages. In nuclear nonproliferation, the term “safeguards,” for example, refers to the IAEA’s technical measures to verify that a country’s nuclear activities remain for peaceful purposes. In the AI world, “safeguards” refers to software guardrails designed to prevent harmful model outputs, and in standards bodies, such as the US National Institute of Standards and Technology (NIST), the term is more closely associated with security controls and countermeasures. Even basic terms like “threat” versus “risk” carry different meanings for scientists, policymakers, international inspectors, and engineers. Accordingly, building shared terminology is foundational for AI to be used for robust CBRN threat reduction. 

The International Network of AI Safety Institutes could also help connect technical and policy communities. Launched at the AI Safety Summit in Seoul, South Korea, in May 2024, these state-backed centers primarily focus on coordinating research efforts, sharing evaluation methodologies, and developing common approaches to assessing AI risk. Although many of these institutes have narrowed their broader mandates on AI safety and social impacts to focus on security priorities, that shift may create space for practical progress on security, with CBRN threat reduction emerging as one of the few areas where multilateral action on AI may be possible.

At the forefront of threat reduction is the G7-led Global Partnership Against the Spread of Weapons and Materials of Mass Destruction (GP), the world’s largest and longest-standing WMD threat-reduction mechanism. With 31 members, the GP provides funding pathways, implementing partners, and established relationships with international organizations and civil society. To date, the GP has delivered more than $26 billion in programming, supporting, inter alia, chemical weapons destruction, nuclear and radiological security, biosafety and biosecurity, export controls, and capacity-building projects globally. At the GP’s plenary meeting in November 2025, members endorsed a new threat reduction strategy on WMD-relevant technologies, committing to identify and deliver programmatic responses that either leverage those technologies to support WMD nonproliferation and counterterrorism efforts or counter the threats they pose within the GP’s remit.

As frontier model developers refine their mitigations, aligning their work with the GP is essential. The GP offers legitimacy, field-level insights, and mechanisms to operationalize mitigation strategies; the AI sector, in turn, contributes technical expertise and the agility to develop tools quickly. Together with scientists, engineers, and cybersecurity experts, this expanded network forms the foundation for next-generation CBRN threat reduction.

A Turning Point for CBRN Threat Reduction

AI is reshaping both the CBRN threat landscape and the community capable of countering it, bringing the technology sector into a space long dominated by governments and multilateral bodies. When AI companies, academia, governments, and international organizations work together to share research on frontier mitigations, align innovation with global security needs, and integrate technology developers into mechanisms such as the Global Partnership, they can better anticipate and reduce AI-enabled risks. Done right, this cooperation can harness AI to strengthen every layer of CBRN threat reduction, from detection and attribution to verification, cyber defense, and field-level response. Backed by sustained investment and collaborative partnerships, AI will not accelerate CBRN threats. It will help stop them.

About the Author: Cindy Vestergaard

Cindy Vestergaard is the project lead for the Converging Technologies and Global Security program at the Stimson Center. She previously directed the Nuclear Safeguards program (2016–2022) and the Blockchain in Practice program (2018–2022) and was a nonresident fellow for both programs in 2023 when she served as vice president of Special Projects and External Relations at DataTrails. Before Stimson, Dr. Vestergaard was a senior researcher at the Danish Institute for International Studies (DIIS) in Copenhagen, Denmark. Prior to DIIS, she worked on nonproliferation, arms control, and disarmament policy and programming at Canada’s foreign ministry. She has been an external lecturer at the University of Copenhagen, a regular contributor to media outlets, and presents nationally and internationally on global security.

Image: ART STOCK CREATIVE/shutterstock

The post Harnessing AI to Move from Threat to Threat Reduction appeared first on The National Interest.
