Hybrid AI and Human Red Teams: Critical to Preventing Policies from Exploitation by Adversaries
The Red Cell series is published in collaboration with the Stimson Center. Drawing upon the legacy of the CIA’s Red Cell—established following the September 11 attacks to avoid similar analytic failures in the future—the project works to challenge assumptions, misperceptions, and groupthink with a view to encouraging alternative approaches to America’s foreign and national security policy challenges. For more information about the Stimson Center’s Red Cell Project, see here.
Policymakers widely believe that geopolitical considerations alone are sufficient when developing technology policies and export controls. This view overlooks three crucial vulnerabilities that pose substantial risks to U.S. national security.
First, the accelerating pace of technological advancement has outpaced traditional geopolitical analysis frameworks. Just as the advent of steam engines created opportunities for new types of crime, like train robberies, today’s technology policies, intended to protect U.S. technologies from exploitation, can be weaponized by adversaries—creating vulnerabilities more severe than the original threats they aimed to address.
Second, rigorous analysis of adversaries’ capabilities during the formation of tech policies—once a cornerstone of national security decisionmaking—has eroded. During the Cold War, the National Security Council (NSC) excelled at anticipating how adversaries might exploit U.S. tech policies, but recent decisions suggest this critical perspective is no longer a high priority, leaving the United States increasingly vulnerable to unintended consequences.
Third, modern AI systems offer an unprecedented opportunity to enhance analysis of adversaries’ capabilities. These systems can rapidly generate multiple scenarios for how adversaries might weaponize proposed technology controls—similar to how financial institutions combine AI fraud detection systems with human analysis to identify and prevent complex financial crimes before they cause considerable damage. This suggests that policy formation should incorporate AI-enabled “red teaming” alongside human expertise to identify potential exploitation before policies are implemented. Traditionally, red teams in U.S. national security have provided alternative analysis and devil’s advocacy by deliberately challenging consensus views and highlighting potential blind spots in intelligence assessments.
Why such hybrid AI and human red teams are needed should now be clear. Nations of concern have already found workarounds to well-intended and carefully constructed U.S. export controls and related technology policies. Several prominent examples include:
- Iran’s counter-exploitation of dual-use technology controls (2015-2020): After U.S. export controls were imposed on dual-use technologies, Iran developed a sophisticated system of front companies and third-country intermediaries to circumvent these restrictions. Iran then used these same networks to help other nations facing similar restrictions, effectively creating a parallel market for controlled technologies. This network became so sophisticated that it began to undermine the effectiveness of U.S. export controls more broadly, as other nations learned to exploit the same pathways and mechanisms that Iran had developed.
- Russia’s manipulation of International Traffic in Arms Regulations (ITAR) (2014-2018): Following the implementation of U.S. ITAR restrictions on space technologies, Russia exploited these controls by positioning itself as an alternative supplier of rocket engines to other nations. Russia specifically used the strict nature of U.S. ITAR controls to market its RD-180 rocket engines to European and Asian countries, arguing that Russia’s technology had fewer restrictions and compliance burdens. This effectively reduced U.S. influence in the global space industry while increasing Russia’s market share at that time.
- China’s response to semiconductor controls (2022-2023): After the United States imposed strict controls on advanced semiconductor exports, China responded by weaponizing its dominance in critical minerals that are crucial for semiconductor manufacturing. China restricted exports of gallium and germanium, citing “national security” concerns. Beijing effectively turned the tables: it used the same regulatory framework the United States had established to protect U.S. technologies to create supply chain disruptions for Western manufacturers. This demonstrated how export controls can be mirrored and repurposed against their originators.
Strengthening export controls through AI-human red team analysis
New approaches are needed to complement the Committee on Foreign Investment in the United States’ review process and other current criteria to determine how best to protect U.S. technologies, keep U.S. policy ahead of adversaries’ rapidly evolving capabilities, and employ AI to strengthen the development of U.S. tech policies. A more effective approach to evaluating tech policies and possible export controls would involve hybrid “red teaming,” a new process combining human and AI evaluations. Specifically, the new Trump administration could require that before any U.S. tech policy is issued, a combined human and AI evaluation is performed to assess how state and nonstate adversaries could abuse the draft policy. Such an approach would:
- Create a standing technology policy red team combining human expertise with AI capabilities.
- Restore adversarial analysis in NSC deliberations linked to considerations of how proposed export controls could be weaponized against U.S. interests.
- Mandate AI-enhanced red team analysis before implementing any new tech policy.
In addition, the incorporation of AI-enhanced “red teaming” into U.S. technology policy formation would streamline the process. Instead of sequential reviews by different departments, AI-enhanced systems could help humans generate holistic analyses that consider multiple perspectives simultaneously. A hybrid human and AI team could simultaneously analyze multiple scenarios and provide comprehensive insights to all relevant stakeholders. The reduction in bureaucratic back-and-forth among agencies would be substantial.
For example: When evaluating export controls on quantum computing technologies, the AI system could simultaneously assess supply chain vulnerabilities, potential adversarial responses, and alternative sourcing options, providing a unified analysis that traditionally would require multiple rounds of interagency consultation. Human “red teamers” involved in the process could work to ensure that AI did not miss any new, important considerations given the amorphous nature of today’s world.
In addition, a focused, hybrid human and AI team could continuously monitor global supply chain dynamics, identifying emerging vulnerabilities before they become critical issues. For example, if an adversary began establishing front companies to circumvent export controls, the AI system could help humans detect unusual patterns in corporate registrations, international transactions, and shipping routes that might escape human attention. This proactive identification of supply chain risks would allow policymakers to address vulnerabilities before they could be exploited, rather than reacting to problems after they occur.
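The kind of screening described above can be illustrated with a minimal sketch. Everything here is hypothetical and invented for illustration—the thresholds, the placeholder hub countries, and the data fields are assumptions, not a real screening rule; an operational system would draw on corporate registries, trade filings, and far richer models, with human analysts reviewing every flag.

```python
from dataclasses import dataclass

@dataclass
class Shipment:
    company: str           # exporter of record
    company_age_days: int  # days since corporate registration
    destination: str       # declared destination country
    controlled_item: bool  # whether the goods are subject to export controls
    value_usd: float       # declared transaction value

# Hypothetical screening thresholds -- illustrative only.
MAX_NEW_COMPANY_AGE = 180          # "newly registered" cutoff, in days
HIGH_VALUE_THRESHOLD = 1_000_000   # flag large controlled-goods transactions
TRANSSHIPMENT_HUBS = {"Country A", "Country B"}  # placeholder intermediaries

def flag_suspicious(shipments):
    """Apply a simple front-company heuristic: flag any newly registered
    exporter moving high-value controlled goods through a known
    transshipment hub. Flags are leads for human review, not conclusions."""
    return [
        s for s in shipments
        if s.controlled_item
        and s.company_age_days < MAX_NEW_COMPANY_AGE
        and s.value_usd >= HIGH_VALUE_THRESHOLD
        and s.destination in TRANSSHIPMENT_HUBS
    ]
```

In practice, a rule this simple would only be a first filter; the point is that machine screening can surface candidate patterns across millions of records, leaving human red teamers to judge which flags reflect genuine circumvention.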
Implementing This Approach
A standing technology policy red team combining human expertise with AI capabilities would modernize U.S. technology policy evaluation and help the United States avoid exploitation of its new technologies. This hybrid approach would pair AI’s data processing and pattern recognition abilities with human strategic expertise to analyze potential policy impacts. For example, when evaluating semiconductor export controls, AI could rapidly assess global supply chains and simulate adversarial responses, while human experts could evaluate diplomatic implications. This integration with NSC deliberations would ensure comprehensive analysis of how proposed controls might be weaponized against U.S. interests.
The resulting mandatory AI-enhanced red team analysis would provide systematic protection against the potential unintended consequences of new technologies and export controls, including exploitation by adversaries. Before implementing controls on technologies like quantum computing, AI simulations could identify potential loopholes while human analysts assess real-world implications. This approach offers key benefits: better prediction of potential misuse of U.S. technologies, stronger protection of U.S. tech advantages, and more adaptive policymaking. The AI-human partnership would create a more sophisticated approach to technology policy development, ultimately strengthening U.S. national security while reducing the risk of policies backfiring.
The Trump administration has an opportunity to institutionalize this approach. By combining human expertise with AI-powered analysis, policymakers can identify potential vulnerabilities before policies are implemented rather than after they have been weaponized by adversaries. This approach would be particularly valuable for policies involving:
- Export controls on advanced semiconductors: The implementation of hybrid human and AI “red teaming” for semiconductor export controls would enable rapid assessment of global supply chains and potential circumvention tactics by adversaries. This was particularly evident in early December 2024, when China responded to U.S. semiconductor controls by weaponizing its dominance in critical minerals, specifically restricting gallium and germanium exports. The combination of AI analysis and human expertise could have predicted and prepared countermeasures for such retaliatory actions, while also identifying alternative supply chain solutions before the controls were implemented.
- AI development and deployment restrictions: A hybrid human and AI team could analyze multiple scenarios simultaneously to identify potential exploitation of AI development restrictions, while human experts evaluate the diplomatic and practical implications of these controls. The hybrid approach would help prevent a repeat of situations like Iran’s use of sophisticated networks of front companies to circumvent U.S. export controls on dual-use technologies in 2015-2020. This comprehensive analysis would enable policymakers to craft more robust AI deployment restrictions that anticipate and prevent exploitation methods before they emerge.
- Data localization requirements: A standing technology policy red team could continuously monitor and assess the effectiveness of data localization requirements across different jurisdictions and scenarios. The AI component could rapidly process global compliance patterns and identify potential vulnerabilities, while human analysts evaluate the geopolitical implications and practical feasibility of implementation. This approach would help prevent situations where adversaries could exploit gaps in data localization policies to gain unauthorized access to sensitive information.
- Technology transfer limitations: Drawing lessons from Russia’s manipulation of ITAR regulations, hybrid human and AI “red teaming” could identify potential alternative markets and suppliers that might emerge in response to technology transfer limitations. The system could simultaneously analyze supply chain vulnerabilities, potential adversarial responses, and alternative sourcing options, providing a unified analysis that traditionally would require multiple rounds of interagency consultation. Human “red teamers” would ensure that emerging concerns and changing global dynamics are factored into the analysis, creating more effective and adaptable technology transfer policies.
- Critical infrastructure protection measures: A hybrid human and AI team could help identify interconnected vulnerabilities in critical infrastructure systems while simulating various attack scenarios in which adversaries could exploit these weaknesses. The hybrid human-AI team could continuously monitor global threats to critical infrastructure, detecting unusual patterns in corporate registrations, international transactions, and potential infiltration attempts that might escape human attention alone. As noted, this proactive identification of risks would allow policymakers to implement protective measures to U.S. technology policies before vulnerabilities can be exploited, rather than reacting to problems after they occur.
Cumulatively, implementing such solutions might be as straightforward as requiring every proposed U.S. technology policy to undergo a combined human-AI adversarial analysis focused on potential exploitation by state and nonstate actors. This would reduce the need for “band-aid” solutions that could create more significant long-term vulnerabilities than the problems they attempt to solve.
In sum, the Trump administration has an opportunity to create more effective technology policies and export controls while streamlining their development through a combination of human and AI “red teaming.” Given past misuses and abuses of well-intended U.S. export controls, the time is ripe for such improvements that will transform how the United States anticipates and responds to potential threats.
About the author: David Bray
David Bray is Chair of the Accelerator at the Alfred Lee Loomis Council and a Distinguished Fellow at the Stimson Center.
Image: humphery / Shutterstock.com
The post Hybrid AI and Human Red Teams: Critical to Preventing Policies from Exploitation by Adversaries appeared first on The National Interest.