The Pentagon-Anthropic feud is quietly obscuring the real fight over military AI

The controversy over the Pentagon’s use of Anthropic’s models has become a flashpoint in the national debate over military artificial intelligence, and sparked outrage from Washington to Silicon Valley. The Pentagon wanted to buy Anthropic’s AI models without any restrictions on their use; even as it scrapped its flagship safety rule, the company wouldn’t budge on two particular red lines. And so, just after a Friday evening deadline, the Secretary of War killed the company’s $200 million Pentagon contract and declared the firm was not just “woke” but a “supply chain risk,” banning it from working with defense agencies. Meanwhile, OpenAI CEO Sam Altman had been negotiating his own Pentagon deal, with some, but not all, of the contractual guardrails that Anthropic had wanted. (Anthropic’s CEO would later tell employees that OpenAI’s messaging around the deal was “mendacious.”) In any case, removing Anthropic won’t be easy: On Saturday the company said it would sue the government over the ban, just as its AI models—already deeply embedded in the Pentagon’s systems—were being used by the US military to carry out strikes on Iran.

On an ethical and legal level, the head-spinning spat has centered largely on one question: Should AI be deployed in military settings autonomously, without a human “in the loop”?

But that framing narrows the debate at a critical moment. The focus on human involvement in autonomous systems—while a critical issue in its own right—has drowned out broader questions. One is whether advanced AI should be embedded in military decision-making at all. Another is who should control its deployment, how oversight should be structured, and what constitutional processes are being bypassed as the Pentagon pushes forward with AI integration.

The controversy is concentrating attention on one company and one question while minimizing a larger debate about responsibility and accountability, and all but shutting out the rest of the country—including voters and lawmakers. The resulting distortions of our information environment—agenda narrowing, issue substitution, and complexity reduction—are hallmarks of narrative warfare, not of healthy, organic debate. But this is a debate we need to have.

To do that, it helps to see what this information distortion looks like.

Take the phrase “human in the loop,” which has become the shorthand for safety in many operational settings. Department of Defense Directive 3000.09 requires “appropriate levels of human judgment” over autonomous weapon systems, while international humanitarian law scholars often frame the issue around “meaningful human control.”

This framing has political appeal because it offers a familiar safeguard and implies some degree of continuity with existing military doctrine. It also conveniently avoids confronting more disruptive questions.

Yet decades of research complicate the reassurance supposedly provided by that phrase. Studies of automation bias show that humans supervising automated systems frequently defer to machine outputs, even when they’re wrong. One landmark study found that, when working with seemingly highly reliable automation systems, operators detected only about 30 percent of system failures. In another series of studies, researchers documented that roughly 65 percent of participants followed incorrect automated directives. A similar study found that 39 of 40 participants followed faulty automated recommendations, despite the ability to verify them independently.

The human-in-the-loop debate, therefore, fails to resolve deeper questions about delegation of authority, acceleration of decision cycles, and institutional accountability. By centering the discussion on whether a human remains present, the current controversy sidelines the question of whether advanced AI systems should structure military decision pipelines in the first place.

The integration of AI into military systems also alters the speed of decision-making. Research in strategic stability and crisis management has long emphasized the stabilizing role of deliberation and careful, slow review. Automation often encourages the opposite.

A war-game simulation recently described by New Scientist found that large language models chose nuclear options (literally, nuclear strikes) in approximately 95 percent of test runs when objectives were loosely constrained and the models were instructed to choose a decisive action that would secure battlefield victory. The finding illustrates how AI models operating with ambiguous goals can produce extreme outputs while still following directions.

Currently, the media and most of the public are focused on the question of whether humans will approve those outputs. Less attention is being paid to how AI integration changes the timing and framing of the options themselves. If AI systems generate rapid threat assessments and routinely recommend escalation at any cost, the menu of choices presented to decision-makers will be artificially narrowed before human review even begins. And even with a full range of options, existing research shows that human oversight doesn’t necessarily correct machine errors.

Again, by focusing so much of our attention on questions about fully autonomous AI systems in military settings, we are implicitly and uncritically accepting the legitimacy of human-in-the-loop systems. That acceptance risks overlooking extremely important questions about what role—if any—AI should play in lethal decision-making at all. This dynamic mirrors a common feature of narrative warfare called issue substitution, which describes the process of substituting foundational issues or questions with narrower, more manageable proxies.

The Constitution assigns Congress the power to declare war and regulate the armed forces. Yet the integration of frontier AI systems into military infrastructure is proceeding primarily through executive branch contracting and internal policy guidance.

Meanwhile, the Anthropic standoff has focused attention on the Secretary of Defense and a single company’s leadership, crowding out substantive public debate about congressional authorization, statutory guardrails, and procurement oversight specific to autonomous AI systems.

Congress has held hearings on AI and national security, but no comprehensive statutory framework governing autonomous lethal systems has been enacted. The National Defense Authorization Acts of recent years include AI funding and research directives, yet they do not establish detailed deployment constraints comparable to those governing, for instance, nuclear command authority.

These gaps have resulted in an alarming amount of power being handed over to the executive branch by default, effectively allowing it to unilaterally make sweeping decisions about the current and future trajectory of the US military and foreign policy, with limited oversight. While we’re focused on negotiations between the executive branch and a private company, issues such as congressional authorization, legislative design, and oversight—as well as the will of the public—are being pushed out of the frame entirely.

Another dimension receiving limited scrutiny concerns concentration of influence. A small number of AI companies—Anthropic, OpenAI, Google DeepMind, and others—are shaping the technical trajectory of systems that may be integrated into defense operations that will influence the direction of our military for years to come. The debate has thus far focused on whether Anthropic should comply with Pentagon requests. Left largely untouched is the question of whether any single private firm should have such leverage over the backbone of military AI capability, and whether a single unelected CEO or handful of CEOs should be given the power to shape current and future military strategy for an entire nation. (This is not to mention the political and financial influence these companies wield; before the Pentagon awarded a new contract to OpenAI, one cofounder had become one of President Trump’s top donors.)

In narrative warfare, the dynamics described above are categorized as agenda narrowing, which occurs when, out of many relevant questions or issues, only a small subset are given sustained attention or addressed at all.

While the Anthropic controversy has largely centered on battlefield and targeting scenarios, AI integration into defense systems also vastly expands surveillance capabilities, and the Pentagon’s recent demands suggest that it has plans to use AI for both foreign and domestic surveillance.

AI-driven pattern recognition, anomaly detection, and large-scale data analysis can dramatically increase the scope of surveillance. The Foreign Intelligence Surveillance Act (FISA) provides statutory guardrails for certain intelligence activities, yet AI-enhanced surveillance raises new questions about scale and inference capabilities.

Scholars of surveillance technology have noted that advanced analytics can infer sensitive attributes from non-sensitive data, effectively expanding surveillance beyond explicit collection boundaries. This means that even if the raw data collected for surveillance purposes remains the same, the meaning extracted from it could expand dramatically with the use of AI. This is critically important, as current laws primarily govern the collection of data, but not things like inference capabilities or analytic tactics such as network restructuring, predictive modeling, or behavioral clustering—all of which have the potential to expand exponentially with the integration of AI into surveillance systems.

Furthermore, since even those who develop AI models often don’t fully understand how they work, AI-enhanced surveillance could produce inferences that are unexplainable even when correct. In other words, an AI system could deliver accurate inferences without being able to explain how it arrived at them. That would make it effectively impossible to audit these systems for common problems like bias, hallucinations, or privacy violations.

Most importantly, given that AI is expected to completely transform surveillance capabilities beyond anything we can predict currently, there is an urgent need to formulate new laws and policies in order to ensure that our civil liberties are not sacrificed along the way. These safeguards will need to be developed and implemented on a continuing basis as the technology develops further and we learn more about its capabilities, but the process and framework for doing so should obviously be put in place before the AI is. I’m not hearing much, if any, discussion of that right now.

There’s a reason that politicians often like controversies: they simplify things. Information ecosystems also tend to prioritize identifiable, explainable conflict (e.g., Secretary of Defense vs. CEO) over messier, more complicated dilemmas. Media coverage reinforces that over-simplification because two-sided disputes are easier to narrate than undefined governance questions.

As a result of this complexity reduction, discourse tends to narrow around technical details, personality clashes, and narrative clichés (e.g., David and Goliath; good vs. evil). By comparison, broader questions—over congressional authorization, procurement transparency, war powers, surveillance expansion, and so on—receive little or no sustained public scrutiny.

Ultimately, this shapes public understanding in dramatic, yet often unseen ways. By artificially narrowing the boundaries of discussion, our discourse has gotten stuck on the question of whether humans should remain in the loop, rather than asking whether the loop itself is expanding beyond constitutional limits.

We have seen these dynamics emerge in our information environment over and over again during critical periods of time throughout modern history, particularly in areas related to technology, the military, and abuses of power.

For example, after the Edward Snowden revelations, public debate rapidly shifted from sweeping constitutional questions to the technical details of specific programs like Section 215 metadata collection, producing a discourse focused on teasing apart nuances like metadata vs. content instead of questioning executive surveillance authority and the existence of secret courts. (Of course, these bigger-picture discussions took place to varying extents, but that was in spite of, not because of, mainstream media coverage and public statements by politicians.) Similarly, after 9/11, debates over the Patriot Act frequently centered on specific surveillance tools, while broader issues—like the permanent expansion of executive emergency authority—received much less sustained public attention.

More recently, during discussions about whether TikTok should be banned in the U.S. as a national security risk, media coverage primarily emphasized the company’s data storage location, its lack of algorithmic transparency, and its corporate ownership structure. Those conversations obscured questions about the broader precedent of executive authority to ban communication platforms, what that implies for the First Amendment, and the consequences of allowing that type of national security determination to be made without judicial review.

All of these discussions are important, and all of these issues should be laid out on the table, but that’s not happening in the current debate over the use of AI in military decision-making. Instead, the terms of discussion are being set for us by the media, politicians, powerful corporations, and interest groups that stand to benefit immensely from the public being unaware that there is even a debate to be had about AI weapons systems that are not fully autonomous.

While the Anthropic controversy has illuminated a fracture line, it has also narrowed our field of vision. The question of whether a human should stay in the loop is clearly a very important one, and I don’t mean to suggest otherwise. However, the broader debate over military AI will shape the next generation of state power, and we cannot let this be collapsed into a single question, even if that single question is extremely critical. Expanding the conversation beyond a single company’s guardrails is necessary if democratic governance is to keep pace with technological capability. Yet that is the exact thing that the current discourse is discouraging us from doing, and it’s worth asking why—and who benefits.


A version of this essay originally appeared at Weaponized.
