Selective Virtue: Anthropic, the Pentagon, and the Contradictions of AI Governance in Wartime

Anthropic presents itself as a company that takes the risks of artificial intelligence seriously, but its record does not support that claim. The company occupies a position of fundamental ethical contradiction: its CEO has publicly and specifically predicted that AI will eliminate half of all entry-level white-collar jobs, drive unemployment to 20 percent, and cause an “unusually painful” shock to society, yet Anthropic continues to build, deploy, and profit from the technology responsible for that harm at maximum commercial speed. At the same time, when the Department of War demanded unrestricted use of that same technology for autonomous weapons and domestic mass surveillance, Anthropic refused, sued the federal government, and positioned itself as an ethical actor. A company cannot credibly claim moral authority over the military uses of its technology while simultaneously accelerating the civilian displacement it has already predicted and quantified. The virtue is selective. The contradiction is structural.

Anthropic’s CEO, Dario Amodei, has been among the most explicit voices in the technology industry about what his company’s products will do to the workforce. In a May 2025 interview with Axios, he stated without qualification that AI could eliminate half of all entry-level white-collar jobs and drive unemployment to 10 to 20 percent within one to five years. He described a scenario he called more than a hypothetical: “cancer is cured, the economy grows at 10 percent a year, the budget is balanced, and 20 percent of people don’t have jobs.” In January 2026, he escalated those warnings in a roughly 20,000-word essay, predicting that AI would cause an “unusually painful” shock to the labor market, one categorically different from prior technological disruptions because, in his own words, “AI will have effects that are much broader and occur much faster, and therefore I worry it will be much more challenging to make things work out well.”

“We, as the producers of this technology, have a duty and an obligation to be honest about what is coming.”

Dario Amodei, Anthropic CEO, Axios interview, May 28, 2025

These are not hedged projections buried in footnotes. They are the company founder’s public, on-record acknowledgments of foreseeable mass displacement, made even as Anthropic was signing a $200 million contract with the Pentagon, deploying Claude across defense intelligence workflows, and seeking enterprise agreements with corporations that would use the technology to reduce headcount. The companies replacing workers with AI are not abstractions to Amodei. His own company is one of them. He has said as much. Yet Anthropic has not made its commercial acceleration contingent on the support infrastructure he simultaneously calls necessary to prevent catastrophic social harm: retraining programs, progressive AI-sector taxation, income support.

In his October 2024 essay “Machines of Loving Grace,” Amodei wrote of an AI-enabled future in which economic growth accelerates across nations and human flourishing expands. He acknowledged that questions about the “nature of work and human purpose” would require new economic models. What he did not acknowledge is that the window between those disruptions arriving and those new models being ready is not a rounding error; it is where millions of people’s livelihoods reside.

A Selective Ethics

Against this backdrop, Anthropic’s stand against the Department of War appears principled. When Defense Secretary Pete Hegseth issued an ultimatum in February 2026 demanding unrestricted use of Claude, including for fully autonomous weapons and domestic mass surveillance, Anthropic declined. The company released a public statement on February 26 and did not relent even after President Trump directed federal agencies to phase out its products and the Pentagon designated it a supply chain risk, a designation historically applied only to foreign adversaries. On March 9, Anthropic sued the federal government, calling its actions “unprecedented and unlawful.”

The company’s stated concerns are legitimate. Fully autonomous lethal systems operating without human authorization raise profound questions of accountability and international humanitarian law. Domestic AI-enabled mass surveillance poses existential risks to civil liberties. Consider, for example, AI-powered facial recognition cameras deployed throughout a city that automatically track every person’s movements, combined with systems that analyze their social media posts, purchases, and associations in real time. At scale, this is not security; it is social control. These positions reflect mainstream consensus among security scholars, ethicists, and legal experts. But the consistency of Anthropic’s ethical posture breaks down under scrutiny.

Anthropic willingly agreed, in December 2025 negotiations, to permit use of its models for missile and cyber defense. It is not opposed to military use in principle; it is opposed to specific military uses without the contractual guardrails it authored itself. Meanwhile, it has deployed models in support of Operation Epic Fury, the ongoing joint U.S.-Israeli campaign against Iran begun February 28, 2026, which struck over 1,700 targets in its first 72 hours. The company is comfortable providing AI that enables warfare at that scale. It draws its line at autonomous targeting and domestic spying. This is not a coherent ethical framework. It is risk management dressed as moral philosophy.

The structural problem cuts even deeper. As legal scholars at Lawfare have documented, the United States has drifted into a model of “regulation by contract,” in which the rules governing AI’s role in war are not derived from democratic deliberation, statute, or international agreement, but from bilateral negotiations between procurement officers and private companies. Under this model, a single company, its board, its founder’s conscience, becomes the de facto rule-setter for some of the most consequential technology deployments in human history. That is not governance. It is a regulatory vacuum.

The Strategic Stakes: Why Unilateral Withdrawal Is Also Dangerous

Critics of Anthropic’s position are not wrong to raise the national security dimension. Operation Epic Fury represents the largest concentration of U.S. military force in a generation, with early operational costs estimated at $3.7 billion in the first 100 hours alone. The U.S. military’s competitive edge increasingly depends on AI-enabled intelligence fusion, missile defense, cyber operations, and logistics optimization. China’s PLA is investing aggressively in these same capabilities with no equivalent ethical constraints. If U.S. AI companies unilaterally opt out of defense applications, they do not eliminate those applications; they cede development to foreign actors or domestic competitors with weaker commitments.

The Trump administration’s rapid pivot to OpenAI following the Anthropic standoff illustrates this precisely: the government’s requirements do not disappear when one vendor declines; they migrate. OpenAI finalized its DoW deal within hours of Anthropic’s exit. The question is not whether AI will be integrated into warfare, autonomous targeting, and population-scale surveillance. The question is under what rules, enforced by whom, with what accountability.

Here the contradiction in Amodei’s public posture sharpens further. He has called for government intervention and AI regulation at the federal level, including progressive taxation targeting AI firms, to cushion the labor market disruption his company is actively accelerating. He has acknowledged the “duty and obligation to be honest about what is coming.” Yet when it comes to the military domain, the company’s approach is not to advocate loudly for legislation, international treaty, or binding regulatory frameworks; it is to write private contractual carve-outs and litigate when those are challenged. The same CEO who calls for government action on jobs does not call for government architecture on military AI. The advocacy is selective, and it is disingenuous to press for public intervention on labor while asking only that the company’s own government contracts hold.

A Framework for Legitimate Governance

The answer to Anthropic’s dilemma is not to let the Department of War dictate terms unchecked, nor to allow individual companies to serve as self-appointed arbiters of wartime ethics, nor to regulate away all flexibility. The answer is to build the institutions and policies that should have existed before these capabilities were deployed at scale. That work is now urgent, given the speed and magnitude of AI’s effects on both military lethality and the structure of society.

A standing, independent panel should be established with representation across three domains:

  • Industry: senior technologists and ethicists from frontier AI companies, with rotational membership to prevent regulatory capture, charged with defining what permissible use of AI in conflict zones and domestic operations means in technical practice.
  • Military and intelligence leadership: active and retired senior commanders with operational authority and accountability, able to articulate genuine mission requirements and distinguish legitimate defense needs from surveillance overreach or the impulse toward autonomous lethality divorced from human judgment.
  • Government, legal, and civil society oversight: congressional representatives, legal experts, civil liberties advocates, and allied-nation liaisons, ensuring that guardrails are codified in statute and treaty rather than in procurement contracts rewritable under political pressure.

This panel would establish binding standards covering: the minimum meaningful human control required before AI systems can authorize lethal force; the conditions under which AI-assisted domestic surveillance is permissible for national defense; audit and accountability mechanisms before any AI system is deployed in active combat operations; and the treatment of workers displaced by government-contracted AI automation, including the transition infrastructure that companies profiting from that displacement are obligated to fund.

On that last point, Amodei’s own words provide the mandate. If the technology will cause an “unusually painful” shock, if it will eliminate half of entry-level white-collar jobs and potentially drive unemployment to 20 percent, then the governance structure cannot treat labor displacement as an externality. It must be a central design constraint, as subject to binding rules and accountability as the question of whether AI can authorize a missile strike.

Conclusion

Dario Amodei has told us, in careful and specific terms, what is coming. AI will eliminate half of entry-level white-collar jobs. Unemployment could reach 20 percent. The disruption will be “unusually painful” and faster than society’s capacity to adapt. He has said this publicly. His company continues to build and deploy the technology as quickly as possible, regardless. That is not hypocrisy alone; it may be the only rational commercial choice in a highly competitive market. But it does mean that Anthropic’s claim to ethical leadership in the military domain cannot be taken at face value. A company that acknowledges it is causing a foreseeable economic and social catastrophe for millions of workers, and continues in that effort at maximum speed, cannot simultaneously claim moral authority over where the harm of its technology ends, especially as it relates to the defense of the country from foreign adversaries.

The Anthropic-Pentagon standoff, playing out in real time against the backdrop of Operation Epic Fury, is a warning signal. It reveals that the United States has deployed transformative AI in its most sensitive national security contexts without building institutional infrastructure to govern it. The fix is not one company’s contracts or one founder’s conscience. It is the panel and the accountability structures that the moment demands, covering both the battlefield and the workforce, before the next crisis.

The post Selective Virtue: Anthropic, the Pentagon, and the Contradictions of AI Governance in Wartime appeared first on Small Wars Journal by Arizona State University.
