From the Community | Anthropic v. DoW is a wakeup call: Stanford must lead on AI and democracy

Avi Parrack is a Ph.D. student in physics and the president of Stanford Effective Altruism.

The Department of War’s (DoW) actions toward Anthropic reveal fault lines in our democratic guardrails and portend greater stresses to come as we grapple with how to preserve a balance of power in the face of unprecedented AI capabilities. If any community has the standing and the expertise to lead that effort, it is Stanford. 

On Feb. 27, Secretary of War Pete Hegseth posted on X that “effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic,” and announced his intention to designate the company a “supply chain risk to national security,” a classification historically reserved for foreign adversaries like China’s Huawei and never before applied to an American company. That designation has since been made official, prompting Anthropic to plan a challenge in court.

The episode is an abuse of power by the Department of War, and the questions it raises about AI in military operations and domestic surveillance are too consequential to be settled by executive action alone. They demand congressional oversight and public deliberation.

Anthropic, the AI company founded by Stanford alumnus Dario Amodei ’06, was a key AI provider on the Pentagon’s classified networks. The company’s contract included two restrictions: first, that Claude would not be used for mass domestic surveillance of American citizens; second, that Claude could not be used in fully autonomous weapons without human oversight, meaning Claude cannot authorize strikes without a human making the final call.

The DoW wanted those restrictions gone, arguing that it was not the place of a private company to restrict government use of AI and that Anthropic’s redlines should be eliminated to allow for “all lawful use.” Anthropic replied that “we cannot in good conscience accede.” The company argued that, while technically legal, domestic surveillance powered by AI would be antithetical to American values and that AI systems could not be trusted to make wartime decisions without a human involved.

An agreement could not be reached. The DoW then moved to attack Anthropic's business with a supply chain risk designation that even Dean Ball, a former AI policy advisor to the Trump administration, called "obviously a psychotic power grab" that is "almost surely illegal." The order has drawn condemnation from the political right and left alike.

When the government disagrees with a contractor, it should cancel the contract. It should not weaponize national security designations against a homegrown company. This sets a deeply un-American precedent of top-down control: if a private company negotiates terms the government dislikes, will it be punished? If we had read this story with the proper nouns redacted, we’d be forgiven for assuming it described a classic maneuver from Beijing.

Clearly, the substance of the Pentagon’s demands is not merely academic and should be spurring serious public discussion. Claude was reportedly integrated into U.S. targeting workflows in Iran, with the military striking over a thousand targets in the first twenty-four hours of operations. The same system the government labeled a national security risk is in the kill chain. For its part, Anthropic stood to lose a $200 million contract, and potentially its relationships with every defense contractor in the country.

Yet the company held to its redlines anyway because, as Amodei explained, today’s AI systems are simply too unreliable to be trusted with life-and-death decisions. When a company forfeits financial gain and absorbs political punishment to warn you that its own product is not ready, we ought to listen. 

Regarding the mass surveillance of American citizens, we know that AI has made it possible to collect social media posts, geolocation data and financial records from data brokers and assemble them into comprehensive surveillance profiles at a scale never before feasible. None of this is currently illegal. The Fourth Amendment requires a warrant to search people, but federal agencies can already buy data from commercial brokers without one. The law did not anticipate what today’s and tomorrow’s AI systems would make possible. The Pentagon should not ask us to trust that it will choose restraint when no statute requires it.

Military contracts and Twitter ultimatums are not the proper channels to determine the scope of government power over its citizens or corporations. The radical possibilities of AI will require thoughtful consideration and legislation in order to preserve our checks and balances. 

The DoW’s actions represent a clear and dangerous overreach, incompatible with the democratic principles this country was built upon. The tech community is responding: over 200 founders, engineers, investors and executives have signed an open letter urging the DoW to withdraw the designation and Congress to examine whether such “extraordinary authorities” should be used against an American technology company.

We can disagree about where the lines on autonomous weapons and domestic surveillance should be drawn. We can disagree about whether Anthropic's restrictions were justified or inappropriate. But we cannot accept a government that retaliates against a private enterprise for adhering to its contracts. And we cannot accept these decisions being made in boardrooms, without public input, when the technology is already being used in combat.

It is time for the public to weigh in and for Congress to legislate to restrict the use of AI in military operations and domestic surveillance. There is no institution better equipped to lead that effort than Stanford, which sits at the intersection of the technical frontier and some of the most serious legal scholarship in the country. Shoring up our democracy amid rapid developments in AI will require the urgency of a startup and the rigor of a courtroom. Stanford has both. 

We will have to look to the future with the foresight of our founding fathers, updating our legal and legislative frameworks to preserve liberty and the separation of powers in an era of autonomous weapons, mass surveillance and automation. Students and faculty alike should hear Anthropic v. DoW as a starting gun, calling us to get involved or to redouble our speed and diligence in building the democratic architecture that this century will demand.

I trust I speak for many of us when I say it’s a call we intend to answer.

The post From the Community | Anthropic v. DoW is a wakeup call: Stanford must lead on AI and democracy appeared first on The Stanford Daily.