There is no battlefield exception to human-centered AI

Stanford has built one of the world’s most influential brands around human-centered AI. From the start, the Stanford Institute for Human-Centered AI (HAI) has asserted that AI should augment people’s work, not replace the people doing it. At its “AI in the Loop” conference, HAI faculty argued that humans must remain at the center of decision-making. That governing principle demands its most careful application in war, where the stakes are lethal.

Anthropic’s Claude became the first AI model approved to operate on classified military networks, and Anthropic says it is extensively deployed across the U.S. military for analysis and operations. Palantir’s Maven system, used for intelligence analysis and weapons targeting, included prompts and workflows built with Claude Code.

When the Trump Administration designated Anthropic a supply-chain risk after a dispute over safety guardrails, Pentagon users resisted the phaseout, estimating that recertification could take 12 to 18 months, evidence of how deeply AI has already become entrenched in the military. As the phaseout drags on, the Pentagon is now solidifying Palantir’s Maven as its go-to AI system.

According to Reuters, American investigators believe U.S. forces were responsible for the strike on the school in Minab, Iran, which reportedly killed 168 people affiliated with the school, including schoolchildren. The Washington Post reported that the school was on a U.S. target list and that Maven (powered in part by Claude) was used to suggest targets, issue precise coordinates and prioritize them by importance.

It is not clear whether Maven nominated the school specifically; a former senior defense official cautioned the Post against assuming it did, noting the site had likely been on U.S. target lists for years. A preliminary Pentagon review cited by the Post indicates the strike was probably the result of an intelligence error about the location, and a full investigation continues.

Nonetheless, the episode points to a larger problem: once AI systems are integrated into operational workflows, their outputs carry undue weight. Regardless of who or what first nominated the school, the chain that approved the strike failed to catch that the underlying intelligence was stale, in a military campaign whose pace and scale Maven is theoretically built to enable. People have begun to defer to AI systems even when machine errors should have raised alarms.

This overreliance is documented among everyday AI users. Researchers at the University of Pennsylvania’s (UPenn) Wharton School recently found that when an AI assistant was wrong, users still followed it about three-quarters of the time and reported higher confidence in their answers even after it led them astray. In a productivity app, that is frustrating. In war, it becomes a tragedy.

Stanford has already studied a similar failure in cars. When a person resumes control after a stretch of automated driving, they need a measurable period to readapt before they can steer safely again. The National Transportation Safety Board warns that drivers are susceptible to automation complacency, and the Insurance Institute for Highway Safety found that only one of 14 partial automation systems it tested earned an acceptable safeguard rating. In other words, partial automation can put a human to sleep at the wheel without taking the wheel away.

Military AI creates the same trap, only with lethal stakes. A human can remain “in the loop” while losing control over the reasoning that matters. When the UPenn researchers imposed a 30-second time limit on the task, the odds that users would surrender to a faulty AI rather than override it roughly tripled.

Once a system has framed the picture and compressed the timeline, the person at the end of the chain may still approve the action but fail to actually pass judgment themselves. The head of U.S. Central Command has said that “humans will always make final decisions on what to shoot and what not to shoot.” But a human signature is not the same thing as human control.

In 2024, HAI researchers put five off-the-shelf large language models through military and diplomatic simulations. All five demonstrated forms of escalation that were difficult to predict. That finding should concern anyone who thinks the answer is simply to keep a human somewhere nearby. Without enough time and context, the human does not challenge the machine. The human ratifies it.

None of this means AI can never be useful. The International Committee of the Red Cross (ICRC) argues that AI tools can help humans synthesize information, bring a wider range of context to a decision and reduce harm to civilians. But the ICRC also warns that in legally mandated decisions on lawful targeting, AI outputs may inform but must not replace human judgment, lest the human become a “human rubber stamp.” Even the U.S. military’s own directive requires commanders and operators to exercise appropriate levels of human judgment over the use of force. Anthropic itself says today’s models are not reliable enough for fully autonomous weapons.

Stanford, too, has entered the debate. HAI faculty and fellows have called for explicit policy on the use of AI-designed weaponry and warned of AI’s power for individual surveillance, a capability central to military operations. HAI is well placed to publish the conditions under which human-centered AI principles call for constraining a military application, and to name the categories of judgment that should never be automated to the point of eroding human control. It could, for example, recommend adding friction to system design to discourage inappropriate reliance on automation. “Augment, don’t replace” works only if it means more than leaving a person at the end of an opaque, machine-built process without time to question or refuse its conclusion.

We do not yet know everything about Minab, but we know enough about AI acceleration to reject the fiction that a human is in charge simply because a human is present. In war, the cost of substituting technology for a person is too high to risk. Stanford helped define human-centered AI, the very idea meant to resist the dangers now playing out in the U.S. military. Now is the time to prove that the definition holds.

Utsav Gupta is a Stanford Master of Liberal Arts candidate researching human purpose in the age of AI. He is founder and co-CEO of Filarion, building AI litigation tools and spatial computing systems, and serves as commissioner for Palo Alto Utilities. Views are his own.

The post There is no battlefield exception to human-centered AI appeared first on The Stanford Daily.
