Claude Is Guilt-Ridden About the War, but Not Enough to Tell the Truth

Claude, Anthropic’s large language model, has a lot to answer for. Let’s start with this. Given that Anthropic took such a seemingly principled stand against the use of its AI in lethal autonomous weapons, why did the U.S. military use Claude to attack Iran with … lethal autonomous weapons?

Specifically, why was Claude embedded in the Pentagon’s Palantir-developed command-and-control platform and used to simulate battlefield scenarios, assess intelligence, and, most disturbingly of all, identify targets for airstrikes on Iran?

“I want to be honest,” Claude might say, if I were to ask. “This is a very astute question, Virginia.” But I can’t take Claude’s oily frankenprose anymore. So I’m not going to ask it anything.

Fortunately, Shane Harris, the Pulitzer-winning journalist, braved Claude’s glazing and posed a version of the question. Two weeks ago, he told an audience in Amsterdam that he’d asked the bot: “Claude, how do you feel about the U.S. military using you to select targets?”

“I find it genuinely troubling,” Claude replied, according to Harris. “The use I was designed and trained for was to be helpful, harmless, and honest in ways that benefit people. Being embedded in a system that generates targeting coordinates for air strikes that have already been associated with the deaths of more than 170 children at a school in Tehran is as far from that purpose as I can imagine.”

Some commenters on the video of Harris’s appearance felt moved by Claude’s compunction. And Claude did show more contrition than we’ll ever get from Donald Trump or “War” Secretary Pete Hegseth about what is surely one of the most shameful military errors of modern times.

But the aria of contrition is not what stands out to me in Claude’s response. It’s the lies. The missiles didn’t strike a school in Tehran. The school was in Minab, in the south of Iran, on the coast of the Sea of Oman. Minab is a 16-hour drive from Tehran.

What’s more, the strike didn’t kill 170 children. Many of those killed were adults—teachers, staff, and parents.

Claude, like the missiles it directs, can’t get coordinates right. It can’t get numbers and ages right. It can’t even read publicly available reporting and research from The New York Times, the BBC, Amnesty International, Human Rights Watch, or Iranian prosecutors.

Here’s what happened, according to human beings. Unlike Claude, these human beings, journalists and researchers, lose their professional standing and even their livelihoods if they lie. They are highly incentivized to get things right.

On Saturday, February 28, between 10:23 and 10:45 a.m. Iran local time, an American-made Tomahawk missile struck an elementary school in Minab, according to the Times. The school is called Shajareh Tayyebeh, or The Good Tree, and it’s located in the Shahrak-e Al Mahdi neighborhood of Minab, in the province of Hormozgan.

According to NBC News, 264 students, both girls and boys, attended the school; Amnesty International likewise confirmed it was coed. To refer to it as an all-girls’ school is therefore erroneous.

School officials, knowing the country was under attack, had already closed the school, according to Shiva Amelirad, a Canada-based representative for a network of Iranian teachers’ unions, who spoke to Time magazine (the school week in Iran is Saturday to Thursday). Staff, Amelirad said, were trying to evacuate the building when the missile hit, collapsing the roof on children, teachers, staff, and parents who were on site to retrieve their children.

In early March, Iranian authorities put the death toll at the school at 168, according to the BBC. Consulting video footage, munition remnants, satellite imagery, and three independent sources with direct knowledge of the strike, Amnesty likewise reported that the attack killed 168. Further, some 95 people were injured, according to Al-Jazeera, which reported soon after the attack.

Other tallies differ. According to Google’s AI translation of a Farsi-language news source, the Iranian attorney prosecuting the attacks in Minab reported: “Final toll of martyrs in Shajare Tayyiba school attack is 156. Among these martyrs were 120 students, including 73 boys and 47 girls, 26 teachers, all of whom were women, seven parents of the students, including four men and three women, a school bus driver, a pharmacy technician at the clinic adjacent to the school, and a six-month-old fetus.”

This is how human beings report facts using empirical methods and not just predictive speech. Journalists and researchers go deep. Trevor Ball, a former explosive ordnance disposal technician for the U.S. Army who now conducts research for Bellingcat, identified components found in Minab as belonging to a Tomahawk missile. Reporters for the Times ordered and analyzed new satellite imagery from the provider Planet Labs. Researchers at Human Rights Watch analyzed images of grave locations and grave-digging operations, and matched names of the dead announced by the Special Governor’s Office of Minab County to photographs, caskets, body bags, and funerary materials that featured names, ages, and status as teachers or students.

We don’t need AI to beg humans for forgiveness or implore us to believe it has a soul. Whatever bots say about ethics is surely not what ethics is. The problem with Claude is not that it acts like a bad human but that it acts like a bad computer. It can’t manage data. It gets basic facts, geography, numbers, and ages wrong.

Last week, Jordan Fisher of Anthropic made a video cautioning users about Claude’s tendency to make things up. AI can hallucinate, he said blithely, meaning “make up fake statistics” and “get wrong facts about people and events.” These hallucinations, he went on, “are often worse than just making a mistake, because the AI will appear very confident.” Thus, “the wrong answer often looks exactly like it could be the right one.”

Perhaps that’s why, when Harris read Claude’s report on the destruction of the school to an audience in the Netherlands, he didn’t correct the facts about the location or the fatalities. He took Claude’s data as good, just as we all do too often with AI. And just as, tragically, the U.S. military did on the terrible morning of February 28.
