
Olson: Claude AI helped bomb Iran. But how exactly?

The same artificial-intelligence model that can help you draft a marketing email or a quick dinner recipe has also been used to attack Iran. U.S. Central Command used Anthropic’s Claude AI for “intelligence assessments, target identification and simulating battle scenarios” during the strikes on the country, according to a report in the Wall Street Journal.

Hours earlier, President Donald Trump had ordered federal agencies to stop using Claude after a dispute with its maker, but the tool was so deeply baked into the Pentagon’s systems that it would take months to untangle in favor of a more compliant rival. It was used, too, in the January operation that led to the capture of Nicolás Maduro.

But what do “intelligence assessments” and “target identification” mean in practice? Was Claude flagging locations to strike or making casualty estimates? Nobody has made that disclosure and, alarmingly, no one is obligated to.

Artificial intelligence has long been used in warfare for things like analyzing satellite imagery, detecting cyber threats and guiding missile-defense systems. But chatbots — the same underlying technology that billions use for mundane tasks like writing emails — are now being deployed on the battlefield.

Last November, Anthropic partnered with Palantir Technologies Inc., a data-analytics company that does extensive work for the Pentagon, a deal that turned Anthropic’s large language model Claude into the reasoning engine inside a decision-support system for the military.

Then, in January, Anthropic submitted a $100 million proposal to the Pentagon to develop voice-controlled autonomous drone swarming technology, Bloomberg News reported. The company’s pitch: Use Claude to translate a commander’s intent into digital instructions to coordinate a fleet of drones.

Its bid was rejected, but the contest called for much more than the intelligence-report summarizing you might expect of a chatbot: the contract was to develop “target-related awareness and sharing” and “launch to termination” capabilities for potentially lethal drone swarms.
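In broad strokes, a pitch like that maps natural language onto a structured, machine-readable command, a pattern familiar from civilian LLM tooling. Below is a minimal, entirely hypothetical sketch of that pattern; nothing in it reflects Anthropic’s actual proposal, and the model call is a stub.

```python
# A generic sketch of the "commander's intent to machine-readable command"
# pattern the pitch describes. Every name here (the schema keys, the
# call_llm stub, the example intent) is hypothetical; the article gives
# no implementation details, and no real Anthropic API is invoked.
import json

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a model call; returns a canned response."""
    return json.dumps({"action": "survey", "area": "grid-7", "drone_count": 4})

def intent_to_command(intent: str) -> dict:
    """Ask the model to restate a natural-language intent as structured JSON."""
    prompt = ("Restate the following intent as JSON with keys "
              "'action', 'area', 'drone_count':\n" + intent)
    return json.loads(call_llm(prompt))

print(intent_to_command("Survey grid 7 with four drones"))
# {'action': 'survey', 'area': 'grid-7', 'drone_count': 4}
```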

No man’s land

Remarkably, all of this has been happening in a regulatory vacuum and with technology known to make errors. Hallucinations by large language models are a byproduct of how they are trained: models are rewarded for guessing at an answer rather than admitting uncertainty. Some scientists say the persistent challenge of AI confabulation may never be fixed.
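To see why the training incentive points toward guessing, consider a grading scheme that gives credit only for correct answers. The sketch below uses illustrative probabilities, not figures from any real benchmark.

```python
# A minimal sketch of the incentive argument: if grading awards 1 point
# for a correct answer and 0 for a wrong answer or for "I don't know,"
# guessing always matches or beats abstaining in expectation.

def expected_score(p_correct: float, abstain: bool) -> float:
    """Expected grade for one question: correct=1, wrong=0, abstain=0."""
    return 0.0 if abstain else p_correct

for p in (0.9, 0.5, 0.1):
    print(f"p(correct)={p}: guess={expected_score(p, abstain=False):.2f}, "
          f"abstain={expected_score(p, abstain=True):.2f}")
# Even at p(correct)=0.1 the guesser outscores the abstainer, so a model
# trained against this kind of grading learns to answer confidently
# rather than admit uncertainty.
```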

This would not be the first time unreliable AI systems have been used in warfare. Lavender was an AI-driven database used to help identify military targets associated with Hamas in Gaza. It was not a large language model but analyzed vast amounts of surveillance data, such as social connections and location history, to assign each individual a score from 1 to 100. When someone’s score passed a certain threshold, Lavender flagged them as a military target.

The problem was that Lavender was wrong 10% of the time, according to an investigative report published by the Israeli-Palestinian outlet +972. “Around 3,600 people were targeted by mistake,” Mariarosaria Taddeo, a professor of digital ethics and defense technology at the Oxford Internet Institute, tells me.
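The underlying mechanics are simple enough to sketch. Assuming a hypothetical cutoff (the report does not give the actual threshold), the pipeline and the scale arithmetic look roughly like this:

```python
# A toy sketch of the score-and-threshold pipeline the reporting describes,
# plus the scale arithmetic. The threshold value and the flag_targets
# function are assumptions for illustration; only the 10% error rate and
# the ~3,600 figure come from the +972 report.

THRESHOLD = 80  # hypothetical cutoff on the 1-100 score

def flag_targets(scores: dict) -> list:
    """Flag every individual whose score meets or exceeds the threshold."""
    return [person for person, score in scores.items() if score >= THRESHOLD]

print(flag_targets({"A": 85, "B": 40, "C": 91}))  # ['A', 'C']

# The scale arithmetic: a 10% error rate producing ~3,600 mistaken targets
# implies roughly 36,000 people flagged in total.
error_rate = 0.10
mistaken = 3600
print(f"Implied total flagged: ~{mistaken / error_rate:,.0f}")
```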

“There are such incredible vulnerabilities in these systems and such extreme unreliability… for something so dynamic, sensitive and human as warfare,” says Elke Schwarz, a professor in political theory at Queen Mary University of London and author of Death Machines: The Ethics of Violent Technologies.

Schwarz points out that AI is often used in war to speed things up, a recipe for unwanted outcomes. Faster decisions are made at a greater scale and with less human scrutiny. The last decade and a half has seen military use of AI become even more opaque, she says.

And secrecy is baked into how AI labs operate even before the warfare applications. These companies refuse to disclose what data their models are trained on or how their systems reach conclusions.

Of course, military operations often have to be kept under wraps to protect combatants and keep enemies off the scent. But defense is heavily regulated by international humanitarian law and weapons testing standards, which in theory should also address the use of artificial intelligence. Yet such standards are missing or woefully inadequate.

Rules outdated

Taddeo notes that Article 36 of the Geneva Convention requires new weapons systems to be tested before deployment, but an AI system that learns from its environment becomes a new system every time it updates. That makes it almost impossible to apply the rule.

In an ideal world, governments like the U.S. would disclose how these systems are used on the battlefield, and there is a precedent. The Americans started using armed drones after 9/11 and expanded their use under the Barack Obama administration, all while refusing to acknowledge that such a program existed.

It took nearly 15 years of leaked documents, sustained pressure from the press and lawsuits from the American Civil Liberties Union before the Obama White House finally published casualty numbers from drone strikes in 2016. The figures were widely seen as an undercount, but they allowed the public, Congress and the media to hold the government accountable for the first time.

Policing AI will be harder still, requiring even more public and legislative pressure to force a recalcitrant Trump administration to create a similar kind of reporting framework.

The goal wouldn’t be to disclose exactly how Claude was used in something like Operation Epic Fury, but to release the broad contours, according to Schwarz. And, especially, to disclose when something goes wrong.

The current public debate about the Anthropic-Pentagon feud — about what is legal and ethical for AI when it comes to the mass surveillance of Americans or the creation of fully autonomous weapons — misses the bigger question: the lack of visibility into how the technology is already being used in war. With new and untested systems prone to making mistakes, that visibility is sorely needed. “We haven’t decided as a society if we’re fine with a machine deciding if a human being should be killed or not,” says Taddeo.

Pushing for that transparency is critical before AI in warfare becomes so routine that nobody thinks to ask anymore. Otherwise we may find ourselves waiting for a catastrophic mistake, and imposing transparency only after the damage is done.

Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is author of “Supremacy: AI, ChatGPT and the Race That Will Change the World.” ©2026 Bloomberg L.P. Visit bloomberg.com/opinion. Distributed by Tribune Content Agency.
