
Twenty seconds to approve a military strike; 1.2 seconds to deny a health insurance claim. The human is in the AI loop. Humanity is not.

In the first twenty-four hours of the war with Iran, the United States struck a thousand targets. By the end of the week, the total exceeded three thousand — twice as many as in the “shock and awe” phase of the 2003 invasion of Iraq, according to Pete Hegseth. This unprecedented number of strikes was made possible by artificial intelligence. U.S. Central Command (CENTCOM) insists that humans remain in the loop on every targeting decision, and that the AI is there to help them make “smarter decisions faster.” But exactly what role humans can play when the systems are operating at this pace is unclear.

Israel’s use of AI-enabled targeting in its war on Hamas may offer some insights. An investigation last year reported that the Israeli military had deployed an AI system called Lavender to identify suspected militants in Gaza. The official line is that all targeting decisions involved human assessment. But according to one of Lavender’s operators, as the humans involved came to trust the system, they limited their own checks to nothing more than confirming that the target was a male. “I would invest 20 seconds for each target,” the operator said. “I had zero added-value as a human, apart from being a stamp of approval. It saved a lot of time.”

The same pattern has already taken hold in business. In 2023, ProPublica revealed that Cigna, one of America’s largest health insurers, had deployed an algorithm to flag claims for denial. Its physicians, who were legally required to exercise their clinical judgment, signed off on the algorithm’s decisions in batches, spending an average of 1.2 seconds on each case. One doctor denied more than 60,000 claims in a single month. “We literally click and submit,” a former Cigna doctor said. “It takes all of 10 seconds to do 50 at a time.”

Twenty seconds to approve a strike; 1.2 seconds to deny a claim. The human is in the loop. Humanity is not.

Difficulty by Design

The novelist Milan Kundera writes of the terrifying weight of being confronted with the enduring seriousness of our actions. But while lightness might seem attractive in the face of this impossibly heavy burden, it is ultimately unbearable. Disconnection from the weightiness of our decisions deprives them of substance, of meaning.

AI promises to lift the burden of difficult and cognitively demanding work; it makes the work lighter. In many domains, that is genuine progress. But some things are important enough that we ought to feel their weight. It ought to take time to decide to kill a person or deny a healthcare claim. It ought to be difficult to figure out which buildings to bomb. In such decisions, the difficulty serves a function: it is a feature, not a bug, a mechanism that forces institutions to reckon with what they are doing. When AI lifts that weight, when it takes away the burden of deciding who lives and who dies, the institution does not become more efficient. It becomes numb. That is not progress; it is moral degradation.

If the human in the loop is spending mere seconds on each decision, then the question of whether the system is autonomous or human-supervised becomes largely semantic. We need to insist on humanity in the loop as well. In cases like these, the human must be allowed to be human, even if that means they are slower, less accurate, and less efficient. That is the cost we pay for something absolutely necessary: We need the human to feel the weight of the decisions they are making, because difficulty creates the friction that makes people pause, question, and push back.

Institutional Culture

When hard decisions become easy, the institution itself changes. People stop questioning because there is nothing that feels worth questioning — the system has already decided and the human’s role is to confirm. Dissent drops because dissent requires friction, and friction has been engineered out. Accountability is undermined because everyone knows that it’s the computer that’s making the decisions.

The Cigna physician who denied 60,000 claims in a month was not cruel. They had been placed in a system where denying a claim required no more effort than clicking a button. The system did something more insidious than corrupt their judgment — it made it unnecessary. That is why the Cigna case is not a story about a single bad actor. Rather, it is a story about what happens to any institution that systematically engineers the weight out of its hardest decisions.

The Cost of Hollowing Out Accountability

For businesses, the cost of hollowed-out accountability shows up in three places.

First, liability. An algorithm cannot be sued, fired, or held responsible for its errors. The organization that deployed it can. Rubber-stamp oversight is not a legal gray area — it is a liability waiting for lawyers to mobilize.

Second, institutional fragility. When humans stop genuinely engaging with decisions, they stop learning from them. When the machine always seems to get things right, no one develops the kind of judgment needed to determine when it is actually wrong. Organizations that optimize humans out of their decision loops become dependent on systems they no longer fully understand. And this leads to brittleness in precisely the moments that demand resilience.

Third, trust. Customers, employees, and regulators may want to know whether an AI made a decision. But they will definitely want to know if anyone is truly responsible for it. The answer, in too many organizations, is no, and that answer has deep consequences for the organization’s relationships with those it is answerable to.

The Weight Test

Before using AI to make any decision process easier, leaders should ask four questions:

1. What institutional behaviors does the current difficulty of this decision produce — e.g., scrutiny, escalation, dissent — and what is the cost of losing them?

2. If something goes wrong, can we identify someone who wrestled with the decision — or only someone who clicked approve?

3. How would we know if the humans in this process have become rubber stamps? What would we measure, and are we measuring it?

4. If the people affected by this decision learned exactly how it was made and how long the human spent on it, would the institution be comfortable defending that process in public?

These questions won’t appear in any AI vendor’s implementation checklist. That is precisely why they matter.

Conclusion

We are told that AI liberates us — from drudgery, from slow processes, from the burden of hard decisions. And often it does. But not every burden is a problem to be solved. Sometimes, the burdens are the point. The weight a commander should feel before authorizing a strike, the effort a physician expends before denying care — these are not inefficiencies to be optimized away. They are the mechanisms that keep institutions honest about the power they exercise.

Of course, organizations that engineer that weight away will be faster and lighter. For a while, it may even look as though they are winning. But these organizations will also be the ones that discover, too late, that the difficulty was the price of being the one who decides — and the moment an organization stops paying it, it has no business deciding at all.
