Twenty seconds to approve a military strike; 1.2 seconds to deny a health insurance claim. The human is in the AI loop. Humanity is not.
In the first twenty-four hours of the war with Iran, the United States struck a thousand targets. By the end of the week, the total exceeded three thousand, twice as many as in the “shock and awe” phase of the 2003 invasion of Iraq, according to Defense Secretary Pete Hegseth. This unprecedented number of strikes was made possible by artificial intelligence. U.S. Central Command (CENTCOM) insists that humans remain in the loop on every targeting decision and that the AI is there to help them make “smarter decisions faster.” But exactly what role humans can play when the systems are operating at this pace is unclear.
Israel’s use of AI-enabled targeting in its war on Hamas may offer some insight into what that role looks like in practice. An investigation last year reported that the Israeli military had deployed an AI system called Lavender to identify suspected militants in Gaza. The official line was that every targeting decision involved human assessment. But according to one of Lavender’s operators, as the humans involved came to trust the system, they reduced their own checks to nothing more than confirming that the target was male. “I would invest 20 seconds for each target,” the operator said. “I had zero added-value as a human, apart from being a stamp of approval. It saved a lot of time.”
The same pattern has already taken hold in business. In 2023, ProPublica revealed that Cigna, one of America’s largest health insurers, had deployed an algorithm to flag claims for denial. Its physicians, who were legally required to exercise their clinical judgment, signed off on the algorithm’s decisions in batches, spending an average of 1.2 seconds on each case. One doctor denied more than 60,000 claims in a single month. “We literally click and submit,” a former Cigna doctor said. “It takes all of 10 seconds to do 50 at a time.”
Twenty seconds to approve a strike; 1.2 seconds to deny a claim. The human is in the loop. Humanity is not.
Difficulty by Design
In The Unbearable Lightness of Being, the novelist Milan Kundera writes of the terrifying weight of being confronted with the enduring seriousness of our actions. Lightness might seem attractive in the face of that impossibly heavy burden, but it is ultimately unbearable: disconnection from the weightiness of our decisions deprives them of substance, of meaning.
AI promises to lift the burden of difficult and cognitively demanding work; it makes the work lighter. In many domains, that is genuine progress. But some things are important enough that we ought to feel their weight. It ought to take time to decide to kill a person or to deny a healthcare claim. It ought to be difficult to figure out which buildings to bomb. In decisions like these, the difficulty serves a function. It is a feature, not a bug: a mechanism that forces institutions to reckon with what they are doing. So when AI lifts that weight, when it takes away the burden of deciding who lives and who dies, the institution does not simply become more efficient. It becomes numb. That is not progress. It is moral degradation.
If the human in the loop is spending mere seconds on each decision, then the question of whether the system is autonomous or human-supervised becomes largely semantic. We need to insist on humanity in the loop as well. In cases like these, the human must be allowed to be human, even if that means they are slower, less accurate, and less efficient. That is the price of something we cannot do without: the human must feel the weight of the decisions they are making, because difficulty creates the friction that makes people pause, question, and push back.
Institutional Culture
When hard decisions become easy, the institution itself changes. People stop questioning, because nothing feels worth questioning: the system has already decided, and the human’s role is to confirm. Dissent drops, because dissent requires friction, and friction has been engineered out. Accountability erodes, because everyone knows it is really the computer making the decisions.
The Cigna physician who denied 60,000 claims in a month was not cruel. They had been placed in a system where denying a claim required no more effort than clicking a button. The system did something more insidious than corrupting their judgment: it made judgment unnecessary. That is why the Cigna case is not a story about a single bad actor. It is a story about what happens to any institution that systematically engineers the weight out of its hardest decisions.
The Cost of Hollowing Out Accountability
For businesses, hollowed-out accountability carries costs that show up in three places.
First, liability. An algorithm cannot be sued, fired, or held responsible for its errors. The organization that deployed it can. Rubber-stamp oversight is not a legal gray area; it is a liability waiting to be litigated.
Second, institutional fragility. When humans stop genuinely engaging with decisions, they stop learning from them. When the machine always seems to get things right, no one develops the kind of judgment needed to determine when it is actually wrong. Organizations that optimize humans out of their decision loops become dependent on systems they no longer fully understand. And this leads to brittleness in precisely the moments that demand resilience.
Third, trust. Customers, employees, and regulators may want to know whether an AI made a decision. They will certainly want to know whether anyone is truly responsible for it. In too many organizations, the answer is no, and that answer corrodes the organization’s relationships with everyone it is answerable to.
The Weight Test
Before using AI to make any decision process easier, leaders should ask four questions:
1. What institutional behaviors does the current difficulty of this decision produce — e.g., scrutiny, escalation, dissent — and what is the cost of losing them?
2. If something goes wrong, can we identify someone who wrestled with the decision — or only someone who clicked approve?
3. How would we know if the humans in this process have become rubber stamps? What would we measure, and are we measuring it? (One possible measurement is sketched after this list.)
4. If the people affected by this decision learned exactly how it was made and how long the human spent on it, would the institution be comfortable defending that process in public?
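Question 3 is the most concrete of the four, so it is worth showing what “measuring it” might look like. Below is a minimal sketch, in Python, of a rubber-stamp audit over decision logs. It assumes, hypothetically, that the organization records how long each reviewer spent on a case and whether they accepted the AI’s recommendation; the record structure, field names, and thresholds are illustrative, not prescriptive.

```python
from collections import defaultdict
from dataclasses import dataclass
from statistics import median

@dataclass
class Decision:
    reviewer_id: str            # who signed off on the case
    seconds_spent: float        # dwell time on this case
    accepted_ai_verdict: bool   # did the human simply confirm the AI?

def rubber_stamp_report(decisions: list[Decision],
                        min_median_seconds: float = 30.0,
                        max_acceptance_rate: float = 0.98) -> dict:
    """Flag reviewers whose behavior looks like rubber-stamping.

    Two illustrative signals: a median dwell time too short for genuine
    deliberation, or near-total agreement with the AI's recommendations.
    """
    # Group the decision log by reviewer.
    by_reviewer: dict[str, list[Decision]] = defaultdict(list)
    for d in decisions:
        by_reviewer[d.reviewer_id].append(d)

    flagged = {}
    for reviewer, cases in by_reviewer.items():
        dwell = median(d.seconds_spent for d in cases)
        acceptance = sum(d.accepted_ai_verdict for d in cases) / len(cases)
        if dwell < min_median_seconds or acceptance > max_acceptance_rate:
            flagged[reviewer] = {
                "median_seconds": dwell,
                "acceptance_rate": round(acceptance, 3),
                "cases": len(cases),
            }
    return flagged
```

The specific thresholds matter less than the act of looking: rubber-stamping leaves a measurable signature. A reviewer whose median dwell time is 1.2 seconds and whose acceptance rate is 100 percent is, whatever the org chart says, not in the loop.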
These questions won’t appear in any AI vendor’s implementation checklist. That is precisely why they matter.
Conclusion
We are told that AI liberates us — from drudgery, from slow processes, from the burden of hard decisions. And often it does. But not every burden is a problem to be solved. Sometimes the burden is the point. The weight a commander should feel before authorizing a strike, the effort a physician expends before denying care — these are not inefficiencies to be optimized away. They are the mechanisms that keep institutions honest about the power they exercise.
Of course, organizations that engineer that weight away will be faster and lighter. For a while, they may even appear to be winning. But these organizations will also be the ones that discover, too late, that the difficulty was the price of being the one who decides — and the moment an organization stops paying it, it has no business deciding at all.