Stop using AI as a scapegoat, and do this instead
As AI grows more capable, leaders are increasingly invoking it to justify unpopular decisions like layoffs. Much of that story collapses under scrutiny, and workers know it. The gap between rhetoric and reality erodes trust, amplifies inequities, and quietly sets organizations up for long-term cultural and performance damage.
Author, speaker, and strategist Lily Zheng sees a clear pattern: executives are using AI to explain decisions that are actually driven by past mistakes, investor pressure, or leadership preference. Companies that went on aggressive hiring sprees during the pandemic are now quietly correcting course, framing workforce reductions as bold AI-driven reinventions rather than acknowledging strategic missteps. "Seeking productivity gains through AI" sounds far more sophisticated than "Oops, we hired too many people based on flawed assumptions."
Employees, however, live the truth that those narratives obscure. As Zheng notes, "they know firsthand that the bullish stance their corporate PR is putting out on AI and productivity is by no means reflected by reality." When leaders insist that layoffs are due to AI efficiency, employees recognize the claim as anything from spin to outright cynicism. The emotional impact is real: a steep erosion of trust and morale that eventually surfaces in engagement scores, productivity data, and retention.
The cultural cost of “AI made me do it.”
Blaming AI for difficult choices hits workplace culture hard. When leaders offload responsibility onto "the algorithm," they sidestep accountability to the people whose jobs are cut, whose workloads intensify, and whose careers stall. Emerging research suggests that while only a minority of organizations have truly eliminated roles because AI is doing the work, far more are using AI as rhetorical cover for broader cost-cutting or restructuring decisions.
Just as branding partially automated driving as "self-driving" led some drivers to take their attention off the road entirely, the cynical branding of AI as "a replacement for people" is leading executives to abdicate their responsibility as leaders entirely, with disastrous results.
The message to employees is clear: Leadership will tell whatever story is convenient, no matter what the data says. That perception disproportionately harms those who already depend on transparent processes and fair criteria to access opportunity.
We’ve seen this film before
Zheng draws a stinging parallel to the hybrid work backlash. Studies have found that well-designed hybrid models can deliver equivalent productivity with significantly lower attrition—often around a one-third reduction in resignations—especially for women, caregivers, and people with long commutes.
Yet many leaders reverted to command-and-control, imposing rigid return-to-office mandates despite the evidence. Some doubled down with digital surveillance tools that actually reduce productivity, as employees redirect energy into gaming the system and managing perceptions rather than doing meaningful work.
Zheng’s point is that the same pattern is now playing out with AI. Instead of reimagining management practices, metrics, and culture to harness AI responsibly, leaders are using it to prop up familiar but ineffective habits: centralized control, presenteeism, and blunt cost-cutting. In both cases, leaders prioritize what feels familiar over what data and research say is actually effective.
From empowerment to “workslop.”
We need to be more honest about what AI can and cannot do. Large language models are powerful statistical inference tools, well suited to complex, pattern-heavy tasks with abundant data. But they are not magical "do my job" buttons. When leaders forget this, their policies inadvertently incentivize workers to produce AI "workslop."
Fresh research shows that workslop destroys productivity: people spend time correcting unhelpful drafts, redoing incomplete analyses, and untangling confusing memos instead of advancing core work.
Adding to the sense of déjà vu, some leaders now mandate a minimum number of "AI usage" hours as if adoption itself were a performance metric. Zheng likens this to ordering construction workers to use a sledgehammer even when the renovation doesn't call for one. The tool becomes the goal, rather than the quality of the house.
Zheng highlights a critical, often-missed prerequisite for a healthy culture around AI: strong management fundamentals. Leaders need clear norms around accountability ("you own the quality of your output, regardless of tools"), transparent decision-making, and outcome-based performance metrics. Without these, AI simply magnifies existing dysfunctions.
The key to building fair, worker-centered AI practices
Given AI's well-documented propensity to propagate bias, it's important to build fairness into AI strategy, too. The leaders getting this right aren't handing out generic subscriptions and hoping for the best; they're building and fine-tuning their own models on carefully audited, domain-specific data.
Zheng recommends the following practices:
- Rigorously audit your internal data for bias, scrub it of personally identifiable information, and confirm it complies with regulations before training AI models.
- Treat AI outputs as drafts, not decisions, and make it explicit that humans remain fully accountable for outcomes.
- Involve organizational design experts, people practitioners, frontline employees, and legal in AI governance and tool selection, not just IT and finance.
- Avoid vanity metrics like “AI hours used” and measure value in terms of quality, equity, customer outcomes, and worker well-being.
- Build channels for employees to flag AI-related harms, workload inequities, or biased outputs—and act on that feedback.
These practices not only mitigate risk but also create the norms and foundation to leverage AI safely and effectively. They signal that companies will hold technology to the same standard of accountability as any other business decision.
A more honest and inclusive AI story
There are some positive examples of organizations reinvesting AI-driven productivity gains into upskilling, innovation, and better jobs rather than headcount cuts. Zheng sees similar bright spots where leaders are talking about AI as a way to help “our strongest assets—our people—do more,” rather than as an excuse to declare half the workforce disposable.
The real leadership test is not whether you are using AI, but how honestly you narrate its role and who benefits from it. For future-minded leaders, that means resisting the temptation to let AI absorb blame, and instead standing squarely in the discomfort of complex, three-dimensional decisions that involve both tools and people.