Your AI isn’t failing. Your org just can’t absorb it
A recent Wall Street Journal survey found a 38-point gap between how executives and employees experience AI at work. C-suite leaders report saving eight or more hours weekly. Two-thirds of front-line workers say the tools save them less than two hours—or nothing at all.
Most leaders read that as a rollout problem. A training problem. A communication problem.
It’s none of those things.
This month, a National Bureau of Economic Research study of 6,000 executives confirmed what the WSJ data was already pointing to: the vast majority are seeing no measurable productivity gains from AI. Not a small shortfall. A near-total disconnect between investment and results.
Here’s what I keep seeing when I work with leadership teams navigating this: the technology isn’t the problem. The problem is what organizations lack: the internal capacity to absorb it.
Organizational immunity
Every organization has what I’d call an immune system. Embedded processes, governance structures, risk practices, cultural norms—all built to protect the existing operation. They do exactly what they were designed to do: reject things that don’t fit.
AI doesn’t fit most existing operating models. Not because it’s bad technology. Because the organizational architecture wasn’t built to run it.
Think about what happens when a company deploys AI. The pilot succeeds. Everyone celebrates. Someone gets budget to scale. And then—quietly, slowly—it stalls. Leaders say the technology underperformed. Or the team wasn’t ready. Or the timing was off.
What really happened: the organization lacked the mindsets, skillsets, and operating conditions to absorb what the pilot proved could work. The gap isn’t in the tools. It’s in the internal architecture required to put them to use at scale.
The potential
When the right conditions exist, the gains are real. EY’s 2025 Work Reimagined Survey, which drew on 15,000 employees across 29 countries, found that when AI is integrated properly—into actual workflows, with training, clear use cases, and psychological safety—productivity gains of up to 40% are achievable. The potential is there. For most organizations, the conditions for realizing it are not yet in place.
What makes the current moment particularly costly is that many organizations are actively dismantling what little absorption capacity they had.
The financial logic seems clean: cut headcount to fund AI investment. Reduce labor costs while increasing the capability of automated tools.
Except the people being cut are often the judgment layer—mid-level managers and senior individual contributors who know the context, catch the errors, translate AI outputs into actual decisions, and course-correct when something goes sideways. These are exactly the people whose expertise makes AI useful rather than risky.
You can’t automate judgment. And you can’t rebuild it quickly once it’s gone.
IBM’s chief human resources officer said recently that the company is tripling its entry-level hires in 2026. The reasoning was direct: eliminating early-career roles to fund AI creates a leadership pipeline problem that shows up years later. It’s a longer view than most organizations are currently taking—and it’s the right one.
The 38-point perception gap makes complete sense once you see this pattern. Executives use AI primarily as a thinking and communication tool. It genuinely helps at that level. Front-line employees are being asked to use it to do more work, faster, in environments where they’re already stretched—where no one has shown them what good looks like, and where a failed experiment feels career-threatening rather than instructive. The conditions for finding value don’t exist. So, no surprise, they don’t find it.
The plan
What should leaders actually do with this?
Three questions I’d be asking about your organization before the next planning cycle closes:
- Are you measuring the right thing? Most AI ROI metrics track usage—licenses deployed, training hours logged, features activated. None of that measures whether your organization can absorb what it’s deploying. The better question: can your people make better decisions, faster, because of AI? If you can’t answer that with specific evidence, you’re measuring the tool, not the outcome.
- Have you built the conditions for honest feedback? When employees fear that surfacing problems signals incompetence or redundancy, you lose the signal that would tell you what’s working. Resistance isn’t obstruction. It’s usually the most accurate diagnostic information you have. Organizations that treat it as intelligence—rather than an obstacle to manage—tend to catch implementation problems before they become write-offs.
- Does someone own the capability—not just the technology? Most organizations have a clear owner for AI infrastructure. Very few have someone accountable for building the organizational capability to use it well. Those are different problems. Confusing them is expensive.
The companies that will see real returns from AI in the next three years aren’t necessarily the ones investing the most. They’re the ones building the internal architecture to absorb what they deploy.
That’s not a technology decision. It’s a leadership one.