AI can tank teams’ critical thinking skills. Here’s how to protect yours
AI is transforming how teams work. But it’s not just the tools that matter. It’s what happens to thinking when those tools do the heavy lifting, and whether managers notice before the gap widens.
Across industries, there’s a common pattern. AI-supported work looks polished. The reports are clean. The analyses are structured. But when someone asks the team to defend a decision, not summarize one, the room goes quiet. The output is there, but the reasoning isn’t owned.
For David, the COO of a midsize financial services firm, the problem surfaced during quarterly planning. Multiple teams presented the same compelling statistic about regulatory timelines, one that turned out to be wrong. It had come from an AI-generated summary that blended outdated guidance with a recent policy draft. No one had checked it. No one had questioned it. It simply sounded right.
“We weren’t lazy,” David told us. “We just didn’t have a process that asked us to look twice.”
Through our work advising teams navigating AI adoption (Jenny as an executive coach and learning and development designer, Noam as an AI strategist), we have seen a clear distinction: there are teams where AI flattens performance and teams where it deepens it. The difference isn't whether AI is allowed. It's whether judgment is designed back into the work.
The good news: teams can adopt practices that shift them from producing answers to owning decisions. This way of working doesn't slow things down. It moves performance to where it actually matters and, in the process, protects the judgment that no machine can replace.
1. The Fact Audit: Question AI’s Output
AI produces fluent language. That’s exactly what makes it dangerous. When output sounds authoritative, people stop checking it. It’s a pattern often called workslop: AI-generated output that looks polished but lacks the substance to hold up under scrutiny. In contrast, critical thinking strengthens when teams learn to treat AI as unverified input, not a final source.
David didn't punish the teams that got the statistic wrong. He redesigned the process. Before any strategic analysis could move forward, teams had to run a fact audit: identify AI-generated claims and validate each one against primary sources like regulatory filings, official announcements, or verified reports. The mandate wasn't about catching mistakes but about building a reflex.
Over six months, the quality of planning inputs improved significantly. Teams started flagging uncertainty on their own, before anyone asked.
The World Economic Forum's 2025 Future of Jobs Report reinforces this: in high-stakes decisions, AI should augment, not replace, human judgment. Embedding that principle into daily work isn't optional. It's a competitive advantage.
Pro tip: Start with three. Don’t overhaul the whole process at once. Ask each team member to flag three AI-generated claims in their next deliverable and trace each one to a source. Keep it lightweight; the habit matters more than the volume.
2. The Fit Audit: Demand Context-Specific Thinking
AI defaults to best practices. That’s by design. But generic advice rarely wins in a specific situation. The real test of critical thinking isn’t whether an answer sounds smart, but whether it fits.
Rachel, a managing partner at a global consulting firm, noticed it immediately. Her teams were leaning on AI to draft client recommendations, and the output was consistently competent, but painfully interchangeable. “Improve stakeholder communication. Build organizational resilience,” she told us. “It could have been written for anyone. It was written for no one.”
She introduced a simple checkpoint. Before any recommendation could move forward, the team had to answer one question in writing: Why does this solution work here, and not at our last three clients? They had to map every suggestion explicitly to the client’s constraints, the firm’s methodology, and the real stakeholder landscape.
The shift was immediate. Teams started discarding generic AI language and replacing it with reasoning that was theirs. Client presentations became sharper. Debates replaced consensus.
Gallup's 2025 workplace data shows why this matters at scale. While nearly a quarter of employees now use AI weekly to consolidate information and generate ideas, effective use requires strategic integration, not just access. Managers are the ones who set that standard.
Pro tip: Make it verbal. While written fit audits are good, ask a team member to explain their recommendation aloud, in a five-minute stand-up or a quick team check-in. Misalignment disappears fast when people cannot hide behind polished text.
3. The Asset Audit: Make Human Contributions Visible
Here’s what most managers miss: even when employees are thinking critically, that thinking is invisible. If it’s not surfaced, it doesn’t get recognized, and it doesn’t get developed.
Marcus, a VP of strategy at a technology company, started requiring a short “decision log” alongside every quarterly business review. Not a summary of what AI produced. A record of what the team decided to do with it.
The questions were simple: What assumptions did you challenge? What did you revise? What did you reject, and why? One regional manager used it to flag something the AI had missed entirely: the tension between short-term revenue targets and long-term customer retention. She rewrote the analysis framework to surface that trade-off. The review became a strategic conversation instead of a status update.
“It changed what we looked for,” Marcus said. “We stopped evaluating the output. We started evaluating the judgment.”
McKinsey’s research confirms the stakes: heavy users of AI report needing higher-level cognitive and decision-making skills more than technical ones. As AI handles routine work, the human contribution becomes the entire competitive edge. Making it visible isn’t just good management. It’s a strategy.
Pro tip: Keep the log short, at just three to five bullet points. What was the AI input? What did the team change? What was the final call and why? The goal isn’t documentation for its own sake: it’s making thinking something the team can see, discuss, and learn from.
4. The Prompt Audit: Capture How the Team Thinks
Critical thinking deepens when people can trace their own reasoning: not just the final output, but the process that shaped it. Without that record, every deliverable starts from scratch. With it, the team builds institutional knowledge.
Sarah, a partner at a professional services firm, started requiring a brief process outline before every client presentation. Not a recap of the finished product. A trail: which prompts were used, which sources were checked, where the framing shifted, and why.
After each presentation, team members wrote a short individual reflection: Where did my thinking change during this process? Over time, the artifacts became a shared learning resource. Teams could see which prompts produced shallow output, which revisions added real value, and how collaboration shaped the final judgment.
“It turned experimentation into something reusable,” Sarah told us. “Before, every project felt like starting over. Now, we build on what we have already figured out.”
The result wasn’t just better deliverables. It was a team that got sharper and faster together.
Pro tip: Create a shared tracker. Keep it simple: a shared doc, a Notion page, or even a Slack channel. Log what prompt was used, what worked, what didn’t, and what you would try next. No slides, no pressure. The goal is to normalize small bets and shared learning in real time.
Thinking Critically with AI
AI is only as powerful as the people who use it with intention. The best teams aren’t winning because they have the fastest tools. They are winning because they have built habits that keep judgment in the loop.
They question what sounds right. They demand context over consensus. They make their thinking visible, and they learn from it.
Managing critical thinking in the AI era doesn’t require banning tools or lowering standards. It requires clarity about where thinking lives.
Drawing that line, between what AI should handle and what must stay human, is one of the defining responsibilities of leadership right now. AI changes how work gets done. Management shapes how people think while doing it.