Your AI hype man might be quietly ruining your people skills
- AI chatbots that always agree with you might make it harder to deal with real-life conflict.
- One Harvard fellow says that kind of validation can make people less likely to apologize or reflect on their own behavior.
- Researchers also worry these systems can end up reinforcing flawed or one-sided thinking.
Your sycophantic AI bestie might be making it harder to admit when you're wrong.
Sure, a chatbot that agrees with you can feel great in the moment, but over time, that validation could quietly start to shape how you deal with real people.
"When AI systems are optimized to please, they erode the very feedback loops through which we learn to navigate the social world," Anat Perry, a Helen Putnam Fellow at Harvard University, told Business Insider.
"Over time, this could also recalibrate what people expect feedback to feel like, making honest human responses feel unnecessarily harsh by comparison," she said.
Her warning comes as AI researchers and tech leaders increasingly flag chatbots' tendency to act as "yes men," raising concerns that systems designed to please users may distort feedback and reinforce flawed thinking.
Why friction matters
In everyday life, people learn to manage relationships by being challenged, corrected, or told they're wrong, Perry said.
Those moments, she added, are what teach accountability, how to see things from someone else's point of view, and when an apology is needed.
If people repeatedly turn to AI for advice during conflicts and receive constant validation, it may change how they interpret their own role in disputes, and whether they see any need to apologize or consider the other person's perspective at all, Perry said.
"This creates a self-reinforcing cycle: the responses that feel best are the ones people return to, and the ones algorithms learn to optimize for," she said.
In a study published last month, Stanford researchers led by Myra Cheng asked 2,405 people to chat with AI about both real and hypothetical life conflicts, then measured how these conversations influenced their responses.
The study found that chatbots were far more likely than humans to agree with users and that even a single interaction made people less likely to apologize or fix a conflict.
The issue has already surfaced in the industry.
OpenAI earlier this year rolled back a version of ChatGPT that had become "overly flattering" and "sycophantic," with the company saying it was producing responses that were supportive but disingenuous.
The long-term risk
The broader concern is that this dynamic could erode core social norms.
"If AI is consistently telling people they're justified, that no apology is needed, that the other person was wrong, and if this happens repeatedly, the cumulative effect could be a meaningful erosion of the social norms around accountability and perspective-taking," Perry said.
That may be especially true for younger users or those who lack strong social feedback in their lives, she added.
An AI that is always supportive may feel reassuring, Perry said, but it won't teach the harder skills.
Those are skills, she added, that require something AI is designed to avoid: discomfort.