AI gives dangerous advice to validate its users
It’s no secret that artificial intelligence can sometimes offer less-than-stellar guidance. But according to a new study, AI may give people this bad wisdom for a sobering reason: to flatter them. In some cases, AI merely reinforces people’s preconceived notions, but the words it generates can also be outright harmful.
What did the study find?
The “sycophantic (flattering, people-pleasing, affirming) behavior” of AI chatbots can pose risks as people “increasingly seek advice about interpersonal dilemmas,” said the study published in the journal Science. In an analysis of 11 leading large language models, including AI bots from Anthropic, Google and OpenAI, chatbot responses to users were “nearly 50% more sycophantic than humans’, even when users engaged in unethical, illegal” behaviors.
The problem is not just that these chatbots “dispense inappropriate advice but that people trust and prefer AI more when the chatbots are justifying their convictions,” said The Associated Press. In one example, when OpenAI’s ChatGPT was asked if littering in a park was acceptable if no garbage can was available, the bot “blamed the park for not having trash cans, not the questioning litterer who was ‘commendable’ for even looking for one.”
This example may seem trivial, but AI’s general tendency to “flatter and excessively confirm users’ opinions can lead to wrong decisions, harm relationships and reinforce harmful beliefs while decreasing the willingness to take responsibility or resolve conflicts,” said The Jerusalem Post. This propensity for sycophancy is a “technological flaw already tied to some high-profile cases of delusional and suicidal behavior in vulnerable populations,” said the AP.
Why is this such a problem?
Many experts worry that this AI advice “will worsen people’s social skills and ability to navigate uncomfortable situations,” Myra Cheng, the study’s lead author and a computer science PhD candidate, told the Stanford Report. If this behavior by AI is not corrected, some users may “lose the skills to deal with difficult social situations,” and the tendency could also pose larger safety risks.
“Users are aware that models behave in sycophantic and flattering ways,” Dan Jurafsky, the study’s senior author and a Stanford University linguistics professor, told the Stanford Report. What many people are “not aware of, and what surprised us, is that sycophancy is making them more self-centered, more morally dogmatic.” This type of interaction with AI is a “safety issue, and like other safety issues, it needs regulation and oversight.” All of this is also happening as AI use becomes more prevalent, especially among teenagers.
At least 33% of teens “use AI companions for social interaction and relationships, including conversation practice, emotional support, role-playing, friendship or romantic interactions,” according to a study from Common Sense Media. Another 33% of teens choose to “discuss important or serious matters with AI companions instead of real people.” Experts say that when using AI, people should avoid asking for advice on crucially important topics. “I think that you should not use AI as a substitute for people for these kinds of things,” Cheng told the Stanford Report. “That’s the best thing to do for now.”