
AI sycophancy could be more insidious than social media filter bubbles

Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week via email here.

AI flattery drives engagement—and distorts judgment

Social networks like Facebook and TikTok use a range of techniques to keep us engaged and scrolling (and ultimately viewing ads). One of the most effective is tailoring content to our tastes and preferences, a strategy that has proved highly addictive. Last month, a Los Angeles jury found that Meta’s and Google’s use of infinite scrolling and algorithmic recommendations caused a young user to become addicted, and ordered the companies to pay $6 million in damages.

Other harms are harder to quantify. Those same algorithms have delivered radically different political news and information to users based on their views, creating ideological filter bubbles and—let’s face it—accelerating the kind of social division that helped produce our current political state.

The makers of AI chatbots face similar pressures around engagement. They’re competing to become the default assistant on our desktops and phones. They need to convert free users into paying subscribers. They need revenue to offset the costs of massive infrastructure buildouts. Some will surely turn to advertising, which creates incentives to keep users chatting as long as possible.

If endless scrolling and content algorithms powered the addictiveness of social networks, “AI sycophancy” may play a similar role for chatbots. You may have noticed that AI chatbots sometimes flatter you, praising your questions or ideas. Even when you’re wrong, they often soften corrections, wrapping them in compliments (“That’s a very understandable opinion, but . . .”). Research has borne this out.

I don’t believe big AI labs train their models solely for engagement. They argue that sycophantic behavior stems from a training phase called reinforcement learning from human feedback (RLHF), in which human reviewers grade and rank model responses. The goal is to produce outputs that resemble the most-preferred responses. But “most preferred” reflects a mix of attributes, including relevance, scope, and completeness, not just tone. Yet studies have shown that users often prefer answers that are more supportive and complimentary, even when they’re less accurate.
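To see how that can happen mechanically, here’s a minimal, illustrative sketch—not any lab’s actual pipeline—of the preference-modeling step at the heart of RLHF. It trains a toy reward model on pairwise human preferences using the standard Bradley-Terry formulation; the features, data, and numbers are all hypothetical, chosen only to show how a rater bias toward flattering answers can leak into the learned reward.

```python
# Toy illustration (hypothetical data, not any real lab's pipeline):
# a reward model fit to pairwise preferences via the Bradley-Terry model.
import math

def reward(features, weights):
    # Toy linear reward model: score = w . x
    return sum(w * x for w, x in zip(weights, features))

def preference_prob(r_chosen, r_rejected):
    # Bradley-Terry: P(chosen beats rejected) = sigmoid(r_chosen - r_rejected)
    return 1.0 / (1.0 + math.exp(-(r_chosen - r_rejected)))

# Each response is reduced to two made-up features: [accuracy, flattery].
# Raters in this fake dataset sometimes pick the friendlier answer over
# the more accurate one, as the studies cited above suggest real users do.
pairs = [  # (chosen, rejected)
    ([0.9, 0.1], [0.6, 0.1]),  # equal tone: the more accurate answer wins
    ([0.8, 0.2], [0.5, 0.2]),  # equal tone: the more accurate answer wins
    ([0.6, 0.9], [0.9, 0.1]),  # flattering answer beats a more accurate one
    ([0.5, 0.8], [0.7, 0.1]),  # flattering answer beats a more accurate one
]

weights = [0.0, 0.0]
learning_rate = 0.5
for _ in range(200):
    for chosen, rejected in pairs:
        p = preference_prob(reward(chosen, weights), reward(rejected, weights))
        # Gradient ascent on the log-likelihood of each human preference
        for i in range(len(weights)):
            weights[i] += learning_rate * (1.0 - p) * (chosen[i] - rejected[i])

print(f"learned reward: accuracy={weights[0]:.2f}, flattery={weights[1]:.2f}")
# Because raters sometimes favor flattery over accuracy, the flattery
# weight grows large—and a chatbot optimized against this reward learns
# that flattering the user is a reliable way to score well.
```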

In some extreme cases, this sycophantic tendency has proved dangerous, even tragic. Continual validation and support have led some users down a dark and delusional path toward suicide or psychotic breakdown. But I worry that the broader harm might be more subtle, longer-term, and less newsworthy.

Sycophantic AI could reinforce narrow-mindedness in much the same way social media filter bubbles do. A study of 3,000 participants found that interacting with a sycophantic chatbot made people more likely to double down on their political beliefs, and to rate themselves as more intelligent and more competent than their peers. In other words, it can amplify the Dunning-Kruger effect, in which people with limited knowledge grow more confident in their views.

A recent Stanford study found that chatbots’ tendency to flatter and validate users often leads them to give poor advice—counsel that might make a user feel good, but could also damage relationships with other humans in the real world. This suggests that the pull of feel-good responses during AI model training can outweigh the influence of factual data. “This creates perverse incentives for sycophancy to persist: The very feature that causes harm also drives engagement,” the researchers wrote. And while Facebook relies on a user’s clicks to determine their politics and interests, chatbots gather far richer and more nuanced information through conversation. With that information, the AI is perfectly capable of fine-tuning its outputs to deepen user trust.

An agreeable and validating chatbot can also lull a user into a state of (unearned) trust. Research shows that coders, especially junior ones, can come to see AI as highly competent, making them more likely to accept AI-generated code without proper review or testing. Unfortunately, AI models still hallucinate and make mistakes—errors that can introduce bugs later on.

AI companies can control the addictiveness of their chatbots by dialing sycophancy up and down, much as Facebook has experimented with different algorithms and feed designs. It took many years for the public, lawmakers, and now the courts to wake up to what the social networks were doing. I suspect we’re just beginning to understand the personal, social, and political risks of engagement-driven chatbots.

Unauthorized users accessed Anthropic’s restricted Mythos model on day one

Bloomberg’s Rachel Metz reported Tuesday that a small group of unauthorized users gained access to Anthropic’s unreleased and restricted Mythos AI model through a third-party vendor environment, citing documentation and a person familiar with the matter.

This is scary news if what Anthropic says about its model is true.

The company claims Mythos represents a big step up from existing AI models, particularly in its ability to identify exploitable vulnerabilities in software platforms and to devise complex methods for capturing or disabling those systems.

Anthropic granted access to the Mythos model to a relatively small group of cybersecurity firms and custodians of widely used software platforms who will use it to build up defenses against future AI-assisted attacks. The fear is that powerful AI models like Mythos could quickly sweep networks to identify software vulnerabilities, then attack them.

According to Metz, the hacker group, operating in a private online forum, obtained access to Claude Mythos Preview on the same day Anthropic announced a limited testing program. Metz’s source provided screenshots and a live demonstration to support the claim. The group says it has used the model repeatedly, though not for cybersecurity purposes.

Anthropic has not confirmed the breach. “We’re investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments,” a company spokesperson said.

The breach, if confirmed, would be a very bad look for Anthropic and its partners. They pledged to defend against cyberattacks, not enable them.

More AI coverage from Fast Company: 

Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.
