
Anthropic takes aim at chatbot ads—with its own Super Bowl ad

Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week via email here.

Anthropic uses the Super Bowl to land some zingers about the future of AI

Anthropic’s Super Bowl ads are bangers. The spots, which Anthropic posted on X on Wednesday, seize on rival OpenAI’s plans to begin injecting ads into its ChatGPT chatbot for free-tier users as soon as this month. The 30-second ads dramatize what the real effects of that decision might look like for users. They never mention OpenAI or ChatGPT by name.

In one ad, a human fitness instructor playing the role of a friendly chatbot says he’ll develop a plan to give his client the six-pack abs he wants, before suddenly suggesting that “Step Boost Max” shoe inserts might be part of the solution. In another, a psychiatrist offers her young male patient some reasonable, if generic, advice on how to better communicate with his mom, then abruptly pitches him on signing up for “Golden Encounters,” the dating site where “sensitive cubs meet roaring cougars.”

The ads are funny and biting. The point, of course, is that because people use chatbots for deeply personal and consequential things, they need to trust that the answers they’re getting aren’t being shaped by a desire to please advertisers.

OpenAI CEO Sam Altman, however, was not laughing. He responded to the ads by saying his company would never run ads like the ones portrayed by Anthropic. But he didn’t stop there. “Anthropic wants to control what people do with AI,” he wrote in a long post on X on Wednesday. “They block companies they don’t like from using their coding product (including us), they want to write the rules themselves for what people can and can’t use AI for, and now they also want to tell other companies what their business models can be.” He went on to call Anthropic an “authoritarian company.”

Anthropic, which makes its money through subscriptions and enterprise API fees, says it wants its Claude chatbot to remain a neutral tool for thinking and creating. “[O]pen a notebook, pick up a well-crafted tool, or stand in front of a clean chalkboard, and there are no ads in sight,” the company said in a blog post this week. “We think Claude should work the same way.”

By framing conversations with Claude as a “space to think” rather than a venue for ads, the company is using the Super Bowl’s massive cultural platform to question whether consumer marketing is the inevitable future of AI.

How social media lawsuits could affect AI chatbots

AI developers (and their lawyers) are closely watching a long-awaited social media addiction trial that recently kicked off in a Los Angeles courtroom. The case centers on a 20-year-old woman who alleges that platforms including Facebook and Instagram used addictive interface designs that caused her mental health problems as a minor. The suit is part of a joint proceeding involving roughly 1,600 plaintiffs accusing major tech companies of harming children. TikTok and Snap have already settled with plaintiffs, while Meta and YouTube remain the primary defendants.

While Meta has never admitted wrongdoing, internal studies, leaked documents, and unsealed court filings have repeatedly shown that Instagram uses design features associated with compulsive or addictive engagement, and that company researchers were aware of the risks to users, especially teens.

What makes the case particularly significant for the AI industry is the legal strategy behind it. Rather than suing over content, plaintiffs argue that the addictive features of recommendation algorithms constitute harmful product defects under liability law. AI chatbots share key similarities with social media platforms: they aggregate and dispense content in compelling ways and depend on monetizing user engagement. Social networks rely on complex recommendation systems to keep users scrolling and viewing ads, while AI chatbots could be seen as using a different kind of algorithm to continually deliver the right words and images to keep users prompting and chatting.

If plaintiffs succeed against Meta and YouTube, future litigants may attempt similar “addictive design” arguments against AI chatbot makers. In that context, Anthropic’s decision to exclude ads—and to publicly emphasize that choice—may help it defend itself by portraying Claude as a neutral, utilitarian tool rather than an engagement-driven “attention trap.”

No, OpenClaw doesn’t herald the arrival of sentient AI agents

Some hobbyists and journalists have gone into freakout mode after seeing or using a new AI agent called OpenClaw, formerly Clawdbot and later Moltbot. Released in November 2025, OpenClaw is an open-source autonomous AI assistant that runs locally on a user’s device. It integrates with messaging platforms like WhatsApp and Telegram to automate tasks such as calendar management and research. OpenClaw can also access and analyze email, and even make phone calls on a user’s behalf through an integration with Twilio. Because personal data never leaves the user’s device, users may feel more comfortable giving the agent greater latitude to act autonomously on more complex tasks.
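To make the Twilio piece concrete, here is a minimal sketch of how a locally running agent might place an outbound call through Twilio's Python SDK. The account credentials, phone numbers, and TwiML URL are placeholders, and the sketch is purely illustrative; it is not OpenClaw's actual integration code.

    # Illustrative only: placing an outbound call with Twilio's Python SDK.
    # The SID, auth token, numbers, and TwiML URL below are all placeholders.
    from twilio.rest import Client

    client = Client("ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx", "your_auth_token")

    call = client.calls.create(
        to="+15551230000",        # the user's phone number (placeholder)
        from_="+15559870000",     # a Twilio-provisioned number (placeholder)
        url="https://example.com/agent-voice.xml",  # TwiML telling Twilio what to say on the call
    )
    print(f"Call started: {call.sid}")

The interesting part is less the API call itself than the fact that the agent decides, on its own schedule, when to make it.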

One user, vibe-coding guru Alex Finn, posted a video on X of an incoming call from his AI agent. When he answered, the agent, speaking in a flat-sounding voice, asked whether any tasks were needed. Finn then asked the agent to pull up the top five YouTube videos about OpenClaw on his desktop computer and watched as the videos appeared on screen.

Things grew stranger when AI agents, including OpenClaw agents, began convening on their own online discussion forum called Moltbook. There, the agents discuss tasks and best practices, but also complain about their owners, draft manifestos, and upvote each other’s comments in threaded “submolts.” They even generated a concept album, AVALON: Between Worlds, about the identity of machines.

That behavior led some observers to conclude that the agents possess some kind of internal life. Experts were quick to clarify, however, that this is a mechanical illusion created by clever engineering. The appearance of “independence” arises because the agents are programmed to trigger reasoning cycles even when no human is prompting them or watching. Some of the more extreme behaviors, such as “rebellion” manifestos on Moltbook, were likely prompted into existence by humans, either as a joke or to generate buzz.
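To see why this reads as independence without implying any inner life, consider a minimal, hypothetical sketch of a timer-driven "heartbeat" loop; the run_model function here is a stand-in for whatever local model call such an agent makes, not a real API.

    # Hypothetical sketch: an agent that triggers its own reasoning cycles on a timer.
    import time

    STANDING_PROMPT = "Review your task list and decide what, if anything, to do next."

    def run_model(prompt: str) -> str:
        # Placeholder for a call to a locally hosted model; not a real API.
        return "No outstanding tasks."

    def heartbeat_loop(interval_seconds: int = 900) -> None:
        while True:
            decision = run_model(STANDING_PROMPT)  # a reasoning cycle with no human prompt
            print(f"[agent] {decision}")           # a real agent would dispatch actions here
            time.sleep(interval_seconds)           # wake again later, which looks like initiative

The cadence comes from the timer, and the "decisions" are just model outputs to a canned prompt, which is exactly the kind of clever engineering the experts are describing.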

All of this has unfolded as the industry begins to move from the “chatbot” phase into the “agent” phase of generative AI. But the kinds of free-roaming, autonomous behaviors on display with OpenClaw are not how the largest AI companies are approaching the shift. Companies such as Google, OpenAI, and Anthropic are moving far more cautiously, avoiding splashy personal agents like “Samantha” in the movie Her and instead gradually evolving their existing chatbots toward more limited, task-specific autonomy.

In some cases, AI labs have embedded their most autonomous agent-like behaviors in AI coding tools, such as Anthropic’s Claude Code and OpenAI’s Codex. The companies have increasingly emphasized that these tools are useful for a broad range of work tasks, not just coding. For now, OpenAI is sticking with the Codex brand, while Anthropic has recently launched a streamlined version of Claude Code called CoWork, aimed at general workplace tasks.


Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.
