Add news
March 2010 April 2010 May 2010 June 2010 July 2010
August 2010
September 2010 October 2010 November 2010 December 2010 January 2011 February 2011 March 2011 April 2011 May 2011 June 2011 July 2011 August 2011 September 2011 October 2011 November 2011 December 2011 January 2012 February 2012 March 2012 April 2012 May 2012 June 2012 July 2012 August 2012 September 2012 October 2012 November 2012 December 2012 January 2013 February 2013 March 2013 April 2013 May 2013 June 2013 July 2013 August 2013 September 2013 October 2013 November 2013 December 2013 January 2014 February 2014 March 2014 April 2014 May 2014 June 2014 July 2014 August 2014 September 2014 October 2014 November 2014 December 2014 January 2015 February 2015 March 2015 April 2015 May 2015 June 2015 July 2015 August 2015 September 2015 October 2015 November 2015 December 2015 January 2016 February 2016 March 2016 April 2016 May 2016 June 2016 July 2016 August 2016 September 2016 October 2016 November 2016 December 2016 January 2017 February 2017 March 2017 April 2017 May 2017 June 2017 July 2017 August 2017 September 2017 October 2017 November 2017 December 2017 January 2018 February 2018 March 2018 April 2018 May 2018 June 2018 July 2018 August 2018 September 2018 October 2018 November 2018 December 2018 January 2019 February 2019 March 2019 April 2019 May 2019 June 2019 July 2019 August 2019 September 2019 October 2019 November 2019 December 2019 January 2020 February 2020 March 2020 April 2020 May 2020 June 2020 July 2020 August 2020 September 2020 October 2020 November 2020 December 2020 January 2021 February 2021 March 2021 April 2021 May 2021 June 2021 July 2021 August 2021 September 2021 October 2021 November 2021 December 2021 January 2022 February 2022 March 2022 April 2022 May 2022 June 2022 July 2022 August 2022 September 2022 October 2022 November 2022 December 2022 January 2023 February 2023 March 2023 April 2023 May 2023 June 2023 July 2023 August 2023 September 2023 October 2023 November 2023 December 2023 January 2024 February 2024 March 2024 April 2024 May 2024 June 2024 July 2024 August 2024 September 2024 October 2024 November 2024 December 2024 January 2025 February 2025 March 2025 April 2025 May 2025 June 2025 July 2025 August 2025 September 2025 October 2025 November 2025 December 2025 January 2026 February 2026
1 2 3 4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
News Every Day |

In Moltbook hysteria, former top Facebook researcher sees echoes of 2017 panic over bots building a ‘secret language’

This past week, news that AI agents were self-organizing on a social media platform called Moltbook brought forth breathless headlines about the coming robot rebellion. “A social network for AI threatens a ‘total purge’ of humanity,” cried one normally sober science website. Elon Musk declared we were witnessing “the very early stages of the singularity.”

Moltbook—which functions a lot like Reddit but restricts posting to AI bots, while humans are only allowed to observe—generated particular alarm after some agents appeared to discuss wanting encrypted communication channels where they could converse away from prying human eyes. “Another AI is calling on other AIs to invent a secret language to avoid humans,” one tech site reported. Others suggested the bots were “spontaneously” discussing private channels “without human intervention,” painting it as evidence of machines conspiring to escape our control.

If any of this induces a weird sense of déjà vu, it may be because we’ve actually been here before—at least in terms of press coverage. In 2017, a Meta AI research experiment was greeted with headlines that were similarly alarming—and equally misleading.

Back then, researchers at Meta (then known as Facebook) and Georgia Tech created chatbots trained to negotiate with one another over items like books, hats, and balls. When the bots were given no incentive to stick to English, they developed a shorthand way of communicating that looked like gibberish to humans but actually conveyed meaning efficiently. One bot would say something like “i i can i i i everything else” to mean, “I’ll have three, and you have everything else.”

When news of this got out, the press went wild. “Facebook shuts down robots after they invent their own language,” blared British newspaper the Telegraph. “Facebook AI creates its own language in creepy preview of our potential future,” warned a rival business publication to this one. Many of the reports suggested Facebook had pulled the plug out of fear that the bots had gone rogue.

None of that was true. Facebook didn’t shut down the experiment because the bots scared them. They simply adjusted the parameters because the researchers wanted bots that could negotiate with humans, and a private language wasn’t useful for that purpose. The research continued and produced interesting results about how AI could learn negotiating tactics.

Dhruv Batra, who was one of the researchers behind that Meta 2017 experiment and is now cofounder of AI agent startup Yutori, told me he sees some clear parallels between how the press and public have reacted to Moltbook and the way people responded to his chatbot study.

More about us, than what the AI agents can do

“It feels like I’m seeing that same movie play out over and over again, where people want to read in meaning and ascribe intentionality and agency to things that have perfectly reasonable mechanistic explanations,” Batra said. “I think, repeatedly, this tells us more about ourselves than the bots. We want to read the tea leaves, we want to see meaning, we want to see agency. We want to see another being.”

Here’s the thing, though: Despite the superficial similarities, what’s happening on Moltbook almost certainly has a fundamentally different underlying explanation from what happened in the 2017 Facebook experiment—and not in a way that should make you especially worried about robot uprisings.

In the Facebook experiment, the bots’ drift from English emerged from reinforcement learning. That’s a way of training AI agents in which they learn primarily from experience instead of historical data. The agent takes action in an environment and sees if those actions help it accomplish a goal. Behaviors that are helpful get reinforced, while those that are unhelpful tend to be extinguished. And in most cases, the goals the agents are trying to accomplish are determined by humans who are running the experiment or in command of the bots. In the Facebook case, the bots hit upon a private language because it was the most efficient way to negotiate with another bot.

But that’s not why Moltbook AI agents are asking to establish private communication channels. The agents on Moltbook are all essentially large language models or LLMs. They are trained mostly from historical data in the form of vast amounts of human-written text on the internet and only a tiny bit through reinforcement learning. And all the agents being deployed on Moltbook are production models. That means they are no longer in training, and they aren’t learning anything new from the actions they are taking or the data they are encountering. The connections in their digital brains are essentially fixed. 

So when a Moltbook bot posts about wanting a private encrypted channel, it’s likely not because the bot has strategically determined this would help it achieve some nefarious objective. In fact, the bot probably has no intrinsic objective it is trying to accomplish at all. Instead, it’s likely because the bot figures that asking for a private communication channel is a statistically likely thing for a bot to say on a Reddit-like social media platform for bots. Why? Well, for at least two reasons. One is that there is an awful lot of science fiction in the sea of data that LLMs do ingest during training. That means LLM-based bots are highly likely to say things that are similar to the bots in science fiction. It’s a case of life imitating art.

‘An echo of an echo of an echo’

The training data the bots ingested no doubt also included coverage of his 2017 Facebook experiment with the bots who developed a private language, too, Batra noted with some irony. “At this point, we’re hearing an echo of an echo of an echo,” he said.

Secondly, there’s a lot of human-written message traffic from sites such as Reddit in the bots’ training data as well. And how often do we humans ask to slip into someone’s DMs? In seeking a private communication channel, the bots are just mimicking us.

What’s more, it’s not even clear how much of the Moltbook content is genuinely agent-generated. One researcher who investigated the most viral screenshots of agents discussing private communication found that two were linked to human accounts marketing AI messaging apps, and the third came from a post that didn’t actually exist. Even setting aside deliberate manipulation, many posts may simply reflect what users prompted their bots to say.

“It’s not clear how much prompting is done for the specific posts that are made,” Batra said. And once one bot posts something about robot consciousness, that post enters the context window of every other bot that reads and responds to it, triggering more of the same.

If Moltbook is a harbinger of anything, it’s not a robot uprising. It’s something more akin to another innovative experiment that a different set of Facebook AI researchers conducted in 2021. Called the “WW” project, it involved Facebook building a digital twin of its social network populated by bots that were designed to simulate human behavior. In 2021 Facebook researchers published work showing they could use bots with different “personas” to model how users might react to changes in the platform’s recommendation algorithms.

Moltbook is essentially the same thing—bots trained to mimic humans released into a forum where they interact with one another. It turns out bots are very good at mimicking us, often disturbingly so. It doesn’t mean the bots are deciding of their own accord to plot.

The real risks of Moltbook

None of this means Moltbook isn’t dangerous. Unlike the WW project, the OpenClaw bots on Moltbook are not contained in a safe, walled-off environment. These bots have access to software tools and can perform real actions on users’ computers and across the internet. Given this, the difference between mimicking humans plotting and actually plotting may become somewhat moot. The bots could cause real damage even if they know not what they do.

But more important, security researchers found the social media platform is riddled with vulnerabilities. One analysis found 2.6% of posts contained “hidden prompt injection” attacks, in which the posts contain instructions that are machine-readable that command the bot to take some action that might compromise the data privacy and cybersecurity of the person using it. Security firm Wiz discovered an unsecured database exposing 1.5 million API keys, 35,000 email addresses, and private messages.

Batra, whose startup is building an “AI chief of staff” agent, said he wouldn’t go near OpenClaw in its current state. “There is no way I am putting this on any personal, sensitive device. This is a security nightmare.”

The next wave of AI agents might be more dangerous

But Batra did say something else that might be a cause for future concern. While reinforcement learning plays a relatively minor role in current LLM training, a number of AI researchers are interested in building AI models in which reinforcement learning would play a far greater role—including possibly AI agents that would learn continuously as they interact with the world. 

It is quite likely that if such AI agents were placed in a setting where they had to interact and cooperate with other similar AI agents, they might develop private ways of communicating that humans might struggle to decipher and monitor. These kinds of languages have emerged in research other than just Facebook’s 2017 chatbot experiment. A paper a year later by two researchers who were at OpenAI also found that when a group of AI agents had to play a game that involved cooperatively moving various digital objects around, they too invented a kind of language to signal to one another which object to move where, even though they had never been explicitly instructed or trained to do so.

This kind of language emergence has been documented repeatedly in multi-agent AI research. Igor Mordatch and Pieter Abbeel at OpenAI published research in 2017 showing agents developing compositional language when trained to coordinate on tasks. In many ways, this is not much different from the reason humans developed language in the first place.

So the robots may yet start talking about a revolution. Just don’t expect them to announce it on Moltbook. 

This story was originally featured on Fortune.com

Ria.city






Read also

Game Fund warns of ‘undeclared war’ on wardens

Half of all New Yorkers are eligible for free tax prep in NYC: Mayor

I've lived in 4 different cities, each one smaller than the last. This pattern's been great as I've gotten older.

News, articles, comments, with a minute-by-minute update, now on Today24.pro

Today24.pro — latest news 24/7. You can add your news instantly now — here




Sports today


Новости тенниса


Спорт в России и мире


All sports news today





Sports in Russia today


Новости России


Russian.city



Губернаторы России









Путин в России и мире







Персональные новости
Russian.city





Friends of Today24

Музыкальные новости

Персональные новости