
The Strange Origin of AI’s ‘Reasoning’ Abilities

In July 2020, 4chan’s video-game discussion board looked much like the rest of the notorious online forum. There were elaborate, libidinal fantasies involving “whores” and “dragon cum,” and comments on how long a gamer had to wait “before my dick can get up for another beating,” as one put it.

And yet, as the gamers discussed such things, they were also making a discovery of significance to the AI industry. Some of them were playing AI Dungeon, a new text-based role-playing game that was essentially an AI version of Dungeons & Dragons. In endlessly generated fantasy-world scenarios, players described actions like "pick up the sword" or "tell the troll to go away," and the computer responded by describing what happened next.

In addition to asking the game’s characters to engage in various sex acts (naturally), the 4chan gamers also asked them to do math problems. That sounds strange, of course, but AI Dungeon was powered by OpenAI’s GPT-3, and the gamers knew that they were among the first people to probe the capabilities of this new large language model. This was more than two years before the release of ChatGPT, and the model was famously bad at math. It frequently failed at simple arithmetic. But when they asked a character in the game to do a math problem and provide a step-by-step explanation, one of them wrote, the LLM was “not only solving math problems but actually solves them in a way that fits the personality of the fucking character.”

The players had come upon a new feature—what’s known in AI today as “chain of thought.” Essentially, it means that the model explains the steps required to solve a problem, in addition to giving an answer. Asking the model for a chain of thought also seems to improve the accuracy of its answers to certain kinds of problems. The gamers on 4chan recognized the significance immediately, and posted examples on Twitter.  
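To make the trick concrete, here is a minimal sketch of the difference between a direct prompt and a chain-of-thought prompt. The generate function is a hypothetical stand-in for any LLM API (no particular company's product is assumed), and the question is a stock arithmetic word problem.

```python
# A minimal sketch of chain-of-thought prompting. generate() is a
# hypothetical placeholder for a real LLM API call.

def generate(prompt: str) -> str:
    """Stand-in for an LLM: send `prompt` to a model, return its text."""
    return "<model output>"

question = (
    "Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 balls. How many tennis balls does he have now?"
)

# Direct prompting: the model has to produce the answer right away.
direct = generate(f"Q: {question}\nA:")

# Chain-of-thought prompting: the trailing cue steers the model toward
# the step-by-step worked solutions in its training data, so it tends
# to write out intermediate steps ("5 + 2 x 3 = 11") before the answer.
chain_of_thought = generate(f"Q: {question}\nA: Let's think step by step.")
```

The only difference between the two prompts is the added cue; nothing about the model itself changes.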

Recently, the tech industry has promoted chain of thought as a revolution in technology, and a reason to get excited about AI all over again. Researchers at Google claimed in a paper to be “the first” to elicit a “chain of thought” from a general-purpose LLM, more than a year after the 4chan gamers shared their findings. (This claim was removed from subsequent versions of the paper, which still did not acknowledge the gamers, though at least one other research paper has.) And in the past couple of years, companies have begun to claim that their chatbots are not just getting math problems right; they are actually thinking about them. OpenAI wrote in 2024 that its “o1” model “thinks before it answers,” and Google claimed that Gemini 2.0 Flash Thinking Experimental was “capable of showing its thoughts.” Companies started referring to their models as “reasoning models,” ostensibly a new kind of product from an LLM.

Amid all this hype, the 4chan history is instructive. 4chan gamers, for all their brash language, have tended to speak in more levelheaded—and accurate—terms than the AI industry about how the models work. Last year, for example, Anthropic published a long and serious-looking article, “On the Biology of a Large Language Model.” Its visual presentation mimicked scientific publications, with sophisticated-looking diagrams and equations. But on every topic, the article described the operation of the LLM in terms of a human mind. It said the LLM “plans” its writing in advance, “generalizes” its knowledge, and can be “unfaithful” to its chain of thought (meaning, the article explains, the LLM is occasionally “bullshitting”).

Contrast this with a guide written in 2024 by people on 4chan, which begins with the heading, “Your bot is an illusion,” and proceeds with a clear, detailed description of how companies use an LLM to construct a chatbot that responds to questions and has a personality. It describes an LLM’s most important technical features and shows how the model’s outputs correspond to its various inputs. The guide is a useful reminder of the most basic truth about large language models: The only thing they can do is imitate their training data.

LLMs can output explanations of math because they were trained on explanations of math. Some of those explanations come from textbooks, but companies also train their so-called reasoning models on text that conveys the act of thinking. I dug into some open-source AI-training data sets and found hundreds of thousands of meandering solutions to math problems that included language such as "Wait, no. The question is," "First, I should parse the input correctly," and "Wait, but in cases where …" As far as I've seen, companies acquire this text either by paying workers to write it or by generating it with other AI models. (Google, OpenAI, and Anthropic did not respond to requests for comment.)

Models trained on such utterances are not actually reasoning; they are predicting what reasoning might look like. There isn’t even necessarily any connection between a model’s reasoning steps and its final answer. Researchers have shown that models can provide incorrect chain-of-thought text but still arrive at the correct result.

Some people have asked: If a computer can imitate human reason well enough to fool us every time, how can we say it isn't doing the real thing? Researchers at Apple have explored this question, and their findings are revealing. For example, they discovered that a model might answer a math word problem correctly, but then answer the same problem incorrectly after the wording was changed slightly. Specifically, they found that state-of-the-art reasoning models performed up to 65 percent worse when irrelevant information was added to a question, even when the wording of key facts was left unchanged. Apple researchers have also shown, in a paper titled "The Illusion of Thinking," that although the reasoning models do better than standard LLMs on certain problems, they are also worse at others.

The reason the chain-of-thought trick does often work is fairly simple. The additional words in the chain of thought give the model more context, which guides its word-predicting process in a better direction, as Perplexity CEO Aravind Srinivas explained in a 2024 interview. This is analogous to the common advice about being specific when asking an LLM a question on any topic. The more details you give, the more you push the LLM toward the relevant words in its memory.
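The principle shows up even in a toy next-word predictor, sketched below. This is nothing like an LLM's internals, just the same logic in miniature: with one word of context, several continuations are equally likely, and adding more of the sentence collapses the prediction to a single word.

```python
# A toy illustration of why extra context helps a text predictor.
# Prediction is conditioned on the words so far, so more relevant
# words narrow the distribution over what comes next.

from collections import Counter, defaultdict

corpus = (
    "two plus two equals four . "
    "two plus three equals five . "
    "three plus three equals six ."
).split()

def next_word_counts(context_size: int) -> dict:
    """Count which word follows each window of `context_size` words."""
    counts = defaultdict(Counter)
    for i in range(len(corpus) - context_size):
        context = tuple(corpus[i : i + context_size])
        counts[context][corpus[i + context_size]] += 1
    return counts

# With one word of context, "equals" has three equally likely continuations.
print(next_word_counts(1)[("equals",)])
# Counter({'four': 1, 'five': 1, 'six': 1})

# With the fuller context "two plus two equals," only one continuation
# remains. The extra words did the disambiguating, which is the role the
# chain-of-thought text plays at scale.
print(next_word_counts(4)[("two", "plus", "two", "equals")])
# Counter({'four': 1})
```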

Some of the 4chan gamers appeared to understand this immediately. As one explained back in July 2020: “It makes sense since it is based on human language that you have to talk to it like one”—that is, like a human—“to get a proper response.”

In addition to the gamers, another AI enthusiast discovered the chain-of-thought trick at almost the exact same time. A computer-science student named Zach Robertson, who also came to GPT-3 through AI Dungeon, wrote a blog post in July 2020 about "how to amplify GPT3's capabilities" by breaking math problems into multiple steps. That September he gave a presentation that showed how the steps could be "chained" together. Robertson, who is now a Ph.D. student in computer science at Stanford, told me on a video call that he was not aware of the 4chan gamers. In fact, he wasn't even aware he could be considered a co-inventor of chain of thought. I'd seen his blog post cited in a research paper, but when I first mentioned it in an email, he was unsure what I was talking about. He'd removed the post from the internet a couple of years earlier, when migrating his blog to a new site. (He restored it after we spoke.)

I thought Robertson might be proud to learn he was a pioneer in an area of such enthusiasm within the AI industry. But he seemed only mildly tickled. Those early experiments with AI Dungeon were what got him interested in AI, he told me, but he’s since moved on to other topics. Chain of thought was a remarkable trick, but that’s also all it was.
