Silicon Valley Is in a Frenzy Over Bots That Build Themselves

Late last month, a large crowd gathered in downtown San Francisco to demand that the AI industry stop developing more powerful bots. Holding signs and banners reading “Stop the AI Race” and “Don’t Build Skynet,” the protesters marched through the city and gave speeches outside the offices of Anthropic, OpenAI, and xAI. The crowd demanded that these companies halt efforts to create superintelligent machines—and, in particular, AI models that can develop future AI models. Such a technology, attendees said, could extinguish all human life.

At AI protests and happy hours, inside start-ups and major companies, the tech world is in a frenzy over the same thing: computers that make themselves smarter. Over the past year, the top AI companies have taken to loudly bragging about internal efforts to automate their own research. OpenAI recently released a new model it described as “instrumental in creating itself.” Within the next six months, the company aims to debut what it has described as an “intern-level AI research assistant.” Meanwhile, Anthropic says that as much as 90 percent of the company’s code is already written by Claude.

“We are starting to see AI progress feed back on itself,” Nick Bostrom, an influential Swedish philosopher who studies AI risk, told us. Within Silicon Valley, many insiders believe that we are teetering on the precipice of a world in which AI can rapidly improve its own capabilities. Instead of waiting for months between new machine-learning breakthroughs, we might wait weeks. Imagine AI advancing faster and faster.

The idea of self-improving bots is nothing new. When the statistician I. J. Good first introduced the concept of recursive self-improvement in the 1960s, he wrote that machines capable of training their own, even more capable successors would be “the last invention” society ever needed to make. But just a few years ago, any notion of actually making such AI models was on the back burner. When ChatGPT couldn’t reliably add and subtract, let alone search the web, the notion that AI programs would soon be able to do world-class machine-learning research seemed laughable. Even as tech companies made claims about the imminent arrival of “artificial general intelligence,” the capabilities needed for a bot to accelerate or even direct AI research seemed to exceed those of AGI.

[Read: Do you feel the AGI yet?]

Now, as AI models have become significantly better at coding, Silicon Valley has become hooked on the idea of self-improving machines. AI research involves a lot of grunt work—curating large data sets, running repeated experiments—that can be made more efficient with the help of coding bots. Dario Amodei, Anthropic’s CEO, has estimated that coding tools speed up his company’s overall workflows by 15 to 20 percent.

But the information that top AI firms share about how, and to what extent, they have automated internal research is patchy at best. When Anthropic says that Claude writes almost all of its code, we don’t know how much human supervision was required. (An Anthropic spokesperson declined a request for an interview but pointed us to a recent podcast in which Jack Clark, the company’s head of policy, said one of his biggest priorities this year is to better understand “the extent to which we are automating aspects of A.I. development.”) There are also few details about OpenAI’s forthcoming AI “intern.” A company spokesperson described it to us as a system that could contribute to research workflows by, for instance, conducting literature reviews or interpreting the results of experiments. (The Atlantic has a corporate partnership with OpenAI.)

One concrete example of how AI is being used to automate research comes from Google DeepMind: Last year, the company developed an AI coding agent called AlphaEvolve, which, according to research published by the firm, was able to make Google’s global data-center fleet 0.7 percent more computationally efficient on average and cut the overall training time of Gemini by 1 percent.

[Read: AI agents are taking America by storm]

None of these current approaches to self-improving AI is truly recursive; they are piecemeal. AI tools can write code, find small optimizations, and generally make discrete parts of the AI research process faster. It’s impressive that machines are able to at least incrementally improve their own abilities, but right now humans still play an essential role. AI research has many components: curating training data, proposing new hypotheses, setting up experiments to test them, and deciding how to allocate scarce computing resources. Eventually, the thinking goes, recursively self-improving AI models will make the leap from rote programming to having real research “taste”—as AI insiders call the mix of human creativity and judgment exhibited by top software engineers. Instead of humans coming up with ideas for new experiments, the bots will do this themselves.

AI boosters and doomers alike believe that we’re not far from that future. Sam Altman says that by 2028, OpenAI plans to have developed a fully “automated AI researcher.” By then, “we are pretty confident we will have systems that can make more significant discoveries,” the company said in a recent blog post. Based on the speed of recent advances in AI, Eli Lifland, a researcher at the AI Futures Project, has forecast that AI research and development could be fully automated by 2032. After all, a few years ago, top models could successfully do only things that would take a human developer seconds; now they autonomously complete tasks that would take humans hours. “I don’t expect a reason for it to slow down,” Neev Parikh, a researcher at METR, a nonprofit that studies AI coding capabilities, told us.

There are plenty of reasons to be skeptical that AI research will be fully automated over such a short time horizon. Coding bots are designed to execute directions, but developing an AI with research taste might require some kind of transformative breakthrough. And various constraints on AI development, including the availability of funding, chips, and energy for data centers, threaten to stall progress at any time. For now, “the overall pipeline to realize this self-improvement loop is still yet to be developed,” Pushmeet Kohli, DeepMind’s vice president of science and strategic initiatives, told us. A bot can optimize things, but it doesn’t “have anything to optimize for,” Kohli said. “That’s where the human comes in.”

[Read: Inside the dirty, dystopian world of AI data centers]

Ultimately, even if the most fantastical dreams of recursive self-improvement turn out to be little more than a marketing ploy, marginal improvements in automating research are likely to further accelerate the pace of AI development. “This could change the dynamics of AI competition, alter AI geopolitics, and much more,” Dean Ball, a former Trump adviser on AI, recently wrote. Governments and civil society are already lagging. American institutions are in many ways still adapting to the internet—the IRS still processes tax returns using COBOL, a programming language that was released in 1960. Should AI models progress faster, public policy, including regulations on safety and security, has even less hope of keeping up. Bostrom, the philosopher, expressed a sort of resignation about the AI future when we spoke. He used to call himself a “fretful optimist,” he said, but now he’s a “moderate fatalist.”

In a strange way, none of the predictions about recursive self-improvement needs to be true for them to matter. Last year, a team of academics interviewed 25 leading researchers at DeepMind, OpenAI, Anthropic, Meta, UC Berkeley, Princeton, and Stanford. Twenty of them identified the automation of AI research as among the industry’s “most severe and urgent” risks. Now these dramatic warnings are gaining a growing audience. “Human beings could actually lose control over the planet,” Senator Bernie Sanders recently warned Congress, sounding just like the San Francisco protesters. Yet again, the AI industry has found a way to ratchet up the hype behind its technology.
