
The View From Inside the AI Bubble

In a small room in San Diego last week, a man in a black leather jacket explained to me how to save the world from destruction by AI. Max Tegmark, a notable figure in the AI-safety movement, believes that “artificial general intelligence,” or AGI, could precipitate the end of human life. I was in town for NeurIPS, one of the largest AI-research conferences, and Tegmark had invited me, along with five other journalists, to a briefing on an AI-safety index that he would release the next day. No company scored better than a C+.

The threat of technological superintelligence is the stuff of science fiction, yet it has become a topic of serious discussion in the past few years. Despite the lack of a clear definition—even OpenAI CEO Sam Altman has called AGI a “weakly defined term”—the idea that powerful AI poses an inherent threat to humanity has gained acceptance among respected cultural critics.

Granted, generative AI is a powerful technology that has already had a massive impact on our work and culture. But superintelligence has become one of several questionable narratives promoted by the AI industry, along with the ideas that AI learns like a human, that it has “emergent” capabilities, that “reasoning models” are actually reasoning, and that the technology will eventually improve itself.

I traveled to NeurIPS, held at the waterfront fortress that is the San Diego Convention Center, partly to understand how seriously these narratives are taken within the AI industry. Do AGI aspirations guide research and product development? When I asked Tegmark about this, he told me that the major AI companies were sincerely trying to build AGI, but his reasoning was unconvincing. “I know their founders,” he said. “And they’ve said so publicly.”

In parallel with the growth of fear and excitement about AI in the past decade, NeurIPS attendance has exploded, increasing from approximately 3,850 conference-goers in 2015 to 24,500 this year, according to organizers. The conference center’s three main rooms each have the square footage of multiple blimp hangars. Speakers addressed audiences of thousands. “I do feel we’re on a quest, and a quest should be for the holy grail,” Rich Sutton, the legendary computer scientist, proclaimed in a talk about superintelligence.

The conference’s corporate sponsors had booths to promote their accomplishments and impress attendees with their R&D visions. There were companies you’ve heard of, such as Google, Meta, Apple, Amazon, Microsoft, ByteDance, and Tesla, and ones you probably haven’t, such as Runpod, Poolside, and Ollama. One company, Lambda, was advertising itself as the “Superintelligence Cloud.” A few of the big dogs were conspicuously absent from the exhibitor hall, namely OpenAI, Anthropic, and xAI. The consensus among the researchers I spoke with was that the cachet of these companies is already so great that setting up a booth would be pointless.

The conference is a primary battleground in AI’s talent war. Much of the recruiting effort happens outside the conference center itself, at semisecret, invitation-only events in downtown San Diego. These events captured the ever-growing opulence of the industry. In a lounge hosted by the Laude Institute, an AI-development support group, a grad student told me about starting salaries at various AI companies of “a million, a million five,” of which a large portion was equity. The space was designed in the style of a VIP lounge at a music festival. It was, in fact, located at the top of the Hard Rock Hotel.

The place to be, if you could get in, was the party hosted by Cohere, a Canadian company that builds large language models. (Cohere is being sued for copyright and trademark infringement by a group of news publishers, including The Atlantic.) The party was held on the USS Midway, an aircraft carrier used in Operation Desert Storm, which is now docked in the San Diego harbor. The purpose, according to the event’s sign-up page, was “to celebrate AI’s potential to connect our world.”

With the help of a researcher friend, I secured an invite to a mixer hosted by the Mohamed bin Zayed University of Artificial Intelligence, the world’s first AI-focused university, named for the current U.A.E. president. Earlier this year, MBZUAI established the Institute for Foundation Models, a research group in Silicon Valley. The event, held at a steak house, had an open buffet with oysters, king prawns, ceviche, and other treats. Upstairs, Meta was hosting its own mixer. According to rumor, some of the researchers downstairs were Meta employees hoping to be poached by the Institute for Foundation Models, which supposedly offered more enticing compensation packages.

Of the 5,630 papers presented in the poster sessions at NeurIPS, only two mentioned AGI in their titles. An informal survey of 115 researchers at the conference suggested that more than a quarter didn’t even know what AGI stands for. At the same time, the idea of AGI, and its accompanying prestige, seemed at least partly responsible for the buffet. The amenities I encountered certainly weren’t paid for by chatbot profits. OpenAI, for instance, reportedly expects its massive losses to continue until 2030. How much longer can the industry keep the ceviche coming? And what will happen to the economy, which many believe is propped up by the AI industry, when it stops?

In one of the keynote speeches, the sociologist and writer Zeynep Tufekci warned researchers that the idea of superintelligence was preventing them from understanding the technology they were building. The talk, titled “Are We Having the Wrong Nightmares About AI?,” mentioned several dangers posed by AI chatbots, including widespread addiction and the undermining of methods for establishing truth. After Tufekci gave her talk, the first audience member to ask a question appeared annoyed. “Have you been following recent research?” the man asked. “Because that’s the exact problems we’re trying to fix. So we know of these concerns.” Tufekci responded, “I don’t really see these discussions. I keep seeing people discuss mass unemployment versus human extinction.”

It struck me that both might be correct: that many AI developers are thinking about the technology’s most tangible problems while public conversations about AI—including among the most prominent developers themselves—are dominated by imagined ones. Even the conference’s name contained a contradiction: NeurIPS is short for Neural Information Processing Systems, but artificial neural networks were conceived in the 1940s by a logician-and-neurophysiologist duo who wildly underestimated the complexity of biological neurons and overstated their similarity to a digital computer. Regardless, a central feature of AI’s culture is an obsession with the idea that a computer is a mind. Anthropic and OpenAI have published reports that describe chatbots as, respectively, “unfaithful” and “dishonest.” In the AI discourse, science fiction often defeats science.

On the roof of the Hard Rock Hotel, I attended an interview with Yoshua Bengio, one of the three “godfathers” of AI. Bengio, a co-inventor of an algorithm that makes ChatGPT possible, recently started a nonprofit called LawZero to encourage the development of AI that is “safe by design.” He took the nonprofit’s name from the “Zeroth Law” featured in several Isaac Asimov stories, which holds that a robot must not allow humanity to come to harm. Bengio was concerned that, in a possible dystopian future, AIs might deceive their creators and that “those who will have very powerful AIs could misuse it for political advantage, in terms of influencing public opinion.”

Bengio did not mention how fake videos are already affecting public discourse. Neither did he meaningfully address the burgeoning chatbot mental-health crisis or the pillaging of the arts and humanities. The catastrophic harms, in his view, are “three to 10 or 20 years” away. We still have time “to figure it out, technically.” I looked around to see if anyone else was troubled by the disconnect.

Bengio has written elsewhere about the more immediate dangers of AI. But the technical and speculative focus of his remarks captures the sentiment among the technologists who now dominate the public conversation about our future. Ostensibly, they are trying to save us, but who actually benefits from their predictions? As I spoke with 25-year-olds entertaining seven-figure job offers and watched the industry’s millionaire luminaries debate the dangers of superintelligence, the answer seemed clear.
