
How OpenAI’s Sam Altman Is Thinking About AGI and Superintelligence in 2025

OpenAI CEO Sam Altman recently published a post on his personal blog reflecting on AI progress and his predictions for how the technology will impact humanity’s future. “We are now confident we know how to build AGI [artificial general intelligence] as we have traditionally understood it,” Altman wrote. He added that OpenAI, the company behind ChatGPT, is beginning to turn its attention to superintelligence.


While there is no universally accepted definition for AGI, OpenAI has historically defined it as “a highly autonomous system that outperforms humans at most economically valuable work.” Although AI systems already outperform humans in narrow domains, such as chess, the key to AGI is generality. Such a system would be able to, for example, manage a complex coding project from start to finish, draw on insights from biology to solve engineering problems, or write a Pulitzer-worthy novel. OpenAI says its mission is to “ensure that AGI benefits all of humanity.”

Altman indicated in his post that advances in the technology could lead to more noticeable adoption of AI in the workplace in the coming year, in the form of AI agents—autonomous systems that can perform specific tasks without human intervention, potentially taking actions for days at a time. “In 2025, we may see the first AI agents ‘join the workforce’ and materially change the output of companies,” he wrote.

In a recent interview with Bloomberg, Altman said he thinks “AGI will probably get developed during [Trump’s] term,” while noting his belief that AGI “has become a very sloppy term.” Competitors also think AGI is close: Elon Musk, a co-founder of OpenAI, who runs AI startup xAI, and Dario Amodei, CEO of Anthropic, have both said they think AI systems could outsmart humans by 2026. In the largest survey of AI researchers to date, which included over 2,700 participants, researchers collectively estimated there is a 10% chance that AI systems can outperform humans on most tasks by 2027, assuming science continues progressing without interruption.

Others are more skeptical. Gary Marcus, a prominent AI commentator, disagrees with Altman that AGI is “basically a solved problem,” while Mustafa Suleyman, CEO of Microsoft AI, has said, regarding whether AGI can be achieved on today’s hardware, “the uncertainty around this is so high, that any categorical declarations just feel sort of ungrounded to me and over the top,” citing challenges in robotics as one cause for his skepticism.

Microsoft and OpenAI, which have had a partnership since 2019, also have a financial definition of AGI. Microsoft is OpenAI’s exclusive cloud provider and largest backer, having invested over $13 billion in the company to date. The companies have an agreement that Microsoft will lose access to OpenAI’s models once AGI is achieved. Under this agreement, which has not been publicly disclosed, AGI is reportedly defined as being achieved when an AI system is capable of generating the maximum total profits to which its earliest investors are entitled: a figure that currently sits at $100 billion. Ultimately, however, the declaration of “sufficient AGI” remains at the “reasonable discretion” of OpenAI’s board, according to a report in The Information.

At present, OpenAI is a long way from profitability. The company currently loses billions annually and it has reportedly projected that its annual losses could triple to $14 billion by 2026. It does not expect to turn its first profit until 2029, when it expects its annual revenue could reach $100 billion. Even the company’s latest plan, ChatGPT Pro, which costs $200 per month and gives users access to the company’s most advanced models, is losing money, Altman wrote in a post on X. Although Altman didn’t explicitly say why the company is losing money, running AI models is very cost intensive, requiring investments in data centers and electricity to provide the necessary computing power.  

Pursuit of superintelligence

OpenAI has said that AGI “could help us elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility.” But recent comments from Altman have been somewhat more subdued. “My guess is we will hit AGI sooner than most people in the world think and it will matter much less,” he said in December. “AGI can get built, the world mostly goes on in mostly the same way, things grow faster, but then there is a long continuation from what we call AGI to what we call superintelligence.”

In his most recent post, Altman wrote, “We are beginning to turn our aim beyond [AGI], to superintelligence in the true sense of the word. We love our current products, but we are here for the glorious future.”

He added that “superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity.” This ability to accelerate scientific discovery is a key distinguishing factor between AGI and superintelligence, at least for Altman, who has previously written that “it is possible that we will have superintelligence in a few thousand days.”

The concept of superintelligence was popularized by philosopher Nick Bostrom, who in 2014 wrote a best-selling book, Superintelligence: Paths, Dangers, Strategies, which Altman has called “the best thing [he’s] seen on the topic.” Bostrom defines superintelligence as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest”—like AGI, but more. “The first AGI will be just a point along a continuum of intelligence,” OpenAI said in a 2023 blog post. “A misaligned superintelligent AGI could cause grievous harm to the world; an autocratic regime with a decisive superintelligence lead could do that too.”

These harms are inextricable from the idea of superintelligence, because experts do not currently know how to align these hypothetical systems with human values. Both AGI and superintelligent systems could cause harm, not necessarily due to malicious intent, but simply because humans are unable to adequately specify what they want the system to do. As professor Stuart Russell told TIME in 2024, the concern is that “what seem to be reasonable goals, such as fixing climate change, lead to catastrophic consequences, such as eliminating the human race as a way to fix climate change.” In his 2015 essay, Altman wrote that “development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity.”

Read More: New Tests Reveal AI’s Capacity for Deception 

OpenAI has previously written that it doesn’t know “how to reliably steer and control superhuman AI systems.” The team created to lead work on steering superintelligent systems for the safety of humans was disbanded last year, after both its co-leads left the company. At the time, one of the co-leads, Jan Leike, wrote on X that “over the past years, safety culture and processes have taken a backseat to shiny products.” At present, the company has three safety bodies: an internal safety advisory group, a safety and security committee, which is part of the board, and the deployment safety board, which has members from both OpenAI and Microsoft, and approves the deployment of models above a certain capability level. Altman has said they are working to streamline their safety processes.

Read More: AI Models Are Getting Smarter. New Tests Are Racing to Catch Up

When asked on X whether he thinks the public should be asked if they want superintelligence, Altman replied: “yes i really do; i hope we can start a lot more public debate very soon about how to approach this.” OpenAI has previously emphasized that the company’s mission is to build AGI, not superintelligence, but Altman’s recent post suggests that stance might have shifted.

Discussing the risks from AI in the recent Bloomberg interview, Altman said he still expects “that on cybersecurity and bio stuff, we’ll see serious, or potentially serious, short-term issues that need mitigation,” and that long term risks are harder to imagine precisely. “I can simultaneously think that these risks are real and also believe that the only way to appropriately address them is to ship product and learn,” he said.

Learnings from his brief ouster

Reflecting on recent years, Altman wrote that they “have been the most rewarding, fun, best, interesting, exhausting, stressful, and—particularly for the last two—unpleasant years of my life so far.”

Delving further into his brief ouster in November 2023 as CEO by the OpenAI board, and his subsequent return to the company, Altman called the event “a big failure of governance by well-meaning people, myself included,” noting he wished he had done things differently. In his recent interview with Bloomberg, he expanded on that, saying he regrets initially saying he would only return to the company if the whole board quit. He also said there was “real deception” on behalf of the board, which accused him of not being “consistently candid” in his dealings with them. Helen Toner and Tasha McCauley, members of the board at the time, later wrote that senior leaders in the company had approached them with concerns that Altman had cultivated a “toxic culture of lying” and engaged in behavior that could be called “psychological abuse.”

Current board members Bret Taylor and Larry Summers have rejected the claims made by Toner and McCauley, and pointed to an investigation of the dismissal by law firm WilmerHale on behalf of the company. They wrote in an op-ed that they “found Mr Altman highly forthcoming on all relevant issues and consistently collegial with his management team.”

The review attributed Altman’s removal to “a breakdown in the relationship and loss of trust between the prior Board and Mr. Altman,” rather than concerns regarding product safety or the pace of development. Commenting on the period following his return as CEO, Altman told Bloomberg, “It was like another government investigation, another old board member leaking fake news to the press. And all those people that I feel like really f—ed me and f—ed the company were gone, and now I had to clean up their mess.” He did not specify what he meant by “fake news.”

Writing about what the experience taught him, Altman said he had “learned the importance of a board with diverse viewpoints and broad experience in managing a complex set of challenges. Good governance requires a lot of trust and credibility.”

Since the end of 2023, many of the company’s top researchers—including its co-founder and then-chief scientist, Ilya Sutskever; its chief technology officer, Mira Murati; and Alec Radford, who was lead author on the seminal paper that introduced GPT—have left the company.

Read More: Timeline of Recent Accusations Leveled at OpenAI, Sam Altman

In December, OpenAI announced plans to restructure as a public benefit corporation, which would remove the company from control by the nonprofit that tried to fire Altman. The nonprofit would receive shares in the new company, though the value is still being negotiated.

Acknowledging that some might consider discussion of superintelligence as “crazy,” Altman wrote, “We’re pretty confident that in the next few years, everyone will see what we see, and that the need to act with great care, while still maximizing broad benefit and empowerment, is so important,” adding: “Given the possibilities of our work, OpenAI cannot be a normal company.”
