
OpenAI is dissatisfied with some Nvidia chips and looking for alternatives, sources say

OpenAI is dissatisfied with some of Nvidia’s latest artificial intelligence chips and has sought alternatives since last year, eight sources familiar with the matter said, potentially complicating the relationship between the two highest-profile players in the AI boom.

The ChatGPT maker’s shift in strategy, the details of which are first reported here, stems from an increasing emphasis on chips used to perform specific elements of AI inference, the process by which an AI model such as the one that powers the ChatGPT app responds to customer queries and requests. Nvidia remains dominant in chips for training large AI models, while inference has become a new front in the competition.

This decision by OpenAI and others to seek out alternatives in the inference chip market marks a significant test of Nvidia’s AI dominance and comes as the two companies are in investment talks.

In September, Nvidia said it intended to pour as much as $100 billion into OpenAI as part of a deal that gave the chipmaker a stake in the startup and gave OpenAI the cash it needed to buy the advanced chips.

The deal had been expected to close within weeks, Reuters reported. Instead, negotiations have dragged on for months. During that time, OpenAI has struck deals with AMD and others for GPUs built to rival Nvidia’s. But its shifting product road map also has changed the kind of computational resources it requires and bogged down talks with Nvidia, a person familiar with the matter said.

On Saturday, Nvidia CEO Jensen Huang brushed off a report of tension with OpenAI, saying the idea was “nonsense” and that Nvidia planned a huge investment in OpenAI.

“Customers continue to choose NVIDIA for inference because we deliver the best performance and total cost of ownership at scale,” Nvidia said in a statement.

A spokesperson for OpenAI in a separate statement said the company relies on Nvidia to power the vast majority of its inference fleet and that Nvidia delivers the best performance per dollar for inference.

After the Reuters story was published, OpenAI Chief Executive Sam Altman wrote in a post on X that Nvidia makes “the best AI chips in the world” and that OpenAI hoped to remain a “gigantic customer for a very long time”.

Seven sources said that OpenAI is not satisfied with the speed at which Nvidia’s hardware can return answers to ChatGPT users for specific types of problems, such as software development and AI systems communicating with other software. It wants new hardware that would eventually cover about 10% of OpenAI’s inference computing needs, one of the sources told Reuters.

The ChatGPT maker has discussed working with startups including Cerebras and Groq to provide chips for faster inference, two sources said. But Nvidia struck a $20-billion licensing deal with Groq that shut down OpenAI’s talks, one of the sources told Reuters.

Nvidia’s move to snap up Groq’s technology looked like an effort to shore up a portfolio of technology to better compete in a rapidly changing AI industry, chip industry executives said. Nvidia, in a statement, said that Groq’s intellectual property was highly complementary to Nvidia’s product roadmap.

Nvidia’s graphics processing chips are well suited to the massive data crunching necessary to train large AI models like ChatGPT, which have underpinned the explosive growth of AI globally to date. But AI advancements increasingly focus on using trained models for inference and reasoning, which could become a new, bigger stage of AI and has inspired OpenAI’s efforts.

The ChatGPT maker’s search for GPU alternatives since last year has focused on companies building chips with large amounts of memory, known as SRAM, embedded in the same piece of silicon as the rest of the chip. Packing as much of the costly SRAM as possible onto each chip can offer speed advantages for chatbots and other AI systems as they crunch requests from millions of users.

Inference places heavier demands on memory than training because the chip spends relatively more time fetching data from memory than performing mathematical operations. Nvidia’s and AMD’s GPU technology relies on external memory, which adds processing time and slows how quickly users can interact with a chatbot.

Inside OpenAI, the issue became particularly visible in Codex, its product for creating computer code, which the company has been aggressively marketing, one of the sources added. OpenAI staff attributed some of Codex’s weaknesses to Nvidia’s GPU-based hardware, one source said.

In a January 30 call with reporters, Altman said that customers using OpenAI’s coding models will “put a big premium on speed for coding work.”

One way OpenAI will meet that demand is through its recent deal with Cerebras, Altman said, adding that speed is less of an imperative for casual ChatGPT users.

Competing products such as Anthropic’s Claude and Google’s Gemini benefit from deployments that rely more heavily on the chips Google made in-house, called tensor processing units, or TPUs, which are designed for the sort of calculations required for inference and can offer performance advantages over general-purpose AI chips like the Nvidia-designed GPUs.

As OpenAI made clear its reservations about Nvidia technology, Nvidia approached companies working on SRAM-heavy chips, including Cerebras and Groq, about a potential acquisition, the people said. Cerebras declined and struck a commercial deal with OpenAI announced last month. Cerebras declined to comment.

Groq held talks with OpenAI for a deal to provide computing power and received investor interest to fund the company at a valuation of roughly $14 billion, according to people familiar with the discussions. Groq declined to comment.

But by December, Nvidia moved to license Groq’s tech in a non-exclusive all-cash deal, the sources said. Although the deal would allow other companies to license Groq’s technology, the company is now focusing on selling cloud-based software, as Nvidia hired away Groq’s chip designers.
