The Most Important Thing Jensen Huang Said at GTC 2026 Wasn’t About a Chip

Jensen Huang was selling a worldview: that AI has moved beyond the training era and into something much bigger, more durable, and potentially far more lucrative — an economy built on inference, agents, and always-on compute demand.

That was the real signal from his live press Q&A with reporters and creators on March 17.

Yes, there were the expected hardware flexes. Yes, there were giant numbers. But the deeper story was Huang’s attempt to reframe the AI market around a new operating assumption: the future won’t be defined by who trains the biggest model once, as in previous years. It will be defined by who can deliver the most useful reasoning fastest, most cheaply, and at industrial scale.

That shift matters because it turns AI infrastructure from a boom-time buildout into something closer to a permanent production system. Huang’s most explicit line on that was also one of his most revealing:

“In the future, the computer is actually a manufacturer, a token manufacturing system.”

That’s the thesis.

In Huang’s framing, the old computer retrieved information. The new one produces tokens. And once that becomes the dominant compute pattern, AI demand starts looking less like a temporary capex cycle and more like an industrial base that has to stay on all the time.

That’s also how Nvidia justifies the giant number hanging over the whole event. Huang said Nvidia now projects more than $1 trillion in Blackwell and Rubin chip sales through 2027. In the Q&A, he stressed that the figure was intentionally narrow: “$1 trillion is only Blackwell plus Rubin, only through 2027,” and “it is likely to be larger than what I’m showing.”

In other words, the projection covers only those two architectures and excludes the rest of Nvidia’s business entirely.

That’s a massive claim. But the more interesting part is what has to be true for it to hold.

Inference has to become the center of gravity — not a secondary phase after training, but the core economic engine of AI. I largely believe it will, because as agentic use rises, the number of tokens needed per user increases by multiple orders of magnitude.

Huang said Nvidia saw this turn coming early, arguing that the company had already redirected enormous effort toward the category: “In 2025, we decided to dedicate an enormous amount of resources to inference.”

That explains why Nvidia increasingly wants to be seen as more than a GPU vendor. The company is positioning itself as the builder of the AI factory: chips, memory systems, networking, storage, orchestration, inference software, enterprise tooling, and the infrastructure needed to keep all of it running at scale. 

“Most people forget that Nvidia’s business is much, much more diversified than a chip company,” Huang said. “We’re full-stack, and we can help people build AI factories anywhere.”

Why agents make enterprise systems more important, not less

The agent story is where that strategy becomes even clearer.

One of the most important sections of the Q&A was Huang’s pushback on the idea that AI agents will hollow out the software layer beneath them. His argument was almost the opposite. Agents won’t replace systems of record; they’ll make them more important. 

“Is SQL going to die because agents are here? No,” he said. “That’s where our ground truth of business is going to be.”

That one quote gets at a broader point Nvidia is making: the future agent stack is not magic. It is grounded in, governed by, and connected to structured systems that businesses can audit and control. That’s the gap that has kept many enterprises and industries from adopting AI so far, and one Huang plans to close in 2026.

That’s why Huang kept returning to controllability. He mocked the fantasy that AI will simply hallucinate perfect industrial outputs into existence. 

“What are the agentic engineers going to do? Manifest transistors? No.” The point wasn’t just rhetorical. It was strategic. Enterprise AI only becomes valuable when its work can be verified, reproduced, and written back into systems humans trust.

That’s also why Nvidia is pushing into enterprise-safe agent infrastructure. The company is trying to package emerging open-agent frameworks with greater governance, privacy, and operational controls. Huang described that effort in unusually big terms: “Open cloud is the inflection point of AI post-reasoning.”

That may end up being one of the more important lines from the whole session.

The market spent much of the last cycle debating whether agents were mostly demos. Nvidia’s position at GTC 2026 was much more aggressive: agents are becoming real software infrastructure, and the next challenge is making them deployable inside actual businesses.

From model spectacle to systems economics

That shift has consequences all the way down the stack.

If inference becomes continuous and agentic workloads become normal, performance is no longer just about model quality. It becomes a systems problem: memory hierarchy, networking, storage, orchestration, latency, and cost efficiency across very different classes of workloads. Some use cases need giant models. Some need large context windows. Some need very low latency. Some need all three.

Huang’s answer is that the winning platform will be the one that can serve the broadest range of inference workloads with the best mix of hardware and software. That’s part of what made Nvidia’s Groq-related messaging notable. Huang’s argument was essentially that inference is fragmenting into segments, and Nvidia intends to cover those segments better than anyone else.

That’s a more durable claim than simply insisting one chip wins everything.

The Q&A also surfaced another useful theme: AI may increase the tempo of work more than it reduces the amount of it.

“A lot of people say, AI is coming, we’re going to run out of our work, our jobs. It’s exactly the opposite,” he said. “Agents come back in 30 minutes. It used to be you write the product spec. The teams go off, and they work on it for a month… Now that month has turned into 30 minutes.”

The implication is that AI quietly compresses timelines, increases iteration, and throws more partially completed output back at humans faster than organizations are used to handling. That means the bottleneck shifts: it’s less about producing the first draft and more about absorbing that pace.

But can humans absorb that pace? We’ll see. At some point, humans become the bottleneck in the AI stack.

Then there’s robotics, where Huang sounded as confident as ever.

He called autonomous driving “a solved problem” and argued that the rest is largely a matter of engineering and deployment. He made a similar case for robotics more broadly, saying:

“In the next couple of years, you’re gonna see physical AI accelerate incredibly because of reasoning systems.” Later he sharpened that further: “In three years’ time, you’re going to see robots that are extremely, extremely good.”

That fits with the broader direction of GTC, where robotics, simulation, and industrial AI are no longer side-show demos. Nvidia reinforced that view with announcements like its Thinking Machines Lab deployment around Vera Rubin, which helps frame physical AI as part of the same long-term compute buildout, not a separate side bet.

Taken together, the Q&A pointed to a much bigger Nvidia ambition than the usual chip-cycle story.

Huang is arguing that three markets are now collapsing into one: inference as a permanent compute economy, agents as the next enterprise software layer, and physical AI as the next major workload class after reasoning. Or, in cleaner terms: AI is moving from model spectacle to systems economics.

If you only take away one thing, that’s the one it should be.

Nvidia is trying to redefine what AI demand even is. If the future is built on continuous inference, enterprise agents, and industrial-scale deployment, then the biggest winners may no longer be the companies with the cleverest demo. They may be the companies that control the full system on which those demos run.

Nvidia would very much like that company to be Nvidia.

And at GTC 2026, Jensen Huang made that case more clearly than ever.

For a deeper look at how Nvidia is pushing into physical AI, robotics, and simulation tools, check out this breakdown from eWeek.

The post The Most Important Thing Jensen Huang Said at GTC 2026 Wasn’t About a Chip appeared first on eWEEK.
