Nvidia CEO Unveils Advanced AI Chips, Foundation Models and Mini Supercomputer
Nvidia CEO Jensen Huang on Monday (Jan. 6) unveiled next-generation chips, new large language models, a mini artificial intelligence (AI) supercomputer, and a partnership with Toyota as the world’s second-most valuable company continues to aggressively expand its business.
“It’s been an extraordinary journey, extraordinary year,” Huang said to a packed crowd during a keynote address Monday at the CES trade show, where he wore a shiny version of his trademark leather jacket to fit in with the Las Vegas venue. This year’s CES event continues this week.
Shares of Nvidia closed at a record high of $149.43 hours ahead of his speech. Chip stocks got a lift after Foxconn, an Apple supplier that assembles AI servers for tech company clients, reported record fourth-quarter revenue, signaling continued strength in AI demand.
Huang confirmed that AI demand remains robust, saying: “Blackwell is in full production.” Blackwell is Nvidia’s latest chip architecture — the underlying design that defines how its processors are organized and built. A few months ago, Blackwell chip shipments were delayed due to technical issues.
Those issues were not on display as Huang unveiled the chipmaker’s most advanced gaming chips: the GeForce RTX 50 Series for gamers, creators and developers. It uses the Blackwell architecture and fuses AI-driven neural rendering and ray tracing, both techniques to enhance computer graphics.
The RTX 50 Series can run generative AI models up to twice as fast as the prior generation of chips while using less memory.
Nvidia’s Blackwell chips can process trillion-parameter large language models at up to 25 times lower cost and energy consumption than their predecessor, the company has said. The chip could drive a new demand cycle as it improves upon the earlier Hopper and Ampere architectures to power a new generation of GPUs for intensive workloads in AI, machine learning and high-performance computing. Blackwell competes against AMD’s MI300 series, Intel’s AI accelerators, Google’s TPUs and others.
Mini AI supercomputer
Another new product from Nvidia announced Monday is Project DIGITS, a mini AI supercomputer that contains the new Nvidia GB10 Grace Blackwell Superchip. At a petaflop of AI computing speed — performing a quadrillion calculations per second — the chip can be used for prototyping, fine-tuning and running large AI models. Users can develop and run inference — applying new data to a trained AI model — on their desktop before deployment. The superchip consists of an Nvidia Blackwell GPU that connects to an Nvidia Grace CPU via NVLink-C2C (chip-to-chip), an interconnection technology that enables fast communication between chips in multi-chip systems.
AI Everything
Huang said that with the advent of agentic AI — where AI agents or ‘bots’ work with other bots in the background to perform automated tasks — more computing power will be needed as AI workloads increase.
“In the future, the AI is going to be talking to itself. It’s going to be thinking. It’s going to be internally reflecting, processing,” all of which will create more AI workloads, he said.
New family of large language models
To support AI agents, Nvidia unveiled a new family of language models: the Nvidia Llama Nemotron language foundation models. The models are built on Meta’s open-source Llama language models but fine-tuned for enterprise use in an agentic AI future. Nemotron can help developers create and deploy AI agents across various applications, from customer service to fraud detection.
“AI agents are the new, digital workforce working for and with us,” Huang said. “AI agents are a system of models that reason about a mission, break it down into tasks and retrieve data or use tools to generate a quality response.”
Nemotron comes in three sizes and capability tiers: Nano (cheapest), Super and Ultra (highest accuracy and performance). Nvidia did not disclose the models’ parameter counts — the number of trainable weights that determine a model’s output.
Cosmos: World Foundation Models
With its focus on robotics, Nvidia also unveiled Cosmos, a platform that comprises generative world foundation models — AI systems that can create a virtual environment to simulate a real or virtual world. Developers can use Cosmos to generate troves of synthetic data to use for training their ‘physical AI’ systems, such as robots and autonomous vehicles. Developers can use a text, image or video prompt to create a virtual world. Cosmos is available under an open license, which lets people use, modify or share it.
Huang said Cosmos was trained on 20 million hours of video of people performing tasks to teach the AI about the physical world. As such, it can create captions for videos, which can be used to train multimodal AI models.
Toyota partnership
Huang also announced a new partnership with Toyota. The world’s largest automaker will build its vehicles on Nvidia’s Drive AGX Orin, a hardware and software platform that enables advanced driver assistance systems and autonomous driving.
The post Nvidia CEO Unveils Advanced AI Chips, Foundation Models and Mini Supercomputer appeared first on PYMNTS.com.