With its market dominance being challenged by multiple competitors on a scale never seen before, Nvidia on Monday introduced six new AI chips and new open models, signaling that the AI giant is determined to stay ahead in the field.
At the CES consumer electronics show in Las Vegas, the AI hardware and software provider released the Nvidia Rubin platform, a system of six new chips that together form an AI supercomputer. New open generative AI models include additions to the Nemotron family of agent-building models and new World Foundation Models in the Cosmos suite, which are designed to generate synthetic data for humanoid robots and other physical AI applications.
Nvidia also showcased Nvidia Alpamayo, a model that powers autonomous vehicles, which the vendor previously released in early December.
A full-stack approach
Chirag Dekate, an analyst at Gartner, said the multiple releases show that Nvidia is not only taking a full-stack approach, from chips to software, but also trying to enable third parties to develop their own full-stack products.
“What they’re trying to highlight here is that AI is no longer just a GPU game,” Dekate said, referring to the ubiquitous graphics processing unit chips that provide training and inference for generative AI models. “It’s no longer about a GPU chip; it’s actually an AI supercomputer.”
One way Nvidia is showing that it’s not solely a GPU vendor is by packing a variety of AI chips into the Rubin platform. Components include the Nvidia Vera CPU, the Nvidia Rubin GPU, the Nvidia NVLink 6 switch, the Nvidia ConnectX-9 SuperNIC, the Nvidia BlueField-4 DPU and the Nvidia Spectrum 6 Ethernet switch.
The Rubin platform is the successor to the widely used Nvidia Blackwell platform. It uses Nvidia’s NVLink interconnect technology and transformer technologies to accelerate agentic AI, advanced reasoning and mixture-of-experts models at greater scale than Blackwell.
With the platform, Nvidia is trying to inspire its customers and audiences to look beyond the GPU and see the entire underlying infrastructure as an AI factory rather than a single component, Dekate said.
“What Nvidia is trying to highlight is whether you’re trying to solve a problem in the context of model training, or if you’re trying to deploy models at scale, either directly or as part of your agent technology strategy, the underlying infrastructure is likely going to be an AI factory scale problem,” Dekate said. He said this is a problem Nvidia wants to address for data center operators, hyperscalers and enterprise clients.
Dekate added, “AI is no longer just a small, simple device issue; it is truly multidimensional and multi-form factor.”
This focus on AI as more than just a GPU is part of what differentiates Nvidia from its competitors, which include AMD, Intel and Qualcomm, he said.
“Many competitors struggle to match that,” he said. “They’re starting to get there, but they’re not there yet.”
New open models
The new models arrive less than a month after the release of the Nemotron 3 family of open models, which are designed for building and deploying multi-agent systems. They include Nemotron Speech, a new automatic speech recognition model that provides real-time, low-latency speech recognition for live captioning and speech AI applications, Nvidia said. In addition, Nemotron RAG technology has new embedding and reranking vision language models. Nvidia also released datasets, training resources and blueprints for the models.
In addition to Nemotron, Nvidia expanded the World Foundation Model line with Cosmos Reason 2, Cosmos Transfer 2.5 and Cosmos Predict 2.5. Cosmos Reason 2 is a vision language model that enables robots and AI agents to understand and interact with the physical world. Transfer 2.5 and Predict 2.5 generate synthetic videos across different environments and conditions.
The Alpamayo 1 model for autonomous vehicles is also a vision language model.
While Nvidia isn’t the first vendor to release open models, the way it specifies what each model is for sets it apart, said Mark Beccue, an analyst at Omdia, a division of Informa TechTarget.
“It’s a little different,” Beccue said, adding that specialized open models are not a common approach. However, specializing open models makes sense because it enables customers to start using them faster, Beccue said.
The specialization of the open models confirms a trend that market research firm Futurum identified for 2026: faster adoption of applied AI models as opposed to generalized models, said Bradley Shimmin, an analyst at Futurum.
“You can look at what Nvidia is offering,” Shimmin said. “They’re dealing with specialized problems.”
“They’re applying those models to very specific domains like health care, autonomous vehicles, and very specific use cases in enterprises,” Shimmin said. “What they’re doing is not only trying to be the best frontier model builder, but also trying to be the best applied intelligence builder.”
However, even with the models open sourced and Nvidia releasing weights and recipes for them, enterprise adoption is still a challenge, Beccue said.
“Companies are still using proprietary models more than open source,” he said.
Another challenge is that Nvidia’s pace of innovation in models and AI infrastructure could make it harder for enterprises in the marketplace to avoid depending on the vendor, Dekate said.
