Nvidia launches physical AI model for robots

The AI hardware and software vendor introduced a new set of models and infrastructure for physical AI at the CES show in Las Vegas this week.

Nvidia also unveiled a group of robots powered by Nvidia's robotics stack, spanning a variety of sectors and developed by companies including Boston Dynamics, LG Electronics and Neura Robotics.

Hailing the moment as a turning point for the industry, Nvidia founder and CEO Jensen Huang said in a keynote address that physical AI is opening up a new wave of real-world applications.

"The ChatGPT moment for robotics has arrived," Huang said. "Breakthroughs in physical AI – models that understand the real world, reason and plan actions – are unlocking entirely new applications."

New Open Models for Robot Learning and Reasoning

At the center of Nvidia’s flurry of product moves is the release of new open physical AI models designed to reduce the cost and complexity of building intelligent robots.

Specifically, the vendor introduced Nvidia Cosmos Transfer 2.5 and Nvidia Cosmos Predict 2.5, which are open and fully customizable world models that simulate real-world physics and spatial dynamics.

The models are designed to accurately simulate real-world scenarios and evaluate robotic performance within those environments, an important feature for safety-sensitive systems such as autonomous vehicles and industrial robots.


Nvidia also unveiled Cosmos Reason 2, an open reasoning vision-language model that enables machines to "see, understand and act" in the real world like humans, making real-time decisions based on an understanding of logic and physics.

Specifically for humanoid robotics, Nvidia released Isaac GR00T N1.6, an open vision-language-action model built on Cosmos Reason that enables full-body control.

All new models are available on Hugging Face.

Simulation and Orchestration Framework

Because of its complexity, scalable simulation and benchmarking can be one of the biggest hurdles in robotics development. To meet this challenge, Nvidia also released two new open source frameworks on GitHub: Nvidia Isaac Lab-Arena and Nvidia OSMO.

Nvidia Isaac Lab-Arena provides a collaborative environment for large-scale robot policy evaluation and benchmarking, integrating with established benchmarks such as LIBERO and RoboCasa to standardize testing before deployment in the real world.

Meanwhile, Nvidia OSMO is a cloud-native orchestration framework that unifies robotic workflows into a central “command center.” Using the framework, developers can run synthetic data generation, training, and software-in-the-loop testing in workstation and cloud environments to accelerate development cycles.


OSMO is already being used by developers including Hexagon Robotics and has been integrated into Microsoft’s Azure Robotics Accelerator.

New Jetson Hardware Targets Humanoids and the Industrial Edge

Nvidia also highlighted its Jetson Thor and IGX Thor platforms for humanoid and industrial edge computing.

At CES, partners including Neura Robotics, RichTech Robotics, eggbot and LG Electronics demonstrated humanoids and service robots running on Jetson Thor, while companies like Archer are using IGX Thor to bring AI to aviation and other safety-critical environments.

With its latest moves, Nvidia is positioning physical AI as the next major growth area for generative AI, expanding models from screens and servers to factories, hospitals, homes, and public spaces.
