Physical AI
Develop world foundation models to advance physical AI.
Overview
NVIDIA Cosmos™ is a platform of state-of-the-art generative world foundation models (WFMs), advanced tokenizers, guardrails, and an accelerated data processing and curation pipeline. It is built to power world model training and accelerate physical AI development for autonomous vehicles (AVs) and robots.
Benefits
Cosmos provides developers with easy access to high-performance world foundation models, data pipelines, and tools to generate synthetic data and post-train for robotics and autonomous driving applications.
World foundation models are pre-trained on 20 million hours of robotics and driving data to generate world states grounded in physics.
Cosmos WFMs, guardrails, and tokenizers are licensed under the NVIDIA Open Model License, allowing access to all physical AI developers.
Models
A family of pretrained multimodal models that developers can use out-of-the-box for world generation and reasoning, or post-train to develop specialized physical AI models.
Generalist model for fast, high-quality world generation and frame prediction from multimodal input. Trained on 9,000 trillion tokens of robotics and driving data and purpose-built for post-training.
Available as Cosmos NIM for accelerated inference anywhere.
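NIM microservices are typically called over HTTP with a JSON body. As a minimal sketch, the helper below assembles such a request body; the endpoint path, model identifier, and payload fields are assumptions for illustration, not the documented Cosmos NIM schema.

```python
import json

# Assumed local NIM deployment; the real endpoint and schema may differ.
NIM_ENDPOINT = "http://localhost:8000/v1/infer"

def build_world_generation_request(prompt: str, num_frames: int = 121) -> dict:
    """Assemble a JSON-serializable request body for a world generation call.

    The field names and the model identifier are hypothetical placeholders.
    """
    if num_frames <= 0:
        raise ValueError("num_frames must be positive")
    return {
        "model": "nvidia/cosmos-predict",  # hypothetical model identifier
        "prompt": prompt,
        "num_frames": num_frames,
    }

payload = build_world_generation_request(
    "a robot arm stacking crates in a warehouse"
)
body = json.dumps(payload)
# The request itself would be sent with any HTTP client, e.g.:
# requests.post(NIM_ENDPOINT, data=body,
#               headers={"Content-Type": "application/json"})
```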
Amplify input video across a variety of environments and lighting conditions for physics-aware world generation conditioned on ground-truth and structured inputs. Speed up controllable synthetic data generation by using ground-truth simulation from NVIDIA Omniverse™.
Fully customizable, multimodal reasoning model for planning response based on spatial and temporal understanding.
Trained using visual-language model post-training and reinforcement learning for chain-of-thought reasoning.
Develop responsible models using Cosmos WFMs with a pre-guard that filters unsafe inputs and a post-guard that keeps outputs consistent and safe.
Tools
Cosmos provides developers with open, highly performant data curation pipelines, tokenizers, a training framework, and post-training scripts to quickly and easily build specialized world models, such as policy models and vision-language-action (VLA) models, for embodied AI.
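Cosmos tokenizer variants are named by their compression factors along time, height, and width (for example, a "4x8x8" variant compresses 4x temporally and 8x along each spatial axis). As a rough sketch of that arithmetic, assuming this TxHxW naming convention and ignoring details such as the leading frame handled by causal variants:

```python
# Illustrative helpers only; these are not part of the Cosmos tokenizer API.

def latent_shape(frames: int, height: int, width: int,
                 ct: int, ch: int, cw: int) -> tuple:
    """Shape of the latent grid after compressing each axis by its factor."""
    return (frames // ct, height // ch, width // cw)

def compression_ratio(ct: int, ch: int, cw: int) -> int:
    """Overall reduction in the number of spatiotemporal positions."""
    return ct * ch * cw

# Under these assumptions, a 4x8x8 tokenizer maps a 32-frame 512x512 clip
# to an 8x64x64 latent grid: a 256x reduction in positions.
print(latent_shape(32, 512, 512, 4, 8, 8))  # (8, 64, 64)
print(compression_ratio(4, 8, 8))           # 256
```

Higher compression factors shrink the token sequence the world model must process, trading reconstruction fidelity for training and inference speed.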
Hardware
Cosmos WFMs are fully optimized for top-tier NVIDIA GPUs, including those built on the latest Blackwell architecture.
For enterprises running massive custom multimodal models, such as Cosmos world foundation models, NVIDIA’s GB200 delivers industry-leading speed and scalability for billion-plus-parameter workloads. Access it on NVIDIA DGX Cloud to develop next-generation AI superclusters and large-scale physical AI applications.
Physical AI developers can leverage server and workstation platforms with NVIDIA RTX PRO 6000 Blackwell GPUs and DGX Cloud to accelerate synthetic data generation using Omniverse and Cosmos. This combination enables rapid generation of physics-based synthetic data for advanced robotics, autonomous driving, and simulation workflows.
Use Cases
Accelerate downstream foundation model development to advance vision AI and embodied AI with synthetic data generation and post-training.
Omniverse creates realistic 3D scenes that can be used as input for Cosmos Transfer, which amplifies them across diverse, photorealistic environments and lighting. This process generates scalable, augmented data, removing the data bottleneck for more effective foundation model training.
Cosmos Reason can evaluate synthetic data by removing outputs that don’t meet post-training or evaluation requirements. It also generates captions to add context and help organize data, speeding up foundation model development for vision AI and embodied AI.
A policy model guides a physical AI system’s behavior, ensuring that the system operates safely and in accordance with its goals. Cosmos Predict or Cosmos Reason can be post-trained into policy models to generate actions, saving the cost, time, and data needs of manual policy training.
Cosmos WFMs accelerate policy evaluation by simulating real-world actions through video outputs, using Omniverse ground-truth physics for accuracy. Developers can build a vision-language-action (VLA) model using Cosmos Reason and add it to critique and drive actions. This simulation loop reduces the cost, time, and risk of real-world testing while improving policy precision.
Cosmos Predict can be post-trained to generate multiple views or diverse camera perspectives, enabling high-fidelity, temporally consistent, physics-based training data that contains up to 360° views from a single text, image, or video input.
This boosts model robustness, reduces edge-case failures, and accelerates development cycles for autonomous machines, lowering costs and delivering faster, safer deployments.
Our Commitment
Cosmos models, guardrails, and tokenizers are available on Hugging Face and GitHub, with resources to tackle data scarcity in training physical AI models. We're committed to driving Cosmos forward: transparent, open, and built for all.
Ecosystem
Model developers from the robotics, autonomous vehicles, and vision AI industries are using Cosmos to accelerate physical AI development.
Start with the documentation. Cosmos world foundation models are openly available on Hugging Face, with inference and post-training scripts on GitHub. Developers can also use the Cosmos tokenizer from the NVIDIA/cosmos-tokenizer repository on GitHub and Hugging Face.
Cosmos world foundation models are available under the NVIDIA Open Model License for all developers.
PyTorch scripts are openly available for all Cosmos models for post-training. Please read the documentation for a step-by-step guide on post-training.
Yes, you can leverage Cosmos to build from scratch with your preferred foundation model or model architecture. You can start by using NeMo Curator for video data pre-processing. Then compress and decode your data with Cosmos tokenizer. Once you have processed the data, you can train or fine-tune your model using NVIDIA NeMo.
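The build-from-scratch flow above (curate video data, tokenize it, then train) can be sketched as a simple staged pipeline. Every function below is a placeholder standing in for the corresponding NeMo Curator, Cosmos tokenizer, and NeMo training step, not a real API call:

```python
# Conceptual sketch of the curate -> tokenize -> train workflow.
# All stage bodies are stubs, not NeMo Curator / Cosmos tokenizer / NeMo APIs.
from dataclasses import dataclass, field

@dataclass
class PipelineRun:
    completed_stages: list = field(default_factory=list)

    def curate(self, raw_clips):
        # Stand-in for NeMo Curator: filter out unusable raw video clips.
        self.completed_stages.append("curate")
        return [c for c in raw_clips if c.get("usable", True)]

    def tokenize(self, clips):
        # Stand-in for the Cosmos tokenizer: compress clips into latents.
        self.completed_stages.append("tokenize")
        return [{"tokens": f"latent:{c['id']}"} for c in clips]

    def train(self, token_batches):
        # Stand-in for NeMo training or fine-tuning on tokenized data.
        self.completed_stages.append("train")
        return {"checkpoint": "model.ckpt", "num_batches": len(token_batches)}

run = PipelineRun()
clips = run.curate([{"id": 0}, {"id": 1, "usable": False}, {"id": 2}])
tokens = run.tokenize(clips)
result = run.train(tokens)
print(run.completed_stages)  # ['curate', 'tokenize', 'train']
```

The point of the staging is that each step's output is the next step's input, so curation quality and tokenizer choice directly shape what the model ultimately trains on.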
Using NVIDIA NIM™ microservices, you can easily integrate your physical AI models into your applications across cloud, data centers, and workstations.
You can also use NVIDIA DGX Cloud to train AI models and deploy them anywhere at scale.
Omniverse creates realistic 3D simulations of real-world tasks by using different generative APIs, SDKs, and NVIDIA RTX rendering technology.
Developers can input Omniverse simulations as instruction videos to Cosmos Transfer models to generate controllable photoreal synthetic data.
Together, Omniverse provides the simulation environment before and after training, while Cosmos provides the foundation models to generate video data and train physical AI models.
Learn more about NVIDIA Omniverse.