• Into the Omniverse: World Foundation Models Advance Autonomous Vehicle Simulation and Safety

    Editor’s note: This blog is a part of Into the Omniverse, a series focused on how developers, 3D practitioners and enterprises can transform their workflows using the latest advances in OpenUSD and NVIDIA Omniverse.
    Simulated driving environments enable engineers to safely and efficiently train, test and validate autonomous vehicles (AVs) across countless real-world and edge-case scenarios without the risks and costs of physical testing.
    These simulated environments can be created through neural reconstruction of real-world data from AV fleets or generated with world foundation models (WFMs) — neural networks that understand physics and real-world properties. WFMs can be used to generate synthetic datasets for enhanced AV simulation.
    To help physical AI developers build such simulated environments, NVIDIA unveiled major advances in WFMs at the GTC Paris and CVPR conferences earlier this month. These new capabilities enhance NVIDIA Cosmos — a platform of generative WFMs, advanced tokenizers, guardrails and accelerated data processing tools.
    Key innovations like Cosmos Predict-2, the Cosmos Transfer-1 NVIDIA preview NIM microservice and Cosmos Reason are improving how AV developers generate synthetic data, build realistic simulated environments and validate safety systems at unprecedented scale.
    Universal Scene Description (OpenUSD), a unified data framework and standard for physical AI applications, enables seamless integration and interoperability of simulation assets across the development pipeline. OpenUSD standardization plays a critical role in ensuring 3D pipelines are built to scale.
    NVIDIA Omniverse, a platform of application programming interfaces, software development kits and services for building OpenUSD-based physical AI applications, enables simulations from WFMs and neural reconstruction at world scale.
    Leading AV organizations — including Foretellix, Mcity, Oxa, Parallel Domain, Plus AI and Uber — are among the first to adopt Cosmos models.

    Foundations for Scalable, Realistic Simulation
    Cosmos Predict-2, NVIDIA’s latest WFM, generates high-quality synthetic data by predicting future world states from multimodal inputs like text, images and video. This capability is critical for creating temporally consistent, realistic scenarios that accelerate training and validation of AVs and robots.

    In addition, Cosmos Transfer, a control model that adds variations in weather, lighting and terrain to existing scenarios, will soon be available to 150,000 developers on CARLA, a leading open-source AV simulator. This greatly expands the broad AV developer community’s access to advanced AI-powered simulation tools.
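    The Cosmos Transfer integration itself isn't shown here, but CARLA's existing Python API already exposes the scenario knobs, weather and lighting, that this kind of variation targets. Below is a minimal sketch, assuming a CARLA server on localhost:2000 and placeholder weather presets, of sweeping a scenario across conditions; it illustrates the workflow Cosmos Transfer is described as augmenting, not the integration itself.

```python
# Minimal sketch: varying weather and lighting on a running CARLA scenario.
# Assumes a CARLA server listening on localhost:2000. The Cosmos Transfer
# integration is not shown here; this only illustrates the kind of scenario
# parameters (cloud cover, rain, sun angle) that such variation targets.
import carla

client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()

variants = [
    carla.WeatherParameters(cloudiness=10.0, precipitation=0.0,  sun_altitude_angle=70.0),  # clear noon
    carla.WeatherParameters(cloudiness=90.0, precipitation=80.0, sun_altitude_angle=30.0),  # heavy rain
    carla.WeatherParameters(cloudiness=40.0, precipitation=0.0,  sun_altitude_angle=-5.0),  # dusk
]

for weather in variants:
    world.set_weather(weather)
    # ... replay or run the scenario here and record sensor output for each variant
```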
    Developers can start integrating synthetic data into their own pipelines using the NVIDIA Physical AI Dataset. The latest release includes 40,000 clips generated using Cosmos.
    Building on these foundations, the Omniverse Blueprint for AV simulation provides a standardized, API-driven workflow for constructing rich digital twins, replaying real-world sensor data and generating new ground-truth data for closed-loop testing.
    The blueprint taps into OpenUSD’s layer-stacking and composition arcs, which enable developers to collaborate asynchronously and modify scenes nondestructively. This helps create modular, reusable scenario variants to efficiently generate different weather conditions, traffic patterns and edge cases.
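    To make the layer-stacking and variant mechanics concrete, here is a minimal sketch using the open-source pxr Python API. The file names, prim path and attribute are hypothetical illustrations, not the blueprint's own assets or code; it only shows how sublayers keep a shared base scene untouched while a variant set swaps weather conditions nondestructively.

```python
# Minimal sketch of the OpenUSD mechanics described above, using the open-source
# pxr Python API. File and prim names are hypothetical; this is not the
# blueprint's own code, only an illustration of layer stacking and variant sets.
from pxr import Usd, Sdf

# A scenario stage that composes a shared base layer under a local edit layer:
# opinions authored here never modify base_scene.usd itself.
stage = Usd.Stage.CreateNew("scenario_variant.usda")
stage.GetRootLayer().subLayerPaths.append("base_scene.usd")  # hypothetical shared asset

# A variant set lets one prim carry interchangeable weather conditions.
env = stage.DefinePrim("/World/Environment", "Xform")
weather = env.GetVariantSets().AddVariantSet("weather")
for condition in ("clear", "rain", "fog"):
    weather.AddVariant(condition)
    weather.SetVariantSelection(condition)
    with weather.GetVariantEditContext():
        # Opinions authored here live only inside this variant.
        env.CreateAttribute("vis:condition", Sdf.ValueTypeNames.Token).Set(condition)

weather.SetVariantSelection("rain")  # pick a scenario variant nondestructively
stage.GetRootLayer().Save()
```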
    Driving the Future of AV Safety
    To bolster the operational safety of AV systems, NVIDIA earlier this year introduced NVIDIA Halos — a comprehensive safety platform that integrates the company’s full automotive hardware and software stack with AI research focused on AV safety.
    The new Cosmos models — Cosmos Predict-2, Cosmos Transfer-1 NIM and Cosmos Reason — deliver further safety enhancements to the Halos platform, enabling developers to create diverse, controllable and realistic scenarios for training and validating AV systems.
    These models, trained on massive multimodal datasets including driving data, amplify the breadth and depth of simulation, allowing for robust scenario coverage — including rare and safety-critical events — while supporting post-training customization for specialized AV tasks.

    At CVPR, NVIDIA was recognized as an Autonomous Grand Challenge winner, highlighting its leadership in advancing end-to-end AV workflows. The challenge used OpenUSD’s robust metadata and interoperability to simulate sensor inputs and vehicle trajectories in semi-reactive environments, achieving state-of-the-art results in safety and compliance.
    Learn more about how developers are leveraging tools like CARLA, Cosmos and Omniverse to advance AV simulation in this livestream replay:

    Hear NVIDIA Director of Autonomous Vehicle Research Marco Pavone on the NVIDIA AI Podcast share how digital twins and high-fidelity simulation are improving vehicle testing, accelerating development and reducing real-world risks.
    Get Plugged Into the World of OpenUSD
    Learn more about what’s next for AV simulation with OpenUSD by watching the replay of NVIDIA founder and CEO Jensen Huang’s GTC Paris keynote.
    Looking for live opportunities to learn more about OpenUSD? Don’t miss sessions and labs happening at SIGGRAPH 2025, August 10–14.
    Discover why developers and 3D practitioners are using OpenUSD and learn how to optimize 3D workflows with the self-paced “Learn OpenUSD” curriculum, available for free through the NVIDIA Deep Learning Institute.
    Explore the Alliance for OpenUSD forum and the AOUSD website.
    Stay up to date by subscribing to NVIDIA Omniverse news, joining the community and following NVIDIA Omniverse on Instagram, LinkedIn, Medium and X.
  • Animation Layers – Non-destructive animation workflow for Blender

    Animation Layers – Non-destructive animation workflow for Blender
    More Details Here: https://superhivemarket.com/products/...

    Finally, a way to animate like in pro 3D software! Animation Layers lets you stack, tweak, and blend animation data without touching your base keyframes.

    Use cases: character animation, motion tweaks, walk cycles, facial rigs, camera work
    Add fix layers, motion offsets, or new actions—non-destructively
    Animate faster with layer blending, masking, and real-time previews
    Works with Blender’s native tools & rigs
    Royalty-free license for personal & commercial projects

    Level up your animation workflow: https://superhivemarket.com/products/...
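    For readers who want to see the layering idea in code, the sketch below uses Blender's built-in NLA system through bpy rather than the add-on's own API, which isn't documented here. The action names are hypothetical; the point is simply how a base action and an additive tweak action can be stacked on separate tracks without ever editing the base keyframes.

```python
# Minimal sketch of layered, non-destructive animation using Blender's built-in
# NLA system via bpy (not the Animation Layers add-on's own API, just the
# underlying idea of stacking actions without touching base keyframes).
# Assumes the active object and two existing actions: "BaseWalk" and "HeadTurnOffset".
import bpy

obj = bpy.context.active_object
obj.animation_data_create()

base_action = bpy.data.actions["BaseWalk"]          # hypothetical base walk cycle
offset_action = bpy.data.actions["HeadTurnOffset"]  # hypothetical tweak layer

# Layer 1: the untouched base animation.
base_track = obj.animation_data.nla_tracks.new()
base_track.name = "Base Layer"
base_track.strips.new("base", int(base_action.frame_range[0]), base_action)

# Layer 2: an additive tweak blended on top; the base keyframes are never edited.
tweak_track = obj.animation_data.nla_tracks.new()
tweak_track.name = "Tweak Layer"
strip = tweak_track.strips.new("tweak", int(offset_action.frame_range[0]), offset_action)
strip.blend_type = 'ADD'  # combine with the layer below instead of replacing it
```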
  • Non-destructive animation?

    Non-destructive animation? Yes please
    Then explore Animation Layers: https://superhivemarket.com/products/...

    Animation Layers lets you stack, tweak, and blend animations in Blender—just like in pro studios.
    More control. Less mess.

  • You Can Now Quickly Crop and Edit Photos Before Sharing Them on Android

    Sometimes, you need to make a couple of quick edits before you send off a photo on WhatsApp or Gmail. Perhaps you need to crop out something in the background, or enhance an image to make it clearer. If you're on Android 14 or higher, Google's new Quick Edits feature is here to help. It works kind of like editing screenshots before sharing them, but for everything in your Google Photos library.

    How to quickly edit photos before sharing them

    Credit: Khamosh Pathak

    To use Quick Edits, first update the Google Photos app on your Android smartphone to the latest available version. Then, choose a photo and tap the Share button. Now, instead of directly seeing the Share menu, you'll see a new screen called Quick Edits.

    As it stands, this screen is simple. Around the photo, you'll see the familiar crop feature. You can grab the handles on any corner of the image to crop out anything outside of them. This is a free-form crop, too, so you won't be limited by aspect ratio.

    The only other feature here is the Enhance button. This feature performs an auto-enhance edit on your image. There are no customization options here, but it can be useful to quickly brighten up a dull image.

    When you're ready, tap the Share button below to open the familiar Share menu. Here, you can choose any app to share the image to.

    A unique aspect of the Quick Edits feature is that it's limited to the sharing menu. The crop and the enhancement won't be carried back to the original image in the Google Photos app. That could be annoying if you plan to re-share later, but it also keeps your edits nondestructive.

    How to disable the Quick Edits feature

    While the Quick Edits feature is certainly useful, it's still quite limited. All you can do is crop or perform an auto-enhancement. It would be nice to see some more image editing features added down the line, similar to those in the screenshot editing tool.

    Credit: Khamosh Pathak

    Adding cropping based on aspect ratio, a blur tool, and custom editing options could go a long way. In the meantime, if seeing this limited screen every time you go to share a photo is getting on your nerves, there is a way to disable it. When you're in the Quick Edits screen, tap the Settings icon in the top-right corner. Then, from the popup menu, choose the Turn off option. Now, when you share an image, you'll skip directly to the Share menu. You can enable Quick Edits again anytime from Google Photos Settings > Sharing > Quick Edit Before Sharing.