• The Hidden Tech That Makes Assassin's Creed Shadows Feel More Alive (And Not Require 2TB)

    Most of what happens within the video games we play is invisible to us. Even the elements we're looking straight at work because of what's happening behind the scenes. If you've ever watched a behind-the-scenes video about game development, you might've seen those flat, gray versions of game worlds filled with lines and icons pointing every which way, with multiple grids and layers. These are the visual representations of all the systems that make the game work.

    This is a particularly strange dichotomy to consider when it comes to lighting in any game with a 3D perspective, and especially in high-fidelity games. We don't see light so much as we see everything it touches; it's invisible, but it gives us most of our information about game worlds. And it's a lot more complex than "turn on lamp, room light up." Reflection, absorption, diffusion, subsurface scattering: the movement of light is a complex thing that physicists have explored in the real world for literally centuries, and will likely study for centuries more. In the middle of all of that are game designers, applying the science of light to video games in practical ways, balanced against the limits of even today's powerful GPUs, just to show all us nerds a good time.

    If you've ever wondered why many games feel like static amusement parks waiting for you to interact with a few specific things, lighting is often the reason. But it's also the reason more and more game worlds look vibrant and lifelike. Game developers have gotten good at simulating static lighting; making it move is harder. Dynamic lighting has long been computationally expensive, potentially tanking game performance, and we're finally starting to see that change.

    Continue Reading at GameSpot
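    For a sense of what "simulating lighting" means at the simplest level, here is a minimal sketch of Lambertian diffuse shading, the textbook baseline that real engines extend with reflections, scattering, and per-frame dynamic-light updates. This is purely illustrative, not code from Assassin's Creed Shadows or any particular engine.

```python
import math

def normalize(v):
    """Scale a 3D vector to unit length."""
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def lambert_diffuse(surface_normal, to_light, light_intensity, albedo):
    """Lambert's cosine law: diffusely reflected light is proportional to the
    cosine of the angle between the surface normal and the direction toward
    the light, clamped to zero when the light is behind the surface."""
    n = normalize(surface_normal)
    l = normalize(to_light)
    cos_theta = max(0.0, sum(a * b for a, b in zip(n, l)))
    return albedo * light_intensity * cos_theta

# A surface facing straight up, lit from 45 degrees above the horizon:
print(lambert_diffuse((0, 1, 0), (1, 1, 0), 1.0, 0.8))  # ~0.57
```

    Even this toy version has to run for every visible surface point and every light; a light that moves or flickers forces that work to be redone each frame, which is a big part of why precomputed (static) lighting dominated for so long.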
  • Why keep burning money on monthly cloud services when you can snag a "mind-blowing" 37% off pCloud? Yes, that's right! Why not just hand your cash over to the void? With pCloud's irresistible price shock, it’s almost like they’re saying, “Why pay more for less?”

    After all, who needs reliability when you can have a discount? So why not store your precious memories and cat videos in the cloud at a bargain price? Because nothing says “I love saving money” quite like entrusting your data to a service on sale.

    #pCloud #CloudStorage #Discounts #DataSavings #TechDeals
    pCloud at a shock price: 37% off via our affiliate link
    Why keep paying every month for a cloud service? With pCloud, you […] This article was originally published on REALITE-VIRTUELLE.COM.
  • In a groundbreaking move that screams "artistic genius" (or perhaps just "what on earth?"), Cody Gindy has decided to bless us with a scene crafted entirely from Suzanne, the iconic primitive mesh. Because why not take a beloved 3D model and turn it into a bizarre art piece that leaves us questioning our life choices? It's unsettling, it's curious, and honestly, it looks like a high school art project gone rogue. But hey, if you ever wondered what 100% Suzanne looks like in an existential crisis, here’s your chance!

    Let’s applaud the creativity—or is it madness? Either way, it works surprisingly well.

    #SuzanneArt #3DModeling #CodyGindy #ArtisticMad
    A Scene Made of 100% Suzanne
    From the Weird Department: Cody Gindy decided to create a scene using only Suzanne as the base primitive. While a little unsettling, it works well! Source
  • Resident Evil Requiem Was Almost A Different Game Entirely, Until Capcom Realized "Fans Didn't Want It"

    Earlier this week, Capcom shared a new preview for Resident Evil Requiem, the ninth game in the long-running franchise. According to Capcom, Requiem is already on 1 million wish lists across PSN and Steam. But would that number have been as high if Capcom had gone through with its original plan to make Requiem an online game?

    Resident Evil Requiem director Koshi Nakanishi shared an online developer diary on Capcom's official site (via VGC) and revealed that the game was originally developed as an online title. Nakanishi added that the team came up with "interesting concepts" for the game, but ultimately abandoned its online plans when it realized that RE fans didn't want them. Instead, Requiem was re-envisioned as a single-player game like its eight predecessors.

    Considering Requiem's origins as an online title, that may explain why it features Grace Ashcroft, the daughter of Alyssa Ashcroft from the franchise's first multiplayer game, Resident Evil Outbreak. The story also picks up the narrative threads of the first three RE games by returning to Raccoon City 30 years after it was bombed to end the zombie infestation.

    Continue Reading at GameSpot
  • It’s absolutely infuriating how the gaming community is still desperate for mods to fix the glaring issues in the Legendary Edition of the Mass Effect trilogy! Why should players have to rely on “wildly impressive mods” to make a classic game even remotely enjoyable? Instead of delivering a polished remaster worthy of the iconic franchise, we get a half-baked product that screams negligence from the developers. It’s 2023, and we’re still waiting for a proper treatment of a beloved series, while modders are left to pick up the slack! This is a disgrace! If you’re thinking of revisiting this so-called classic, don’t let the shiny marketing fool you—prepare for disappointment!

    #MassEffect #GamingCommunity #GameMods
    KOTAKU.COM
    These Two Cool Mass Effect Mods Look Like The Perfect Way To Revisit A Classic Trilogy
    If you’re like me and haven’t played the original Mass Effect trilogy in some time, then boy do I have some good news for you if you have the game on PC or are thinking of grabbing a copy on Steam. A pair of wildly impressive mods for the Legendary E
  • Onimusha: Way of the Sword sees the classic series renewed with Capcom's latest technology

    Capcom devs explain why the reboot of the Samurai action-horror series was only possible with the RE Engine.
  • Why Ready or Not feels so real – how Unreal Engine 5 is delivering next-level immersion


    Void's art director and lead designer reveal what it takes to improve on PC perfection.
  • Resident Evil Requiem Continues "Overarching Narrative" That Began In Raccoon City

    Raccoon City was the American setting where Resident Evil began. After multiple instalments set around the world, Resident Evil Requiem producer Masato Kumazawa shared why the series is returning to the ruins of this iconic city.

    In a PlayStation Blog interview, Kumazawa explained that after more recent titles had explored "the broader universe" of Resident Evil, Capcom wanted a story that "continues the overarching narrative rooted in Raccoon City and the secret machinations of the Umbrella Corporation." Following the T-virus outbreak, the US government ordered a missile strike to destroy the city in order to eradicate the virus. In having players return to its ruins about 30 years later, Kumazawa said the team also wanted a character "with a personal connection to the city itself," introducing Grace Ashcroft, the presumed daughter of Resident Evil: Outbreak's Alyssa Ashcroft.

    Continue Reading at GameSpot
  • This Is Why High-End Electric Cars Are Failing

    There's a simple reason why high-end EVs have failed to spark the imaginations of auto buyers. To remedy this, manufacturers need to revisit the days of the Model T.
  • Into the Omniverse: World Foundation Models Advance Autonomous Vehicle Simulation and Safety

    Editor’s note: This blog is a part of Into the Omniverse, a series focused on how developers, 3D practitioners and enterprises can transform their workflows using the latest advances in OpenUSD and NVIDIA Omniverse.
    Simulated driving environments enable engineers to safely and efficiently train, test and validate autonomous vehicles (AVs) across countless real-world and edge-case scenarios without the risks and costs of physical testing.
    These simulated environments can be created through neural reconstruction of real-world data from AV fleets or generated with world foundation models (WFMs) — neural networks that understand physics and real-world properties. WFMs can be used to generate synthetic datasets for enhanced AV simulation.
    To help physical AI developers build such simulated environments, NVIDIA unveiled major advances in WFMs at the GTC Paris and CVPR conferences earlier this month. These new capabilities enhance NVIDIA Cosmos — a platform of generative WFMs, advanced tokenizers, guardrails and accelerated data processing tools.
    Key innovations like Cosmos Predict-2, the Cosmos Transfer-1 NVIDIA preview NIM microservice and Cosmos Reason are improving how AV developers generate synthetic data, build realistic simulated environments and validate safety systems at unprecedented scale.
    Universal Scene Description (OpenUSD), a unified data framework and standard for physical AI applications, enables seamless integration and interoperability of simulation assets across the development pipeline. OpenUSD standardization plays a critical role in ensuring 3D pipelines are built to scale.
    NVIDIA Omniverse, a platform of application programming interfaces, software development kits and services for building OpenUSD-based physical AI applications, enables simulations from WFMs and neural reconstruction at world scale.
    Leading AV organizations — including Foretellix, Mcity, Oxa, Parallel Domain, Plus AI and Uber — are among the first to adopt Cosmos models.

    Foundations for Scalable, Realistic Simulation
    Cosmos Predict-2, NVIDIA’s latest WFM, generates high-quality synthetic data by predicting future world states from multimodal inputs like text, images and video. This capability is critical for creating temporally consistent, realistic scenarios that accelerate training and validation of AVs and robots.

    In addition, Cosmos Transfer, a control model that adds variations in weather, lighting and terrain to existing scenarios, will soon be available to 150,000 developers on CARLA, a leading open-source AV simulator. This greatly expands the broad AV developer community’s access to advanced AI-powered simulation tools.
    Developers can start integrating synthetic data into their own pipelines using the NVIDIA Physical AI Dataset. The latest release includes 40,000 clips generated using Cosmos.
    Building on these foundations, the Omniverse Blueprint for AV simulation provides a standardized, API-driven workflow for constructing rich digital twins, replaying real-world sensor data and generating new ground-truth data for closed-loop testing.
    The blueprint taps into OpenUSD’s layer-stacking and composition arcs, which enable developers to collaborate asynchronously and modify scenes nondestructively. This helps create modular, reusable scenario variants to efficiently generate different weather conditions, traffic patterns and edge cases.
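    As a rough illustration of that layer-stacking idea, the sketch below uses the open-source USD Python API to author a weather variant as its own layer and stack it over a shared base scenario without modifying either file. It is a minimal example with hypothetical file and prim names, not NVIDIA's actual blueprint code.

```python
from pxr import Usd

# Base scenario layer: the shared digital-twin content (hypothetical files).
base = Usd.Stage.CreateNew("base_scenario.usda")
base.DefinePrim("/World", "Xform")
base.DefinePrim("/World/Road", "Mesh")
base.GetRootLayer().Save()

# Weather variant layer: authors only the rain additions, nothing else.
rain = Usd.Stage.CreateNew("weather_rain.usda")
rain.DefinePrim("/World/RainFX", "Xform")
rain.GetRootLayer().Save()

# Scenario stage: stack the variant over the base. Neither source file is
# modified, so other variants (fog, night, dense traffic) can reuse the base.
scenario = Usd.Stage.CreateNew("scenario_rainy.usda")
scenario.GetRootLayer().subLayerPaths = ["weather_rain.usda", "base_scenario.usda"]
scenario.GetRootLayer().Save()

# The composed stage sees /World, /World/Road and /World/RainFX together.
print([str(p.GetPath()) for p in scenario.Traverse()])
```

    Because the variant lives in its own layer, swapping "weather_rain.usda" for a different layer is enough to produce a new scenario permutation, which is the nondestructive, reusable workflow described above.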
    Driving the Future of AV Safety
    To bolster the operational safety of AV systems, NVIDIA earlier this year introduced NVIDIA Halos — a comprehensive safety platform that integrates the company’s full automotive hardware and software stack with AI research focused on AV safety.
    The new Cosmos models — Cosmos Predict-2, Cosmos Transfer-1 NIM and Cosmos Reason — deliver further safety enhancements to the Halos platform, enabling developers to create diverse, controllable and realistic scenarios for training and validating AV systems.
    These models, trained on massive multimodal datasets including driving data, amplify the breadth and depth of simulation, allowing for robust scenario coverage — including rare and safety-critical events — while supporting post-training customization for specialized AV tasks.

    At CVPR, NVIDIA was recognized as an Autonomous Grand Challenge winner, highlighting its leadership in advancing end-to-end AV workflows. The challenge used OpenUSD’s robust metadata and interoperability to simulate sensor inputs and vehicle trajectories in semi-reactive environments, achieving state-of-the-art results in safety and compliance.
    Learn more about how developers are leveraging tools like CARLA, Cosmos and Omniverse to advance AV simulation in this livestream replay.

    Hear NVIDIA Director of Autonomous Vehicle Research Marco Pavone on the NVIDIA AI Podcast share how digital twins and high-fidelity simulation are improving vehicle testing, accelerating development and reducing real-world risks.
    Get Plugged Into the World of OpenUSD
    Learn more about what’s next for AV simulation with OpenUSD by watching the replay of NVIDIA founder and CEO Jensen Huang’s GTC Paris keynote.
    Looking for more opportunities to learn about OpenUSD live? Don’t miss sessions and labs happening at SIGGRAPH 2025, August 10–14.
    Discover why developers and 3D practitioners are using OpenUSD and learn how to optimize 3D workflows with the self-paced “Learn OpenUSD” curriculum for 3D developers and practitioners, available for free through the NVIDIA Deep Learning Institute.
    Explore the Alliance for OpenUSD forum and the AOUSD website.
    Stay up to date by subscribing to NVIDIA Omniverse news, joining the community and following NVIDIA Omniverse on Instagram, LinkedIn, Medium and X.