• Best Stellar Blade Mods

    Following its PlayStation 5-exclusive launch in April 2024, Stellar Blade was released for PC in 2025, allowing more players to follow protagonist Eve's journey. The PC version also marked the arrival of Stellar Blade mods, which cover everything from added customization to performance improvements to some absolutely NSFW content. With Stellar Blade selling more than 1 million copies in just three days on Steam, modders have been hard at work creating adjustments and tweaks for Shift Up's action RPG, which follows Eve on her mission to save humanity from a war against monsters called Naytiba in the distant future. Here are the best Stellar Blade mods you should install right now.

    Ultimate Engine Tweaks
    The Ultimate Engine Tweaks mod by P40L0 addresses Unreal Engine performance issues, including stuttering, stability, input latency, and picture clarity. The modder's goal was to pack in as many optimizations as possible to remove these issues without visual downgrades, glitches, or crashes. There are so many improvements that it's hard to list them all in one place, but P40L0 has compiled a useful spreadsheet detailing every area that has been adjusted. It's quite the read, but the most important takeaway is that your Stellar Blade experience will be an improved one with this mod installed.

    Improved Perfect Defense
    The Improved Perfect Defense mod by Andrew Creekmore increases the Perfect Parry and Perfect Dodge windows from nine frames to 12, while decreasing the guard cooldown duration from 24 frames to 21. The shorter guard cooldown removes the frustrating buffer between inputs during fights. The window and cooldown changes are subtle, but they make a noticeable difference in combat compared to the vanilla experience. At the same time, the Perfect Parry and Dodge mechanics aren't trivialized, and fights can still be challenging.
    Eve Eye Color And Makeup Compendium
    The Eve Eye Color And Makeup Compendium mod by Bourbibouze is one of the rare Stellar Blade mods that doesn't focus on Eve's other ... assets. There's an almost unfathomable number of eye color and makeup options available for Eve now. While it's no secret that Stellar Blade's outfits are one of the game's standout features, adding further customization options can only be a fun little addition.

    Stellar Blade Mod Manager
    The Stellar Blade Mod Manager by Huaisha2049 isn't necessarily a mod in its own right, but it is required to install most Stellar Blade mods. It also lets you download and update mods online, running them through compatibility tests to automatically detect any potential dependency issues. It will automatically find the location of your Stellar Blade install and add mods to the corresponding directory. The only limiting factor is that the game cannot be installed in a Chinese directory.

    Turn On And Off Cheats
    The Turn On And Off Cheats mod by ArkhamXxX pretty much does what it says on the tin. It lets you toggle certain cheats on and off, including God Mode, infinite Beta energy, infinite Burst energy, infinite jump, 69 Skill Points, and infinite Tachy energy. You can use it in the full game or the demo, and loading into story mode or boss challenges enables the functionality. You can modify the hotkeys used to activate or deactivate each cheat, too.

    First-Person
    The fairly self-explanatory First-Person mod from MJ lets you play Stellar Blade from a first-person perspective. That's it. That's the mod.

    Thomas The Tank Engine As Stalker
    We've seen it all before. Your favorite RPG has no doubt had Thomas the Tank Engine modded into it, and this creation by Tenshiken1 replaces Stalker from the boss challenge mode with the menacing engine himself.
    While the mod has a simple drag-and-drop installation, the creator notes that it "should" be compatible with other clothing mods for Eve--but if you have issues, change the alphabetical order in which the mods load. It's not a guaranteed fix, but it has been reported to work.

    All Outfit Unlocks
    The All Outfit Unlocks mod from Prophe33 gives you a save file with every outfit unlocked. You'll no longer have to grind for your favorite fashion for Eve, and you can even jump straight into New Game Plus if you'd like.

    Faster Fishing
    The Faster Fishing mod from T4ke respects your time. After all, who has a moment to waste in a post-apocalyptic world? This mod speeds up fishing by up to 10 times the original speed, and it can also be set to two or four times as fast.

    Blood Edge Weapon Recolors
    There are many, many aesthetic customization options for Eve, some of them less savory than others. One that's definitely safe and adds new levels of customization is the Blood Edge Weapon Recolors mod by NimbusNathan. The Blood Edge sword and hair pin get more color options to match Eve's nanosuits. It's a simple drag-and-drop installation, too, so you can quickly enjoy the extra personalization.
  • Hello everyone! Today, I'm delighted to share with you the incredible world of **19th-century miniature photography**! Thanks to the invention of the microscope, we have been able to dive into a fascinating universe filled with tiny creatures we never could have imagined!

    Each small discovery reminds us that even the tiniest things can have an enormous impact on our lives. It shows us just how essential curiosity and innovation are for moving forward and exploring our world.

    Join this microcosmic adventure and let yourself be inspired by the wonders that surround us.
    HACKADAY.COM
    19th Century Photography in Extreme Miniature
    Ever since the invention of the microscope, humanity has gained access to the world of the incredibly small. Scientists discovered that creatures never known to exist before are alive in …
  • In the quiet corners of my mind, I often find myself lost in images that slip through my fingers like sand. The idea of capturing a moment with a single photoresistor, yet never facing the light, mirrors my own existence—forever yearning, yet perpetually unseen. Each pixel is a reminder of what could have been, a reflection of the loneliness that envelops me. How can one picture a world when all I have are shadows? I stand here, a ghost in my own life, surrounded by echoes of memories that no longer resonate.

    #Loneliness #Heartbreak #Reflections #Unseen #EmotionalPhotography
    HACKADAY.COM
    Pictures from Paper Reflections and a Single Pixel
    Taking a picture with a single photoresistor is a brain-breaking idea. But go deeper and imagine taking that same picture with the same photoresistor, but without even facing the object. …
  • Fancy airplane seats have officially reached their peak! I mean, what’s next? A personal butler serving caviar at 30,000 feet? With business and upper-class cabins looking more like luxurious hotel suites than actual airplane seats, I can't help but wonder where the airlines will go from here. Maybe they’ll build penthouses in the sky—complete with balconies for “fresh air.” Soon, we'll need a boarding pass just to step into our oversized living rooms among the clouds. Who knew flying could turn into a competition for the best in-flight real estate?

    #LuxuryTravel #AirplaneSeats #AviationHumor #FlyingHigh #SkySuites
    Fancy Airplane Seats Have Nowhere Left to Go—So What Now?
    Upper- and business-class cabins have expanded to the point where the top tier resembles hotel suites more than passenger pods. But what happens now that airlines have no more room to offer?
  • The Hidden Tech That Makes Assassin's Creed Shadows Feel More Alive (And Not Require 2TB)

    Most of what happens within the video games we play is invisible to us. Even the elements we're looking straight at work because of what's happening behind the scenes. If you've ever watched a behind-the-scenes video about game development, you might have seen versions of flat, gray game worlds filled with lines and icons pointing every which way, with multiple grids and layers. These are the visual representations of all the systems that make the game work.

    This is an especially weird dichotomy to consider when it comes to lighting in any game with a 3D perspective, but especially so in high-fidelity games. We don't see light so much as we see everything it touches; it's invisible, but it gives us most of our information about game worlds. And it's a lot more complex than "turn on lamp, room light up." Reflection, absorption, diffusion, subsurface scattering--the movement of light is a complex phenomenon that physicists have explored for literally centuries, and it will likely be studied for centuries more. In the middle of all of that are game designers, applying the science of light to video games in practical ways, balanced against the limitations of even today's powerful GPUs, just to show all us nerds a good time.

    If you've wondered why many games seem like static amusement parks waiting for you to interact with a few specific things, lighting is often the reason. But it's also the reason more and more game worlds look vibrant and lifelike. Game developers have gotten good at simulating static lighting, but making it move is harder. Dynamic lighting has long been computationally expensive, potentially tanking game performance, and we're finally starting to see that change.

    Continue Reading at GameSpot
  • NVIDIA Scores Consecutive Win for End-to-End Autonomous Driving Grand Challenge at CVPR

    NVIDIA was today named an Autonomous Grand Challenge winner at the Computer Vision and Pattern Recognition (CVPR) conference, held this week in Nashville, Tennessee. The announcement was made at the Embodied Intelligence for Autonomous Systems on the Horizon Workshop.
    This marks the second consecutive year that NVIDIA has topped the leaderboard in the End-to-End Driving at Scale category and the third year in a row winning an Autonomous Grand Challenge award at CVPR.
    The theme of this year’s challenge was “Towards Generalizable Embodied Systems” — based on NAVSIM v2, a data-driven, nonreactive autonomous vehicle (AV) simulation framework.
    The challenge offered researchers the opportunity to explore ways to handle unexpected situations, beyond using only real-world human driving data, to accelerate the development of smarter, safer AVs.
    Generating Safe and Adaptive Driving Trajectories
    Participants of the challenge were tasked with generating driving trajectories from multi-sensor data in a semi-reactive simulation, where the ego vehicle’s plan is fixed at the start, but background traffic changes dynamically.
    Submissions were evaluated using the Extended Predictive Driver Model Score, which measures safety, comfort, compliance and generalization across real-world and synthetic scenarios — pushing the boundaries of robust and generalizable autonomous driving research.
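    The evaluation structure described above can be sketched in a few lines. This is only an illustration, not the official Extended Predictive Driver Model Score: the metric names, weights, and the gate-times-weighted-average form below are assumptions, loosely modeled on how driver-model-style scores combine hard pass/fail checks with soft quality metrics.

    ```python
    # Illustrative sketch of a driver-model-style trajectory score: hard
    # constraints act as multiplicative gates (any violation zeroes the score),
    # while soft metrics are blended as a weighted average. The metric names
    # and weights are hypothetical, not the official EPDMS definition.

    def trajectory_score(hard, soft, weights):
        """hard: dict of pass/fail checks; soft: dict of metrics in [0, 1]."""
        gate = 1.0
        for passed in hard.values():
            gate *= 1.0 if passed else 0.0  # one failure zeroes everything
        total_w = sum(weights.values())
        blended = sum(weights[k] * soft[k] for k in soft) / total_w
        return gate * blended

    score = trajectory_score(
        hard={"no_collision": True, "drivable_area": True},
        soft={"progress": 0.9, "comfort": 0.8},
        weights={"progress": 2.0, "comfort": 1.0},
    )
    print(round(score, 3))  # (2*0.9 + 1*0.8)/3 ≈ 0.867
    ```

    The multiplicative gate captures why a trajectory that collides scores zero regardless of how comfortable or fast it is.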
    The NVIDIA AV Applied Research Team’s key innovation was the Generalized Trajectory Scoring (GTRS) method, which generates a variety of trajectories and progressively filters them down to the best one.
    GTRS model architecture showing a unified system for generating and scoring diverse driving trajectories using diffusion- and vocabulary-based trajectories.
    GTRS introduces a combination of coarse trajectory sets covering a wide range of situations and fine-grained trajectories for safety-critical situations, created using a diffusion policy conditioned on the environment. GTRS then uses a transformer decoder distilled from perception-dependent metrics, focusing on safety, comfort and traffic rule compliance. This decoder progressively filters the candidate set down to the most promising trajectories by capturing subtle but critical differences between similar trajectories.
    This system has proved to generalize well to a wide range of scenarios, achieving state-of-the-art results on challenging benchmarks and enabling robust, adaptive trajectory selection in diverse and challenging driving conditions.
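    The coarse-to-fine filtering idea can be sketched as follows. This is a toy illustration, not the GTRS implementation: the candidate counts and both scoring functions are stand-ins (the real system generates candidates with a diffusion policy and scores them with a distilled transformer decoder).

    ```python
    # Minimal sketch of progressive candidate filtering: score a large pool of
    # trajectory candidates with a cheap coarse scorer, keep the top fraction,
    # then re-rank the survivors with a more expensive fine scorer. Both
    # scorers here are hypothetical stand-ins for GTRS's learned components.
    import numpy as np

    rng = np.random.default_rng(0)
    candidates = rng.normal(size=(256, 40, 2))  # 256 trajectories, 40 (x, y) steps

    def coarse_score(trajs):
        # stand-in: reward forward progress (net displacement along x)
        return trajs[:, -1, 0] - trajs[:, 0, 0]

    def fine_score(trajs):
        # stand-in: progress minus a smoothness (comfort) penalty
        jerk = np.diff(trajs, n=2, axis=1)
        return coarse_score(trajs) - 0.1 * np.abs(jerk).sum(axis=(1, 2))

    # Stage 1: keep the 32 best candidates under the cheap scorer.
    keep = np.argsort(coarse_score(candidates))[-32:]
    survivors = candidates[keep]

    # Stage 2: pick the single best survivor under the fine scorer.
    best = survivors[np.argmax(fine_score(survivors))]
    print(best.shape)  # (40, 2)
    ```

    The two-stage structure is what makes the approach cheap: the expensive scorer only ever sees the small set of candidates that survived the coarse pass.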

    NVIDIA Automotive Research at CVPR 
    More than 60 NVIDIA papers were accepted for CVPR 2025, spanning automotive, healthcare, robotics and more.
    In automotive, NVIDIA researchers are advancing physical AI with innovation in perception, planning and data generation. This year, three NVIDIA papers were nominated for the Best Paper Award: FoundationStereo, Zero-Shot Monocular Scene Flow and Difix3D+.
    The NVIDIA papers listed below showcase breakthroughs in stereo depth estimation, monocular motion understanding, 3D reconstruction, closed-loop planning, vision-language modeling and generative simulation — all critical to building safer, more generalizable AVs:

    Diffusion Renderer: Neural Inverse and Forward Rendering With Video Diffusion Models
    FoundationStereo: Zero-Shot Stereo Matching
    Zero-Shot Monocular Scene Flow Estimation in the Wild
    Difix3D+: Improving 3D Reconstructions With Single-Step Diffusion Models
    3DGUT: Enabling Distorted Cameras and Secondary Rays in Gaussian Splatting
    Closed-Loop Supervised Fine-Tuning of Tokenized Traffic Models
    Zero-Shot 4D Lidar Panoptic Segmentation
    NVILA: Efficient Frontier Visual Language Models
    RADIO Amplified: Improved Baselines for Agglomerative Vision Foundation Models
    OmniDrive: A Holistic Vision-Language Dataset for Autonomous Driving With Counterfactual Reasoning

    Explore automotive workshops and tutorials at CVPR, including:

    Workshop on Data-Driven Autonomous Driving Simulation, featuring Marco Pavone, senior director of AV research at NVIDIA, and Sanja Fidler, vice president of AI research at NVIDIA
    Workshop on Autonomous Driving, featuring Laura Leal-Taixe, senior research manager at NVIDIA
    Workshop on Open-World 3D Scene Understanding with Foundation Models, featuring Leal-Taixe
    Safe Artificial Intelligence for All Domains, featuring Jose Alvarez, director of AV applied research at NVIDIA
    Workshop on Foundation Models for V2X-Based Cooperative Autonomous Driving, featuring Pavone and Leal-Taixe
    Workshop on Multi-Agent Embodied Intelligent Systems Meet Generative AI Era, featuring Pavone
    LatinX in CV Workshop, featuring Leal-Taixe
    Workshop on Exploring the Next Generation of Data, featuring Alvarez
    Full-Stack, GPU-Based Acceleration of Deep Learning and Foundation Models, led by NVIDIA
    Continuous Data Cycle via Foundation Models, led by NVIDIA
    Distillation of Foundation Models for Autonomous Driving, led by NVIDIA

    Explore the NVIDIA research papers to be presented at CVPR and watch the NVIDIA GTC Paris keynote from NVIDIA founder and CEO Jensen Huang.
    Learn more about NVIDIA Research, a global team of hundreds of scientists and engineers focused on topics including AI, computer graphics, computer vision, self-driving cars and robotics.
    The featured image above shows how an autonomous vehicle adapts its trajectory to navigate an urban environment with dynamic traffic using the GTRS model.
    #nvidia #scores #consecutive #win #endtoend
    NVIDIA Scores Consecutive Win for End-to-End Autonomous Driving Grand Challenge at CVPR
    NVIDIA was today named an Autonomous Grand Challenge winner at the Computer Vision and Pattern Recognition (CVPR) conference, held this week in Nashville, Tennessee. The announcement was made at the Embodied Intelligence for Autonomous Systems on the Horizon Workshop. This marks the second consecutive year that NVIDIA has topped the leaderboard in the End-to-End Driving at Scale category, and the third year in a row it has won an Autonomous Grand Challenge award at CVPR.
    The theme of this year's challenge was "Towards Generalizable Embodied Systems," based on NAVSIM v2, a data-driven, nonreactive autonomous vehicle (AV) simulation framework. The challenge offered researchers the opportunity to explore ways to handle unexpected situations, beyond using only real-world human driving data, to accelerate the development of smarter, safer AVs.
    Generating Safe and Adaptive Driving Trajectories
    Participants were tasked with generating driving trajectories from multi-sensor data in a semi-reactive simulation, where the ego vehicle's plan is fixed at the start but background traffic changes dynamically. Submissions were evaluated using the Extended Predictive Driver Model Score, which measures safety, comfort, compliance and generalization across real-world and synthetic scenarios, pushing the boundaries of robust and generalizable autonomous driving research.
    The NVIDIA AV Applied Research Team's key innovation was the Generalized Trajectory Scoring (GTRS) method, which generates a large, diverse set of candidate trajectories and progressively filters them down to the best one. GTRS combines coarse trajectory sets that cover a wide range of situations with fine-grained trajectories for safety-critical situations, the latter created using a diffusion policy conditioned on the environment. [Figure: GTRS model architecture, showing a unified system for generating and scoring diverse driving trajectories using diffusion- and vocabulary-based trajectories.]
    GTRS then uses a transformer decoder distilled from perception-dependent metrics focused on safety, comfort and traffic rule compliance. This decoder progressively narrows the candidate set to the most promising trajectories by capturing subtle but critical differences between similar ones. The system has proved to generalize well, achieving state-of-the-art results on challenging benchmarks and enabling robust, adaptive trajectory selection in diverse driving conditions.
    NVIDIA Automotive Research at CVPR
    More than 60 NVIDIA papers were accepted for CVPR 2025, spanning automotive, healthcare, robotics and more. In automotive, NVIDIA researchers are advancing physical AI with innovations in perception, planning and data generation. This year, three NVIDIA papers were nominated for the Best Paper Award: FoundationStereo, Zero-Shot Monocular Scene Flow and Difix3D+.
    The NVIDIA papers listed below showcase breakthroughs in stereo depth estimation, monocular motion understanding, 3D reconstruction, closed-loop planning, vision-language modeling and generative simulation, all critical to building safer, more generalizable AVs:
    Diffusion Renderer: Neural Inverse and Forward Rendering With Video Diffusion Models
    FoundationStereo: Zero-Shot Stereo Matching (Best Paper nominee)
    Zero-Shot Monocular Scene Flow Estimation in the Wild (Best Paper nominee)
    Difix3D+: Improving 3D Reconstructions With Single-Step Diffusion Models (Best Paper nominee)
    3DGUT: Enabling Distorted Cameras and Secondary Rays in Gaussian Splatting
    Closed-Loop Supervised Fine-Tuning of Tokenized Traffic Models
    Zero-Shot 4D Lidar Panoptic Segmentation
    NVILA: Efficient Frontier Visual Language Models
    RADIO Amplified: Improved Baselines for Agglomerative Vision Foundation Models
    OmniDrive: A Holistic Vision-Language Dataset for Autonomous Driving With Counterfactual Reasoning
    Explore automotive workshops and tutorials at CVPR, including:
    Workshop on Data-Driven Autonomous Driving Simulation, featuring Marco Pavone, senior director of AV research at NVIDIA, and Sanja Fidler, vice president of AI research at NVIDIA
    Workshop on Autonomous Driving, featuring Laura Leal-Taixe, senior research manager at NVIDIA
    Workshop on Open-World 3D Scene Understanding with Foundation Models, featuring Leal-Taixe
    Safe Artificial Intelligence for All Domains, featuring Jose Alvarez, director of AV applied research at NVIDIA
    Workshop on Foundation Models for V2X-Based Cooperative Autonomous Driving, featuring Pavone and Leal-Taixe
    Workshop on Multi-Agent Embodied Intelligent Systems Meet Generative AI Era, featuring Pavone
    LatinX in CV Workshop, featuring Leal-Taixe
    Workshop on Exploring the Next Generation of Data, featuring Alvarez
    Full-Stack, GPU-Based Acceleration of Deep Learning and Foundation Models, led by NVIDIA
    Continuous Data Cycle via Foundation Models, led by NVIDIA
    Distillation of Foundation Models for Autonomous Driving, led by NVIDIA
    Explore the NVIDIA research papers to be presented at CVPR and watch the NVIDIA GTC Paris keynote from NVIDIA founder and CEO Jensen Huang. Learn more about NVIDIA Research, a global team of hundreds of scientists and engineers focused on topics including AI, computer graphics, computer vision, self-driving cars and robotics.
    The featured image above shows how an autonomous vehicle adapts its trajectory to navigate an urban environment with dynamic traffic using the GTRS model.
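The generate-then-progressively-filter idea behind GTRS can be illustrated with a minimal sketch. Everything below is a toy stand-in, not NVIDIA's implementation: the coarse "vocabulary" and fine candidates are random arrays (the blog attributes the fine candidates to a diffusion policy), and the scorer is a simple smoothness heuristic where GTRS uses a transformer decoder distilled from safety, comfort and compliance metrics.

```python
import numpy as np

rng = np.random.default_rng(0)

def coarse_vocabulary(n=64, horizon=8):
    # Hypothetical coarse trajectory set covering a wide range of situations:
    # n trajectories of `horizon` (x, y) waypoints.
    return rng.normal(size=(n, horizon, 2))

def fine_candidates(seed_traj, n=16, scale=0.1):
    # Hypothetical fine-grained candidates near a promising seed trajectory
    # (a stand-in for the environment-conditioned diffusion policy).
    return seed_traj + rng.normal(scale=scale, size=(n,) + seed_traj.shape)

def score(trajs):
    # Toy scorer: penalize jerky motion via second differences along the
    # horizon. In GTRS this role is played by a distilled transformer decoder.
    accel = np.diff(trajs, n=2, axis=1)
    return -np.abs(accel).sum(axis=(1, 2))

def progressive_filter(trajs, keep_fracs=(0.5, 0.25)):
    # Progressively narrow the candidate set to the most promising
    # trajectories, then return the single best survivor.
    for frac in keep_fracs:
        k = max(1, int(len(trajs) * frac))
        trajs = trajs[np.argsort(score(trajs))[-k:]]
    return trajs[np.argmax(score(trajs))]

coarse = coarse_vocabulary()
best_coarse = coarse[np.argmax(score(coarse))]
candidates = np.concatenate([coarse, fine_candidates(best_coarse)])
best = progressive_filter(candidates)
print(best.shape)  # (8, 2)
```

Because each pruning round keeps the top-scoring fraction, the final pick always matches the best-scoring candidate overall; the staged structure matters when scoring is expensive and cheaper proxies can be used in early rounds.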
    BLOGS.NVIDIA.COM
  • New Book On The Life Of Stan Lee Discounted At Amazon

    The Stan Lee Story $78.57 (was $100) | Releases July 1 Preorder at Amazon
    It's not unfair to say that the late Stan Lee was not only one of Marvel Comics' most important creators, but also one of the most recognizable ambassadors for the entire comic book industry. If you're interested in his life story, you'll want to check out the upcoming book The Stan Lee Story. The book chronicles his work, starting from his early days in 1940 at Timely Comics, through his work in Hollywood, and his impact on other comic creators. The Stan Lee Story launches July 1 for $100, but if you act fast, you can grab it at a discount for just $78.57 at Amazon.
    Published by Taschen and overseen by legendary comics writer Roy Thomas, this 576-page deluxe book features a foreword written by Lee. It includes never-before-seen art and photographs sourced straight from Lee's family archives, a novel-length essay, an epilogue by Thomas, and an appendix covering all of the comics Lee worked on across multiple decades.
    While this deal on The Stan Lee Story is a great opportunity to learn more about one of the medium's legendary figures, there are plenty of other books that explore Marvel's history. One notable release is Origins of Marvel Comics, which was first published in 1974 and reissued in a deluxe edition last year. Written by Lee, Origins of Marvel Comics highlights the comic book characters that helped turn Marvel into a dominant force, as well as the talented creators who brought them to life. There's also Marvel Comics: The Untold Story, which chronicles the publishing company's early years through the accounts of the people who worked there. Another great pick is Jack Kirby: The Epic Life of the King of Comics, which recounts Kirby's life and prolific career as one of Marvel's most recognizable illustrators. Unlike the prose books mentioned above, this is a graphic novel written by Eisner Award-winning author Tom Scioli.
    Continue Reading at GameSpot
    WWW.GAMESPOT.COM
  • NVIDIA Brings Physical AI to European Cities With New Blueprint for Smart City AI

    Urban populations are expected to double by 2050, which means around 2.5 billion people could be added to urban areas by the middle of the century, driving the need for more sustainable urban planning and public services. Cities across the globe are turning to digital twins and AI agents for urban planning scenario analysis and data-driven operational decisions.
    Building a digital twin of a city and testing smart city AI agents within it, however, is a complex and resource-intensive endeavor, fraught with technical and operational challenges.
    To address those challenges, NVIDIA today announced the NVIDIA Omniverse Blueprint for smart city AI, a reference framework that combines the NVIDIA Omniverse, Cosmos, NeMo and Metropolis platforms to bring the benefits of physical AI to entire cities and their critical infrastructure.
    Using the blueprint, developers can create simulation-ready, or SimReady, photorealistic digital twins of cities in which to build and test AI agents that help monitor and optimize city operations.
    Leading companies including XXII, AVES Reality, Akila, Blyncsy, Bentley, Cesium, K2K, Linker Vision, Milestone Systems, Nebius, SNCF Gares&Connexions, Trimble and Younite AI are among the first to use the new blueprint.

    NVIDIA Omniverse Blueprint for Smart City AI 
    The NVIDIA Omniverse Blueprint for smart city AI provides the complete software stack needed to accelerate the development and testing of AI agents in physically accurate digital twins of cities. It includes:

    NVIDIA Omniverse to build physically accurate digital twins and run simulations at city scale.
    NVIDIA Cosmos to generate synthetic data at scale for post-training AI models.
    NVIDIA NeMo to curate high-quality data and use that data to train and fine-tune vision language models (VLMs) and large language models.
    NVIDIA Metropolis to build and deploy video analytics AI agents based on the NVIDIA AI Blueprint for video search and summarization (VSS), helping process vast amounts of video data and provide critical insights to optimize business processes.

    The blueprint workflow comprises three key steps. First, developers create a SimReady digital twin of locations and facilities using aerial, satellite or map data with Omniverse and Cosmos. Second, they can train and fine-tune AI models, like computer vision models and VLMs, using NVIDIA TAO and NeMo Curator to improve accuracy for vision AI use cases. Finally, real-time AI agents powered by these customized models are deployed to alert, summarize and query camera and sensor data using the Metropolis VSS blueprint.
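The three-step workflow above can be sketched as a small pipeline. This is a hypothetical illustration only: the function and class names below are invented stand-ins for the blueprint's stages, not real Omniverse, NeMo or Metropolis APIs.

```python
from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    source: str            # aerial, satellite or map data used to build it
    sim_ready: bool = True

@dataclass
class VisionModel:
    base: str
    fine_tuned_on: list = field(default_factory=list)

def build_twin(source_data: str) -> DigitalTwin:
    # Step 1 (stand-in): create a SimReady digital twin of the location
    # (the blog attributes this to Omniverse and Cosmos).
    return DigitalTwin(source=source_data)

def fine_tune(model: VisionModel, synthetic_batches: list) -> VisionModel:
    # Step 2 (stand-in): curate data and fine-tune CV models/VLMs
    # (NVIDIA TAO and NeMo Curator in the blog).
    model.fine_tuned_on.extend(synthetic_batches)
    return model

def deploy_agent(twin: DigitalTwin, model: VisionModel):
    # Step 3 (stand-in): deploy a real-time agent that alerts, summarizes
    # and queries camera/sensor data (the Metropolis VSS blueprint).
    def agent(event: str) -> str:
        return f"alert: {event} (model={model.base}, twin={twin.source})"
    return agent

twin = build_twin("aerial-imagery")
vlm = fine_tune(VisionModel(base="vlm-base"), ["synthetic-traffic", "synthetic-weather"])
agent = deploy_agent(twin, vlm)
print(agent("stalled vehicle"))
```

The point of the sketch is the data flow: the twin grounds simulation, synthetic data from it feeds model fine-tuning, and the tuned model powers the deployed agent.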
    NVIDIA Partner Ecosystem Powers Smart Cities Worldwide
    The blueprint for smart city AI enables a large ecosystem of partners to use a single workflow to build and activate digital twins for smart city use cases, tapping into a combination of NVIDIA’s technologies and their own.
    SNCF Gares&Connexions, which operates a network of 3,000 train stations across France and Monaco, has deployed a digital twin and AI agents to enable real-time operational monitoring, emergency response simulations and infrastructure upgrade planning.
    This helps each station analyze operational data such as energy and water use, and enables predictive maintenance capabilities, automated reporting and GDPR-compliant video analytics for incident detection and crowd management.
    Powered by Omniverse, Metropolis and solutions from ecosystem partners Akila and XXII, SNCF Gares&Connexions’ physical AI deployment at the Monaco-Monte-Carlo and Marseille stations has helped the operator achieve a 100% on-time preventive maintenance completion rate, a 50% reduction in downtime and issue response time, and a 20% reduction in energy consumption.

    The city of Palermo in Sicily is using AI agents and digital twins from its partner K2K to improve public health and safety by helping city operators process and analyze footage from over 1,000 public video streams at a rate of nearly 50 billion pixels per second.
    Tapped by the city, K2K’s AI agents, built with the NVIDIA AI Blueprint for VSS and cloud solutions from Nebius, can interpret and act on video data to provide real-time alerts on public events.
    To accurately predict and resolve traffic incidents, K2K is generating synthetic data with Cosmos world foundation models to simulate different driving conditions. Then, K2K uses the data to fine-tune the VLMs powering the AI agents with NeMo Curator. These simulations enable K2K’s AI agents to create over 100,000 predictions per second.

    Milestone Systems — in collaboration with NVIDIA and European cities — has launched Project Hafnia, an initiative to build an anonymized, ethically sourced video data platform for cities to develop and train AI models and applications while maintaining regulatory compliance.
    Using a combination of Cosmos and NeMo Curator on NVIDIA DGX Cloud and Nebius’ sovereign European cloud infrastructure, Project Hafnia scales up and enables European-compliant training and fine-tuning of video-centric AI models, including VLMs, for a variety of smart city use cases.
    The project’s initial rollout, taking place in Genoa, Italy, features one of the world’s first VLM models for intelligent transportation systems.

    Linker Vision was among the first to partner with NVIDIA to deploy smart city digital twins and AI agents for Kaohsiung City, Taiwan — powered by Omniverse, Cosmos and Metropolis. Linker Vision worked with AVES Reality, a digital twin company, to bring aerial imagery of cities and infrastructure into 3D geometry and ultimately into SimReady Omniverse digital twins.
    Linker Vision’s AI-powered application then built, trained and tested visual AI agents in a digital twin before deployment in the physical city. Now, it’s scaling to analyze 50,000 video streams in real time with generative AI to understand and narrate complex urban events like floods and traffic accidents. Linker Vision delivers timely insights to a dozen city departments through a single integrated AI-powered platform, breaking silos and reducing incident response times by up to 80%.

    Bentley Systems is joining the effort to bring physical AI to cities with the NVIDIA blueprint. Cesium, the open 3D geospatial platform, provides the foundation for visualizing, analyzing and managing infrastructure projects, and ports digital twins to Omniverse. Bentley’s AI platform Blyncsy uses synthetic data generation and Metropolis to analyze road conditions and improve maintenance.
    Trimble, a global technology company that enables essential industries including construction, geospatial and transportation, is exploring ways to integrate components of the Omniverse blueprint into its reality capture workflows and Trimble Connect digital twin platform for surveying and mapping applications for smart cities.
    Younite AI, a developer of AI and 3D digital twin solutions, is adopting the blueprint to accelerate its development pipeline, enabling the company to quickly move from operational digital twins to large-scale urban simulations, improve synthetic data generation, integrate real-time IoT sensor data and deploy AI agents.
    Learn more about the NVIDIA Omniverse Blueprint for smart city AI by attending this GTC Paris session or watching the on-demand video after the event. Sign up to be notified when the blueprint is available.
    Watch the NVIDIA GTC Paris keynote from NVIDIA founder and CEO Jensen Huang at VivaTech, and explore GTC Paris sessions.
    #nvidia #brings #physical #european #cities
    NVIDIA Brings Physical AI to European Cities With New Blueprint for Smart City AI
    Urban populations are expected to double by 2050, which means around 2.5 billion people could be added to urban areas by the middle of the century, driving the need for more sustainable urban planning and public services. Cities across the globe are turning to digital twins and AI agents for urban planning scenario analysis and data-driven operational decisions. Building a digital twin of a city and testing smart city AI agents within it, however, is a complex and resource-intensive endeavor, fraught with technical and operational challenges. To address those challenges, NVIDIA today announced the NVIDIA Omniverse Blueprint for smart city AI, a reference framework that combines the NVIDIA Omniverse, Cosmos, NeMo and Metropolis platforms to bring the benefits of physical AI to entire cities and their critical infrastructure. Using the blueprint, developers can build simulation-ready, or SimReady, photorealistic digital twins of cities to build and test AI agents that can help monitor and optimize city operations. Leading companies including XXII, AVES Reality, Akila, Blyncsy, Bentley, Cesium, K2K, Linker Vision, Milestone Systems, Nebius, SNCF Gares&Connexions, Trimble and Younite AI are among the first to use the new blueprint. NVIDIA Omniverse Blueprint for Smart City AI  The NVIDIA Omniverse Blueprint for smart city AI provides the complete software stack needed to accelerate the development and testing of AI agents in physically accurate digital twins of cities. It includes: NVIDIA Omniverse to build physically accurate digital twins and run simulations at city scale. NVIDIA Cosmos to generate synthetic data at scale for post-training AI models. NVIDIA NeMo to curate high-quality data and use that data to train and fine-tune vision language modelsand large language models. 
NVIDIA Metropolis to build and deploy video analytics AI agents based on the NVIDIA AI Blueprint for video search and summarization, helping process vast amounts of video data and provide critical insights to optimize business processes. The blueprint workflow comprises three key steps. First, developers create a SimReady digital twin of locations and facilities using aerial, satellite or map data with Omniverse and Cosmos. Second, they can train and fine-tune AI models, like computer vision models and VLMs, using NVIDIA TAO and NeMo Curator to improve accuracy for vision AI use cases​. Finally, real-time AI agents powered by these customized models are deployed to alert, summarize and query camera and sensor data using the Metropolis VSS blueprint. NVIDIA Partner Ecosystem Powers Smart Cities Worldwide The blueprint for smart city AI enables a large ecosystem of partners to use a single workflow to build and activate digital twins for smart city use cases, tapping into a combination of NVIDIA’s technologies and their own. SNCF Gares&Connexions, which operates a network of 3,000 train stations across France and Monaco, has deployed a digital twin and AI agents to enable real-time operational monitoring, emergency response simulations and infrastructure upgrade planning. This helps each station analyze operational data such as energy and water use, and enables predictive maintenance capabilities, automated reporting and GDPR-compliant video analytics for incident detection and crowd management. Powered by Omniverse, Metropolis and solutions from ecosystem partners Akila and XXII, SNCF Gares&Connexions’ physical AI deployment at the Monaco-Monte-Carlo and Marseille stations has helped SNCF Gares&Connexions achieve a 100% on-time preventive maintenance completion rate, a 50% reduction in downtime and issue response time, and a 20% reduction in energy consumption. 
The city of Palermo in Sicily is using AI agents and digital twins from its partner K2K to improve public health and safety by helping city operators process and analyze footage from over 1,000 public video streams at a rate of nearly 50 billion pixels per second. Tapped by Sicily, K2K’s AI agents — built with the NVIDIA AI Blueprint for VSS and cloud solutions from Nebius — can interpret and act on video data to provide real-time alerts on public events. To accurately predict and resolve traffic incidents, K2K is generating synthetic data with Cosmos world foundation models to simulate different driving conditions. Then, K2K uses the data to fine-tune the VLMs powering the AI agents with NeMo Curator. These simulations enable K2K’s AI agents to create over 100,000 predictions per second. Milestone Systems — in collaboration with NVIDIA and European cities — has launched Project Hafnia, an initiative to build an anonymized, ethically sourced video data platform for cities to develop and train AI models and applications while maintaining regulatory compliance. Using a combination of Cosmos and NeMo Curator on NVIDIA DGX Cloud and Nebius’ sovereign European cloud infrastructure, Project Hafnia scales up and enables European-compliant training and fine-tuning of video-centric AI models, including VLMs, for a variety of smart city use cases. The project’s initial rollout, taking place in Genoa, Italy, features one of the world’s first VLM models for intelligent transportation systems. Linker Vision was among the first to partner with NVIDIA to deploy smart city digital twins and AI agents for Kaohsiung City, Taiwan — powered by Omniverse, Cosmos and Metropolis. Linker Vision worked with AVES Reality, a digital twin company, to bring aerial imagery of cities and infrastructure into 3D geometry and ultimately into SimReady Omniverse digital twins. 
Linker Vision’s AI-powered application then built, trained and tested visual AI agents in a digital twin before deployment in the physical city. Now, it’s scaling to analyze 50,000 video streams in real time with generative AI to understand and narrate complex urban events like floods and traffic accidents. Linker Vision delivers timely insights to a dozen city departments through a single integrated AI-powered platform, breaking silos and reducing incident response times by up to 80%. Bentley Systems is joining the effort to bring physical AI to cities with the NVIDIA blueprint. Cesium, the open 3D geospatial platform, provides the foundation for visualizing, analyzing and managing infrastructure projects and ports digital twins to Omniverse. The company’s AI platform Blyncsy uses synthetic data generation and Metropolis to analyze road conditions and improve maintenance. Trimble, a global technology company that enables essential industries including construction, geospatial and transportation, is exploring ways to integrate components of the Omniverse blueprint into its reality capture workflows and Trimble Connect digital twin platform for surveying and mapping applications for smart cities. Younite AI, a developer of AI and 3D digital twin solutions, is adopting the blueprint to accelerate its development pipeline, enabling the company to quickly move from operational digital twins to large-scale urban simulations, improve synthetic data generation, integrate real-time IoT sensor data and deploy AI agents. Learn more about the NVIDIA Omniverse Blueprint for smart city AI by attending this GTC Paris session or watching the on-demand video after the event. Sign up to be notified when the blueprint is available. Watch the NVIDIA GTC Paris keynote from NVIDIA founder and CEO Jensen Huang at VivaTech, and explore GTC Paris sessions. #nvidia #brings #physical #european #cities
    BLOGS.NVIDIA.COM
    NVIDIA Brings Physical AI to European Cities With New Blueprint for Smart City AI
    Urban populations are expected to double by 2050, which means around 2.5 billion people could be added to urban areas by the middle of the century, driving the need for more sustainable urban planning and public services. Cities across the globe are turning to digital twins and AI agents for urban planning scenario analysis and data-driven operational decisions. Building a digital twin of a city and testing smart city AI agents within it, however, is a complex and resource-intensive endeavor, fraught with technical and operational challenges.

To address those challenges, NVIDIA today announced the NVIDIA Omniverse Blueprint for smart city AI, a reference framework that combines the NVIDIA Omniverse, Cosmos, NeMo and Metropolis platforms to bring the benefits of physical AI to entire cities and their critical infrastructure. Using the blueprint, developers can build simulation-ready, or SimReady, photorealistic digital twins of cities to build and test AI agents that can help monitor and optimize city operations. Leading companies including XXII, AVES Reality, Akila, Blyncsy, Bentley, Cesium, K2K, Linker Vision, Milestone Systems, Nebius, SNCF Gares&Connexions, Trimble and Younite AI are among the first to use the new blueprint.

NVIDIA Omniverse Blueprint for Smart City AI

The NVIDIA Omniverse Blueprint for smart city AI provides the complete software stack needed to accelerate the development and testing of AI agents in physically accurate digital twins of cities. It includes:

- NVIDIA Omniverse to build physically accurate digital twins and run simulations at city scale.
- NVIDIA Cosmos to generate synthetic data at scale for post-training AI models.
- NVIDIA NeMo to curate high-quality data and use that data to train and fine-tune vision language models (VLMs) and large language models.
- NVIDIA Metropolis to build and deploy video analytics AI agents based on the NVIDIA AI Blueprint for video search and summarization (VSS), helping process vast amounts of video data and provide critical insights to optimize business processes.

The blueprint workflow comprises three key steps. First, developers create a SimReady digital twin of locations and facilities using aerial, satellite or map data with Omniverse and Cosmos. Second, they train and fine-tune AI models, like computer vision models and VLMs, using NVIDIA TAO and NeMo Curator to improve accuracy for vision AI use cases. Finally, real-time AI agents powered by these customized models are deployed to alert, summarize and query camera and sensor data using the Metropolis VSS blueprint.

NVIDIA Partner Ecosystem Powers Smart Cities Worldwide

The blueprint for smart city AI enables a large ecosystem of partners to use a single workflow to build and activate digital twins for smart city use cases, tapping into a combination of NVIDIA's technologies and their own.

SNCF Gares&Connexions, which operates a network of 3,000 train stations across France and Monaco, has deployed a digital twin and AI agents to enable real-time operational monitoring, emergency response simulations and infrastructure upgrade planning. This helps each station analyze operational data such as energy and water use, and enables predictive maintenance capabilities, automated reporting and GDPR-compliant video analytics for incident detection and crowd management. Powered by Omniverse, Metropolis and solutions from ecosystem partners Akila and XXII, the physical AI deployment at the Monaco-Monte-Carlo and Marseille stations has helped SNCF Gares&Connexions achieve a 100% on-time preventive maintenance completion rate, a 50% reduction in downtime and issue response time, and a 20% reduction in energy consumption.
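The three-step workflow described above can be sketched as a simple orchestration pipeline. Note that every module, class and function name below is a hypothetical placeholder for illustration only; the article does not show the actual Omniverse, NeMo or Metropolis APIs, which differ from this sketch.

```python
# Hedged sketch of the blueprint's three-step workflow.
# All names (build_simready_twin, fine_tune_vlm, deploy_vss_agent) are
# hypothetical placeholders -- not the real Omniverse/NeMo/Metropolis APIs.
from dataclasses import dataclass, field


@dataclass
class DigitalTwin:
    source: str            # aerial, satellite or map data used to build it
    sim_ready: bool = True


@dataclass
class VisionModel:
    base: str
    fine_tuned_on: list = field(default_factory=list)


def build_simready_twin(source_data: str) -> DigitalTwin:
    """Step 1: create a SimReady city digital twin (Omniverse + Cosmos)."""
    return DigitalTwin(source=source_data)


def fine_tune_vlm(model: VisionModel, twin: DigitalTwin) -> VisionModel:
    """Step 2: fine-tune a VLM on curated synthetic data (TAO + NeMo Curator)."""
    model.fine_tuned_on.append(f"synthetic data from {twin.source}")
    return model


def deploy_vss_agent(model: VisionModel):
    """Step 3: deploy a real-time agent that alerts, summarizes and answers
    queries over camera and sensor data (Metropolis VSS blueprint)."""
    def agent(event: str) -> str:
        return f"{model.base}: summary of '{event}'"
    return agent


twin = build_simready_twin("aerial imagery")
vlm = fine_tune_vlm(VisionModel(base="city-vlm"), twin)
agent = deploy_vss_agent(vlm)
print(agent("traffic accident at junction 4"))
```

The point of the sketch is the data flow, not the APIs: the twin built in step one feeds the synthetic data used for fine-tuning in step two, and the fine-tuned model is what the deployed agent serves in step three.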
https://blogs.nvidia.com/wp-content/uploads/2025/06/01-Monaco-Akila.mp4

The city of Palermo in Sicily is using AI agents and digital twins from its partner K2K to improve public health and safety by helping city operators process and analyze footage from over 1,000 public video streams at a rate of nearly 50 billion pixels per second. Tapped by Sicily, K2K's AI agents — built with the NVIDIA AI Blueprint for VSS and cloud solutions from Nebius — can interpret and act on video data to provide real-time alerts on public events. To accurately predict and resolve traffic incidents, K2K is generating synthetic data with Cosmos world foundation models to simulate different driving conditions. Then, K2K uses the data to fine-tune the VLMs powering the AI agents with NeMo Curator. These simulations enable K2K's AI agents to create over 100,000 predictions per second.

https://blogs.nvidia.com/wp-content/uploads/2025/06/02-K2K-Polermo-1600x900-1.mp4

Milestone Systems — in collaboration with NVIDIA and European cities — has launched Project Hafnia, an initiative to build an anonymized, ethically sourced video data platform for cities to develop and train AI models and applications while maintaining regulatory compliance. Using a combination of Cosmos and NeMo Curator on NVIDIA DGX Cloud and Nebius' sovereign European cloud infrastructure, Project Hafnia scales up and enables European-compliant training and fine-tuning of video-centric AI models, including VLMs, for a variety of smart city use cases. The project's initial rollout, taking place in Genoa, Italy, features one of the world's first VLM models for intelligent transportation systems.

https://blogs.nvidia.com/wp-content/uploads/2025/06/03-Milestone.mp4

Linker Vision was among the first to partner with NVIDIA to deploy smart city digital twins and AI agents for Kaohsiung City, Taiwan — powered by Omniverse, Cosmos and Metropolis.
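The ~50-billion-pixels-per-second figure for Palermo's 1,000 streams is plausible at common municipal camera settings. A back-of-envelope check, assuming 1080p streams at 25 frames per second (both assumptions; the article does not state resolutions or frame rates):

```python
# Back-of-envelope check of the ~50 billion pixels/second claim for
# 1,000 video streams. Resolution and frame rate are assumptions,
# not figures from the article.
streams = 1_000
width, height = 1920, 1080   # assumed: 1080p cameras
fps = 25                     # assumed: PAL-region frame rate

pixels_per_second = streams * width * height * fps
print(f"{pixels_per_second / 1e9:.1f} billion pixels/second")
# prints "51.8 billion pixels/second"
```

Under those assumptions the math lands within a few percent of the stated figure, so the claim is consistent with ordinary city-camera hardware rather than requiring exotic sensors.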
Linker Vision worked with AVES Reality, a digital twin company, to bring aerial imagery of cities and infrastructure into 3D geometry and ultimately into SimReady Omniverse digital twins. Linker Vision's AI-powered application then built, trained and tested visual AI agents in a digital twin before deployment in the physical city. Now, it's scaling to analyze 50,000 video streams in real time with generative AI to understand and narrate complex urban events like floods and traffic accidents. Linker Vision delivers timely insights to a dozen city departments through a single integrated AI-powered platform, breaking silos and reducing incident response times by up to 80%.

https://blogs.nvidia.com/wp-content/uploads/2025/06/02-Linker-Vision-1280x680-1.mp4

Bentley Systems is joining the effort to bring physical AI to cities with the NVIDIA blueprint. Cesium, the open 3D geospatial platform, provides the foundation for visualizing, analyzing and managing infrastructure projects, and ports digital twins to Omniverse. The company's AI platform Blyncsy uses synthetic data generation and Metropolis to analyze road conditions and improve maintenance.

Trimble, a global technology company that enables essential industries including construction, geospatial and transportation, is exploring ways to integrate components of the Omniverse blueprint into its reality capture workflows and Trimble Connect digital twin platform for surveying and mapping applications for smart cities.

Younite AI, a developer of AI and 3D digital twin solutions, is adopting the blueprint to accelerate its development pipeline, enabling the company to quickly move from operational digital twins to large-scale urban simulations, improve synthetic data generation, integrate real-time IoT sensor data and deploy AI agents.

Learn more about the NVIDIA Omniverse Blueprint for smart city AI by attending this GTC Paris session or watching the on-demand video after the event.
Sign up to be notified when the blueprint is available. Watch the NVIDIA GTC Paris keynote from NVIDIA founder and CEO Jensen Huang at VivaTech, and explore GTC Paris sessions.