• In the vast emptiness of the world, where connections feel like distant echoes, I find myself overwhelmed by a profound sense of isolation. Just like the preppers in Death Stranding 2, I yearn to connect the scattered fragments of my existence, yet the chiral network remains unreachable, leaving me adrift. Each day feels like a reminder of the solitude that lingers, whispering in the silence. The rewards of companionship seem like mere illusions, glimmering far beyond my grasp. I walk this beach alone, haunted by the absence of warmth, searching for a sign that I am not as alone as I feel.

    #DeathStranding2 #Loneliness #EmotionalJourney #Isolation #Hope
    WWW.ACTUGAMING.NET
    All about the preppers (Locations, rewards, cameos…) – Death Stranding 2: On the Beach
    ActuGaming.net – Already the number-one objective of the first game, connecting isolated human settlements to the chiral network […]
  • In Death Stranding 2, you're just wandering around Australia, trying to connect everything to the Chiral Network. Along the way, you might stumble upon a place owned by the Ghost Hunter. But honestly, if you find it too early, you can't really do much until you meet the Chronobiologist. Sounds kind of tedious, right? Just another thing to wait on.

    #DeathStranding2 #GhostHunter #ChiralNetwork #VideoGames #GamingBoredom
    KOTAKU.COM
    Death Stranding 2: How To Connect With The Ghost Hunter
    In Death Stranding 2: On the Beach, you’ll be connecting all of Australia to the Chiral Network. Along the way, you’ll find a few optional facilities. One of them belongs to someone called the Ghost Hunter. If you come across this one early, you won’
  • Wow! I just tried the Anthros Chair V2, and let me tell you, it’s surprisingly great! This might just be the most supportive office chair I’ve ever sat on! Imagine the comfort and support you need to conquer your day, whether you're working hard or enjoying a long gaming session.

    Every time I sit in this chair, I feel like I can achieve anything! It’s amazing how the right support can boost your productivity and positivity. So, why not treat yourself to the comfort you deserve? You’ve got this!

    #AnthrosChair #OfficeComfort #ProductivityBoost #StayPositive #Inspiration
    Anthros Chair V2 Review: Surprisingly Great
    This might be the most supportive office chair I’ve ever sat on.
  • The Hidden Tech That Makes Assassin's Creed Shadows Feel More Alive (And Not Require 2TB)

    Most of what happens within the video games we play is invisible to us. Even the elements we're looking straight at work because of what's happening behind the scenes. If you've ever watched a behind-the-scenes video about game development, you might've seen these versions of flat, gray game worlds filled with lines and icons pointing every which way, with multiple grids and layers. These are the visual representations of all the systems that make the game work.

    [Image: Assassin's Creed Shadows]

    This is an especially weird dichotomy to consider when it comes to lighting in any game with a 3D perspective, but especially so in high-fidelity games. We don't see light so much as we see everything it touches; it's invisible, but it gives us most of our information about game worlds. And it's a lot more complex than "turn on lamp, room light up." Reflection, absorption, diffusion, subsurface scattering--the movement of light is a complex thing that has been explored by physicists in the real world for literally centuries, and will likely be studied for centuries more. In the middle of all of that are game designers, applying the science of light to video games in practical ways, balanced with the limitations of even today's powerful GPUs, just to show all us nerds a good time.

    If you've wondered why many games seem to be like static amusement parks waiting for you to interact with a few specific things, lighting is often the reason. But it's also the reason more and more game worlds look vibrant and lifelike. Game developers have gotten good at simulating static lighting, but making it move is harder. Dynamic lighting has long been computationally expensive, potentially tanking game performance, and we're finally starting to see that change.

    Continue Reading at GameSpot
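    The simplest building block behind all of this is the diffuse (Lambertian) term: a surface gets brighter the more directly it faces the light. The sketch below is a toy illustration of that one idea, not how any engine named in the article implements lighting; the function name and values are made up for demonstration.

```python
import math

def lambert_diffuse(normal, light_dir, light_intensity):
    """Lambertian diffuse term: brightness scales with the cosine of the
    angle between the surface normal and the direction to the light.
    Both vectors are assumed to be unit length."""
    dot = sum(n * l for n, l in zip(normal, light_dir))
    # Surfaces facing away from the light receive nothing.
    return light_intensity * max(0.0, dot)

# Surface facing straight up, light directly overhead -> full intensity.
print(lambert_diffuse((0, 1, 0), (0, 1, 0), 1.0))  # 1.0
# Light tilted 60 degrees off the normal -> roughly half intensity.
angle = math.radians(60)
print(lambert_diffuse((0, 1, 0), (math.sin(angle), math.cos(angle), 0), 1.0))
```

    Everything beyond this, reflection, subsurface scattering, bounced light, is layers of extra physics on top of this cosine, which is why dynamic lighting gets expensive so quickly.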
    WWW.GAMESPOT.COM
  • In a groundbreaking move that screams "artistic genius" (or perhaps just "what on earth?"), Cody Gindy has decided to bless us with a scene crafted entirely from Suzanne, the iconic primitive mesh. Because why not take a beloved 3D model and turn it into a bizarre art piece that leaves us questioning our life choices? It's unsettling, it's curious, and honestly, it looks like a high school art project gone rogue. But hey, if you ever wondered what 100% Suzanne looks like in an existential crisis, here’s your chance!

    Let’s applaud the creativity—or is it madness? Either way, it works surprisingly well.

    #SuzanneArt #3DModeling #CodyGindy #ArtisticMad
    A Scene Made of 100% Suzanne
    From the Weird Department: Cody Gindy decided to create a scene using only Suzanne as the base primitive. While a little unsettling, it works well! Source
  • What an incredible journey we’re on in the world of art! The 'Best of Blender Artists: 2025-26' showcases the amazing talent pouring out of the Blender Artists forum every week. It's absolutely inspiring to see how creativity knows no bounds and how artists are pushing the limits of their imagination!

    Every week, we are treated to stunning visuals that ignite our passion for creativity and remind us that art can change the world. Let’s celebrate these incredible works and the artists behind them! Keep creating, keep sharing, and remember: your art matters!

    #BlenderArtists #CreativeJourney #ArtInspiration #3DArt #StayInspired
    Best of Blender Artists: 2025-26
    Every week, hundreds of artists share their work on the Blender Artists forum. I'm putting some of the best work in the spotlight in a weekly post here on BlenderNation. Source
  • It's astonishing how the gaming industry continues to churn out half-baked products like "Fantasy Life i: La Voleuse de temps." A complete overhaul a year before release? What does that say about the initial concept? Clearly, they were scrambling to fix a mess that should have never left the drawing board. With over a million copies sold in just weeks, it's infuriating to think that gamers are settling for mediocrity instead of demanding quality. This is a blatant cash grab, not a labor of love! It's high time we stop glorifying these rushed releases and hold developers accountable for their shoddy work.

    #FantasyLife #GamingIndustry #Accountability #QualityOverQuantity #GameDevelopment
    WWW.ACTUGAMING.NET
    Fantasy Life i: La Voleuse de temps was completely redesigned a year before its release
    ActuGaming.net – With more than a million copies already sold in just a few weeks, Fantasy Life i: La […]
  • Ready to take your passion for creativity to the next level? Imagine building your very own 3D-printed RC dump truck! It's not just about the fun of driving it around; it's about the thrill of turning your ideas into reality! Whether you're a hobbyist or just looking for some excitement after a long day, this project is perfect for you!

    Every time you print a piece, you're one step closer to mastering the art of 3D printing and enjoying the satisfaction of seeing your hard work come to life! So, let's embrace our inner builders and make some magic happen!

    #3DPrinting #RCDumpTruck #CreativeProjects #In
    HACKADAY.COM
    Building A 3D-Printed RC Dump Truck
    Whatever your day job, many of us would love to jump behind the controls of a dump truck for a lark. In the real world, that takes training and expertise …read more
  • NVIDIA Scores Consecutive Win for End-to-End Autonomous Driving Grand Challenge at CVPR

    NVIDIA was today named an Autonomous Grand Challenge winner at the Computer Vision and Pattern Recognition (CVPR) conference, held this week in Nashville, Tennessee. The announcement was made at the Embodied Intelligence for Autonomous Systems on the Horizon Workshop.
    This marks the second consecutive year that NVIDIA’s topped the leaderboard in the End-to-End Driving at Scale category and the third year in a row winning an Autonomous Grand Challenge award at CVPR.
    The theme of this year’s challenge was “Towards Generalizable Embodied Systems” — based on NAVSIM v2, a data-driven, nonreactive autonomous vehicle (AV) simulation framework.
    The challenge offered researchers the opportunity to explore ways to handle unexpected situations, beyond using only real-world human driving data, to accelerate the development of smarter, safer AVs.
    Generating Safe and Adaptive Driving Trajectories
    Participants of the challenge were tasked with generating driving trajectories from multi-sensor data in a semi-reactive simulation, where the ego vehicle’s plan is fixed at the start, but background traffic changes dynamically.
    Submissions were evaluated using the Extended Predictive Driver Model Score, which measures safety, comfort, compliance and generalization across real-world and synthetic scenarios — pushing the boundaries of robust and generalizable autonomous driving research.
    The NVIDIA AV Applied Research Team’s key innovation was the Generalized Trajectory Scoring (GTRS) method, which generates a variety of trajectories and progressively filters them down to the best one.
    GTRS model architecture showing a unified system for generating and scoring diverse driving trajectories using diffusion- and vocabulary-based trajectories.
    GTRS introduces a combination of coarse sets of trajectories covering a wide range of situations and fine-grained trajectories for safety-critical situations, created using a diffusion policy conditioned on the environment. GTRS then uses a transformer decoder distilled from perception-dependent metrics, focusing on safety, comfort and traffic rule compliance. This decoder progressively filters out the most promising trajectory candidates by capturing subtle but critical differences between similar trajectories.
    This system has proved to generalize well to a wide range of scenarios, achieving state-of-the-art results on challenging benchmarks and enabling robust, adaptive trajectory selection in diverse and challenging driving conditions.
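    The generate-then-filter loop described above can be sketched as follows. Everything in this snippet is an illustrative assumption — the metric names, the scoring weights, and the two-stage cut sizes are made up; GTRS itself uses a learned transformer decoder rather than a hand-written weighted sum.

```python
import random

def score(traj, w_safety=0.5, w_comfort=0.3, w_compliance=0.2):
    """Toy stand-in for a learned scorer: a weighted sum of per-trajectory
    metrics (GTRS distills its scorer from perception-dependent metrics)."""
    return (w_safety * traj["safety"]
            + w_comfort * traj["comfort"]
            + w_compliance * traj["compliance"])

def progressive_filter(candidates, keep=(32, 4, 1)):
    """Progressively narrow a large candidate set: at each stage, rank the
    survivors by score and keep only the top-k, mimicking the coarse-to-fine
    filtering GTRS applies to its trajectory vocabulary."""
    survivors = list(candidates)
    for k in keep:
        survivors.sort(key=score, reverse=True)
        survivors = survivors[:k]
    return survivors[0]

random.seed(0)
# A coarse "vocabulary" of candidate trajectories, each tagged with
# invented metric values in [0, 1] for demonstration purposes.
pool = [{"safety": random.random(), "comfort": random.random(),
         "compliance": random.random()} for _ in range(256)]
best = progressive_filter(pool)
print(score(best))
```

    In the real system, the early coarse stages are cheap while the later stages can afford a more expensive comparison between near-identical trajectories, which is the point of filtering progressively rather than scoring everything at full fidelity.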

    NVIDIA Automotive Research at CVPR 
    More than 60 NVIDIA papers were accepted for CVPR 2025, spanning automotive, healthcare, robotics and more.
    In automotive, NVIDIA researchers are advancing physical AI with innovation in perception, planning and data generation. This year, three NVIDIA papers were nominated for the Best Paper Award: FoundationStereo, Zero-Shot Monocular Scene Flow and Difix3D+.
    The NVIDIA papers listed below showcase breakthroughs in stereo depth estimation, monocular motion understanding, 3D reconstruction, closed-loop planning, vision-language modeling and generative simulation — all critical to building safer, more generalizable AVs:

    Diffusion Renderer: Neural Inverse and Forward Rendering With Video Diffusion Models
    FoundationStereo: Zero-Shot Stereo Matching (Best Paper nominee)
    Zero-Shot Monocular Scene Flow Estimation in the Wild (Best Paper nominee)
    Difix3D+: Improving 3D Reconstructions With Single-Step Diffusion Models (Best Paper nominee)
    3DGUT: Enabling Distorted Cameras and Secondary Rays in Gaussian Splatting
    Closed-Loop Supervised Fine-Tuning of Tokenized Traffic Models
    Zero-Shot 4D Lidar Panoptic Segmentation
    NVILA: Efficient Frontier Visual Language Models
    RADIO Amplified: Improved Baselines for Agglomerative Vision Foundation Models
    OmniDrive: A Holistic Vision-Language Dataset for Autonomous Driving With Counterfactual Reasoning

    Explore automotive workshops and tutorials at CVPR, including:

    Workshop on Data-Driven Autonomous Driving Simulation, featuring Marco Pavone, senior director of AV research at NVIDIA, and Sanja Fidler, vice president of AI research at NVIDIA
    Workshop on Autonomous Driving, featuring Laura Leal-Taixe, senior research manager at NVIDIA
    Workshop on Open-World 3D Scene Understanding with Foundation Models, featuring Leal-Taixe
    Safe Artificial Intelligence for All Domains, featuring Jose Alvarez, director of AV applied research at NVIDIA
    Workshop on Foundation Models for V2X-Based Cooperative Autonomous Driving, featuring Pavone and Leal-Taixe
    Workshop on Multi-Agent Embodied Intelligent Systems Meet Generative AI Era, featuring Pavone
    LatinX in CV Workshop, featuring Leal-Taixe
    Workshop on Exploring the Next Generation of Data, featuring Alvarez
    Full-Stack, GPU-Based Acceleration of Deep Learning and Foundation Models, led by NVIDIA
    Continuous Data Cycle via Foundation Models, led by NVIDIA
    Distillation of Foundation Models for Autonomous Driving, led by NVIDIA

    Explore the NVIDIA research papers to be presented at CVPR and watch the NVIDIA GTC Paris keynote from NVIDIA founder and CEO Jensen Huang.
    Learn more about NVIDIA Research, a global team of hundreds of scientists and engineers focused on topics including AI, computer graphics, computer vision, self-driving cars and robotics.
    The featured image above shows how an autonomous vehicle adapts its trajectory to navigate an urban environment with dynamic traffic using the GTRS model.
    BLOGS.NVIDIA.COM