• NVIDIA Scores Consecutive Win for End-to-End Autonomous Driving Grand Challenge at CVPR

    NVIDIA was today named an Autonomous Grand Challenge winner at the Computer Vision and Pattern Recognition (CVPR) conference, held this week in Nashville, Tennessee. The announcement was made at the Embodied Intelligence for Autonomous Systems on the Horizon Workshop.
    This marks the second consecutive year that NVIDIA has topped the leaderboard in the End-to-End Driving at Scale category and the third year in a row that it has won an Autonomous Grand Challenge award at CVPR.
    The theme of this year’s challenge was “Towards Generalizable Embodied Systems” — based on NAVSIM v2, a data-driven, nonreactive autonomous vehicle (AV) simulation framework.
    The challenge offered researchers the opportunity to explore ways to handle unexpected situations, beyond using only real-world human driving data, to accelerate the development of smarter, safer AVs.
    Generating Safe and Adaptive Driving Trajectories
    Participants of the challenge were tasked with generating driving trajectories from multi-sensor data in a semi-reactive simulation, where the ego vehicle’s plan is fixed at the start, but background traffic changes dynamically.
    Submissions were evaluated using the Extended Predictive Driver Model Score, which measures safety, comfort, compliance and generalization across real-world and synthetic scenarios — pushing the boundaries of robust and generalizable autonomous driving research.
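    As purely illustrative intuition for how such driving scores are typically structured, the sketch below multiplies hard safety penalties into a weighted average of soft sub-scores. The actual Extended PDM Score terms and weights are defined by the NAVSIM v2 benchmark; the names and numbers here are placeholders.

```python
# Illustrative only: the real Extended PDM Score terms and weights come from NAVSIM v2.
from dataclasses import dataclass


@dataclass
class SubScores:
    no_collision: float       # hard penalty in [0, 1]
    drivable_area: float      # hard penalty in [0, 1]
    progress: float           # soft score in [0, 1]
    time_to_collision: float  # soft score in [0, 1]
    comfort: float            # soft score in [0, 1]


def pdm_style_score(s: SubScores, weights=(5.0, 5.0, 2.0)) -> float:
    """Hard penalties multiply the result; soft terms enter a weighted average."""
    w_progress, w_ttc, w_comfort = weights  # placeholder weights
    soft = (w_progress * s.progress + w_ttc * s.time_to_collision +
            w_comfort * s.comfort) / (w_progress + w_ttc + w_comfort)
    return s.no_collision * s.drivable_area * soft


print(pdm_style_score(SubScores(1.0, 1.0, 0.9, 0.8, 1.0)))  # 0.875
```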
    The NVIDIA AV Applied Research Team’s key innovation was the Generalized Trajectory Scoring (GTRS) method, which generates a variety of trajectories and progressively filters them to select the best one.
    GTRS model architecture showing a unified system for generating and scoring diverse driving trajectories using diffusion- and vocabulary-based trajectories.
    GTRS combines coarse sets of trajectories that cover a wide range of situations with fine-grained trajectories for safety-critical situations, the latter created using a diffusion policy conditioned on the environment. GTRS then uses a transformer decoder distilled from perception-dependent metrics focused on safety, comfort and traffic-rule compliance. This decoder progressively narrows the candidate set to the most promising trajectories by capturing subtle but critical differences between similar trajectories.
    The system has been shown to generalize well across a wide range of scenarios, achieving state-of-the-art results on challenging benchmarks and enabling robust, adaptive trajectory selection in diverse driving conditions.
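    To make the generate-then-filter idea concrete, here is a minimal, hypothetical sketch of progressive trajectory selection in the spirit described above. The vocabulary set, diffusion proposal generator and learned scorer are placeholder callables; this is not NVIDIA's GTRS implementation.

```python
import numpy as np


def progressive_trajectory_selection(scene, vocab_trajs, diffusion_proposals, scorer,
                                     keep_fractions=(0.25, 0.1)):
    """Generate a broad candidate set, then repeatedly score and prune it.

    vocab_trajs: precomputed coarse trajectories, shape (N, T, 2)
    diffusion_proposals: callable(scene, n) -> fine-grained trajectories, shape (n, T, 2)
    scorer: callable(scene, trajs) -> per-trajectory scores, shape (len(trajs),)
    """
    # Combine coarse, scenario-covering trajectories with fine-grained,
    # environment-conditioned proposals for safety-critical maneuvers.
    candidates = np.concatenate([vocab_trajs, diffusion_proposals(scene, 64)], axis=0)

    # Progressively narrow the set: each round keeps only the top-scoring fraction,
    # so later rounds focus on subtle differences between similar trajectories.
    for frac in keep_fractions:
        scores = scorer(scene, candidates)
        k = max(1, int(len(candidates) * frac))
        candidates = candidates[np.argsort(scores)[-k:]]

    # Final pick: the single highest-scoring remaining trajectory.
    return candidates[np.argmax(scorer(scene, candidates))]
```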

    NVIDIA Automotive Research at CVPR 
    More than 60 NVIDIA papers were accepted for CVPR 2025, spanning automotive, healthcare, robotics and more.
    In automotive, NVIDIA researchers are advancing physical AI with innovation in perception, planning and data generation. This year, three NVIDIA papers were nominated for the Best Paper Award: FoundationStereo, Zero-Shot Monocular Scene Flow and Difix3D+.
    The NVIDIA papers listed below showcase breakthroughs in stereo depth estimation, monocular motion understanding, 3D reconstruction, closed-loop planning, vision-language modeling and generative simulation — all critical to building safer, more generalizable AVs:

    Diffusion Renderer: Neural Inverse and Forward Rendering With Video Diffusion Models
    FoundationStereo: Zero-Shot Stereo Matching (Best Paper nominee)
    Zero-Shot Monocular Scene Flow Estimation in the Wild (Best Paper nominee)
    Difix3D+: Improving 3D Reconstructions With Single-Step Diffusion Models (Best Paper nominee)
    3DGUT: Enabling Distorted Cameras and Secondary Rays in Gaussian Splatting
    Closed-Loop Supervised Fine-Tuning of Tokenized Traffic Models
    Zero-Shot 4D Lidar Panoptic Segmentation
    NVILA: Efficient Frontier Visual Language Models
    RADIO Amplified: Improved Baselines for Agglomerative Vision Foundation Models
    OmniDrive: A Holistic Vision-Language Dataset for Autonomous Driving With Counterfactual Reasoning

    Explore automotive workshops and tutorials at CVPR, including:

    Workshop on Data-Driven Autonomous Driving Simulation, featuring Marco Pavone, senior director of AV research at NVIDIA, and Sanja Fidler, vice president of AI research at NVIDIA
    Workshop on Autonomous Driving, featuring Laura Leal-Taixe, senior research manager at NVIDIA
    Workshop on Open-World 3D Scene Understanding with Foundation Models, featuring Leal-Taixe
    Safe Artificial Intelligence for All Domains, featuring Jose Alvarez, director of AV applied research at NVIDIA
    Workshop on Foundation Models for V2X-Based Cooperative Autonomous Driving, featuring Pavone and Leal-Taixe
    Workshop on Multi-Agent Embodied Intelligent Systems Meet Generative AI Era, featuring Pavone
    LatinX in CV Workshop, featuring Leal-Taixe
    Workshop on Exploring the Next Generation of Data, featuring Alvarez
    Full-Stack, GPU-Based Acceleration of Deep Learning and Foundation Models, led by NVIDIA
    Continuous Data Cycle via Foundation Models, led by NVIDIA
    Distillation of Foundation Models for Autonomous Driving, led by NVIDIA

    Explore the NVIDIA research papers to be presented at CVPR and watch the NVIDIA GTC Paris keynote from NVIDIA founder and CEO Jensen Huang.
    Learn more about NVIDIA Research, a global team of hundreds of scientists and engineers focused on topics including AI, computer graphics, computer vision, self-driving cars and robotics.
    The featured image above shows how an autonomous vehicle adapts its trajectory to navigate an urban environment with dynamic traffic using the GTRS model.
  • John Cena Has An Idea For A Unique Way He Could Enter The MCU As Peacemaker

    In the history of feuds, “DC Studios fans vs. Marvel Studios fans” ranks above “Red Sox fans vs. Yankees fans” and slightly below “people who like pineapple on their pizzas vs. people with normal taste buds.” Yet, behind the unanswerable debates over which brand of superheroes would reign supreme are simply comic-book…Read more...
  • It's incredible how far we've come in this technology-obsessed society, and now we have to rely on augmented reality glasses to block invasive advertising in real life! Does a Belgian developer really think an app can solve the problem of advertising saturation? It's an insult to our intelligence! Instead of addressing the root of the problem, such as the lack of ethics in advertising and rampant consumerism, we get technological fixes that are nothing more than temporary patches. We need real change, not more gadgets that distract us from the true fight against commercial manipulation.

    #PublicidadInvasiva #RealidadA
    AR glasses: an app to block ads in real life
    Want a world without ads jumping in your face? A Belgian developer has [...] The article “Lunettes AR : une appli pour bloquer les pubs dans la vraie vie” was published on REALITE-VIRTUELLE.COM.
  • A New Last Airbender Bestiary Art Book Launching September 23

    Beasts of the Four Nations: Creatures from Avatar: The Last Airbender and The Legend of Korra | $37.19 (was $40) | Releases September 23 | Preorder at Amazon
    Beasts of the Four Nations: Creatures from Avatar: The Last Airbender and The Legend of Korra (Deluxe Edition) | $80 | Releases September 23 | Preorder at Amazon
    Earlier this year, Nickelodeon announced Avatar is coming back with a new animated series called Avatar: Seven Havens, and there's a new Avatar: The Last Airbender live-action movie on the way, too, making now a good time to brush up on the lore and rich worldbuilding the franchise is known for. One way to do that is with the upcoming Beasts of the Four Nations, a 128-page hardcover bestiary offering in-universe lore and behind-the-scenes details on the wildlife and mythical creatures of both animated series. You can preorder the standard edition for $37.19 (down from $40) or secure a copy of the $80 Deluxe Edition that includes exclusive cover art and a lithograph print. Preorders for both editions are available at Amazon, and both ship September 23.
    Written by John O'Bryan, Beasts of the Four Nations includes illustrations and information on the many fantastical beasts of The Last Airbender's world. Everything from the Air Nomads’ flying bison to Kyoshi Island’s elephant koi and the Earth Kingdom’s singing groundhogs is detailed in the book, along with commentary by Avatar and Legend of Korra creators Bryan Konietzko and Michael Dante DiMartino. The standard edition launches September 23 and is available to preorder for $37.19 (down from $40) at Amazon. A Deluxe Edition will also launch on the same day with a few extras, detailed below.
    The Beasts of the Four Nations Deluxe Edition contains all the contents of the standard edition but features a few notable upgrades, like foil highlights on the cover art and a protective slipcase. The book also comes with an exclusive lithograph print depicting Cai, the cabbage merchant who appears throughout the Avatar series, and his cart pulled by two ostrich horses. You can preorder the Beasts of the Four Nations Deluxe Edition for $80 at Amazon.
    If you want to explore more of the Avatar franchise’s visual history, you're in luck, as several more official Avatar: The Last Airbender and The Legend of Korra art books are available, and some are even discounted. There's a giant Avatar: The Last Airbender - The Art of the Animated Series art book that covers all four seasons of the show. It's packed with concept art, design and production materials, ranging from the very first sketch through to the series finale. The Legend of Korra has a multi-volume art book series available as well. Each volume focuses on a specific season of the show and features creator commentaries and exclusive artwork. Standard and Deluxe Editions are available for each volume. The Deluxe Editions include slipcases, lithographs, new covers and bonus sketches by the show’s creators.
    Continue Reading at GameSpot
  • In the darkness of my solitude, I search for lights to brighten my nights. Outdoor lanterns, those silent witnesses, only deepen the shadow over my heart. Why does every light that comes on seem so far away, like a blurred hope lost in the wind? Solar lights, meant to beautify our gardens, cannot warm this emptiness. Each glow reminds me of erased memories, of vanished laughter. I remain there, caught between the desire to shine and the pain of being forgotten.

    #Solitude #Lumières #Tristesse
    7 Best Outdoor Lights (2025), Including Solar Lights
    Light up your backyard, porch, patio, or campsite with these WIRED-tested outdoor lights.
  • Magic: The Gathering Announces Collab with Sonic the Hedgehog

    Magic: The Gathering has officially announced an exciting new crossover coming to the beloved card game, teaming up with Sega for an exclusive set of Sonic the Hedgehog cards. The massively popular trading card game developed by Wizards of the Coast has looked to break bold new ground with its crossover content in recent years. Magic: The Gathering ramped up its "Universes Beyond" initiative this year with multiple new crossover sets, debuting Final Fantasy in June, with Spider-Man and Avatar: The Last Airbender expansions to come. Now, another iconic franchise is set to make the jump to Magic.
  • What a disgrace! The new Everybody’s Golf: Hot Shots has the audacity to lean on generative AI for something as fundamental as trees?! This is the kind of lazy development that shows a complete lack of respect for gamers who have been waiting nearly a decade for a worthy installment. Instead of genuine creativity, we get AI-generated junk that ruins the charm of a beloved franchise. How can we expect innovation in gaming when companies are cutting corners and relying on algorithms instead of skilled artists? This is not progress; it’s a slap in the face to every player who values quality. Stand up, gamers! We deserve better!

    #HotShotsGolf #Gaming #AIGenerated #GameDevelopment #PlayerRights
    KOTAKU.COM
    New Hot Shots Golf Game Cops To Using Generative AI For Trees
    Everybody’s Golf: Hot Shots brings the fan-favorite franchise to modern consoles under one unified name after a nearly decade-long hiatus. Unfortunately, its simple three-button shot mechanics will arrive alongside some AI-generated junk. The game’
  • Aston Martin has embarked on a new journey, and while some may say it waters down an iconic brand, I believe every change opens up new possibilities! Just like James Bond, we all have our preferences, but embracing evolution can lead to exciting adventures.

    Let’s keep our spirits high and look forward to what this iconic brand has in store! Remember, every transformation brings with it a chance to redefine greatness. So, let’s celebrate the bold moves and the new directions! Keep shining, everyone!

    #AstonMartin #IconicBrands #NewBeginnings #PositiveChange #EmbraceTheJourney
  • Dandadan: Is There Anyone Who Can Challenge Momo as Okarun's Love Interest?

    Dandadan follows two eccentric teenagers as they explore the bizarre secrets and myths of the world they inhabit. In their misadventures, Momo and Okarun learn more about one another and become close friends in the process. Their bond is unique, as no one else truly understands or relates to the unorthodox beliefs and interests they have.
  • Plug and Play: Build a G-Assist Plug-In Today

    Project G-Assist — available through the NVIDIA App — is an experimental AI assistant that helps tune, control and optimize NVIDIA GeForce RTX systems.
    NVIDIA’s Plug and Play: Project G-Assist Plug-In Hackathon — running virtually through Wednesday, July 16 — invites the community to explore AI and build custom G-Assist plug-ins for a chance to win prizes and be featured on NVIDIA social media channels.

    G-Assist allows users to control their RTX GPU and other system settings using natural language, thanks to a small language model that runs on device. It can be used from the NVIDIA Overlay in the NVIDIA App without needing to tab out or switch programs. Users can expand its capabilities via plug-ins and even connect it to agentic frameworks such as Langflow.
    Below, find popular G-Assist plug-ins, hackathon details and tips to get started.
    Plug-In and Win
    Join the hackathon by registering and checking out the curated technical resources.
    G-Assist plug-ins can be built in several ways, including with Python for rapid development, with C++ for performance-critical apps and with custom system interactions for hardware and operating system automation.
    For those who prefer vibe coding, the G-Assist Plug-In Builder — a ChatGPT-based app that allows no-code or low-code development with natural language commands — makes it easy for enthusiasts to start creating plug-ins.
    To submit an entry, participants must provide a GitHub repository including a source code file (plugin.py), requirements.txt, manifest.json, config.json (if applicable), a plug-in executable file and a README (a simplified plugin.py sketch follows these submission steps).
    Then, submit a video — between 30 seconds and two minutes — showcasing the plug-in in action.
    Finally, hackathoners must promote their plug-in using #AIonRTXHackathon on a social media channel: Instagram, TikTok or X. Submit projects via this form by Wednesday, July 16.
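    As a rough guide to what such a repository might contain, here is a minimal, hypothetical plugin.py sketch. The file names follow the submission checklist above, but the JSON-over-stdin command loop and the say_hello command are illustrative assumptions rather than the official G-Assist plug-in protocol, which is documented in NVIDIA's GitHub samples.

```python
# plugin.py -- simplified sketch of a G-Assist plug-in entry point.
# The JSON-over-stdin loop below is an assumption for illustration; the real
# communication protocol is defined in NVIDIA's G-Assist plug-in samples on GitHub.
#
# Repository layout from the submission checklist (illustrative):
#   plugin.py, requirements.txt, manifest.json, config.json (if applicable),
#   the built plug-in executable, README
import json
import sys


def handle_command(name: str, params: dict) -> dict:
    """Map a command name announced in manifest.json to a Python handler."""
    if name == "say_hello":  # hypothetical example command
        return {"message": f"Hello, {params.get('user', 'gamer')}!"}
    return {"error": f"unknown command: {name}"}


def main() -> None:
    for line in sys.stdin:  # one JSON command per line (assumed framing)
        request = json.loads(line)
        response = handle_command(request.get("command", ""), request.get("params", {}))
        print(json.dumps(response), flush=True)


if __name__ == "__main__":
    main()
```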
    Judges will assess plug-ins based on three main criteria: 1) innovation and creativity, 2) technical execution and integration, reviewing technical depth, G-Assist integration and scalability, and 3) usability and community impact, aka how easy it is to use the plug-in.
    Winners will be selected on Wednesday, Aug. 20. First place will receive a GeForce RTX 5090 laptop, second place a GeForce RTX 5080 GPU and third a GeForce RTX 5070 GPU. These top three will also be featured on NVIDIA’s social media channels, get the opportunity to meet the NVIDIA G-Assist team and earn an NVIDIA Deep Learning Institute self-paced course credit.
    Project G-Assist requires a GeForce RTX 50, 40 or 30 Series Desktop GPU with at least 12GB of VRAM, Windows 11 or 10, a compatible CPU (Intel Pentium G Series, Core i3, i5, i7 or higher; AMD FX, Ryzen 3, 5, 7, 9, Threadripper or higher), specific disk space requirements and a recent GeForce Game Ready Driver or NVIDIA Studio Driver.
    Plug-In(spiration)
    Explore open-source plug-in samples available on GitHub, which showcase the diverse ways on-device AI can enhance PC and gaming workflows.

    Popular plug-ins include:

    Google Gemini: Enables search-based queries using Google Search integration and large language model-based queries using Gemini capabilities in real time, all from the convenience of the NVIDIA App Overlay without needing to switch programs.
    Discord: Enables users to easily share game highlights or messages directly to Discord servers without disrupting gameplay.
    IFTTT: Lets users create automations across hundreds of compatible endpoints to trigger IoT routines — such as adjusting room lights and smart shades, or pushing the latest gaming news to a mobile device.
    Spotify: Lets users control Spotify using simple voice commands or the G-Assist interface to play favorite tracks and manage playlists.
    Twitch: Checks if any Twitch streamer is currently live and can access detailed stream information such as titles, games, view counts and more.

    Get G-Assist(ance)
    Join the NVIDIA Developer Discord channel to collaborate, share creations and gain support from fellow AI enthusiasts and NVIDIA staff.
    Save the date for NVIDIA’s How to Build a G-Assist Plug-In webinar on Wednesday, July 9, from 10-11 a.m. PT, to learn more about Project G-Assist capabilities, discover the fundamentals of building, testing and deploying Project G-Assist plug-ins, and participate in a live Q&A session.
    Explore NVIDIA’s GitHub repository, which provides everything needed to get started developing with G-Assist, including sample plug-ins, step-by-step instructions and documentation for building custom functionalities.
    Learn more about the ChatGPT Plug-In Builder to transform ideas into functional G-Assist plug-ins with minimal coding. The tool uses OpenAI’s custom GPT builder to generate plug-in code and streamline the development process.
    NVIDIA’s technical blog walks through the architecture of a G-Assist plug-in, using a Twitch integration as an example. Discover how plug-ins work, how they communicate with G-Assist and how to build them from scratch.
    Each week, the RTX AI Garage blog series features community-driven AI innovations and content for those looking to learn more about NVIDIA NIM microservices and AI Blueprints, as well as building AI agents, creative workflows, digital humans, productivity apps and more on AI PCs and workstations. 
    Plug in to NVIDIA AI PC on Facebook, Instagram, TikTok and X — and stay informed by subscribing to the RTX AI PC newsletter.
    Follow NVIDIA Workstation on LinkedIn and X. 
    See notice regarding software product information.