• Animal Crossing Complete Strategy Guide Is Still Available At Amazon

    Still tending to your island in Animal Crossing: New Horizons? Then it might be worth picking up the Animal Crossing: New Horizons Official Complete Guide. The hardcover guide is still available at Amazon--and it’s even seeing a slight discount right now. Best of all, this is the updated version published in 2023, meaning it includes details for all the major updates and the Happy Home Paradise expansion.

    Animal Crossing: New Horizons Official Complete Guide -- $50 (was $55)

    Published by Future Press, this comprehensive 668-page guide covers everything you need to know about the game, including information on all the islanders, all the craftable items, and every collectible from seasonal events, updates, and DLC. There’s also a section covering unique island designs--so if you need inspiration for your next big project, you’ll find plenty of examples in this official guidebook.

    The original version of this guide was published in 2020. While the 2020 edition is still available, we recommend the updated 2023 edition, as it includes information on all the additional content released between 2020 and 2023--such as Happy Home Paradise--and is a much better fit for anyone playing New Horizons in 2025.

    Future Press is also responsible for the new Metaphor: ReFantazio strategy guide and the popular Elden Ring strategy guides, along with dozens of other titles. If you’re interested in rounding out your bookcase with premium video game books, be sure to check out the full collection.

    Folks who haven’t yet purchased Animal Crossing: New Horizons will find it on sale for just $40 (was $60) at Woot--an Amazon company. If that deal sells out, it’s also discounted to $52 at Amazon.

    Continue Reading at GameSpot
  • NVIDIA Scores Consecutive Win for End-to-End Autonomous Driving Grand Challenge at CVPR

    NVIDIA was today named an Autonomous Grand Challenge winner at the Computer Vision and Pattern Recognition (CVPR) conference, held this week in Nashville, Tennessee. The announcement was made at the Embodied Intelligence for Autonomous Systems on the Horizon Workshop.
    This marks the second consecutive year that NVIDIA has topped the leaderboard in the End-to-End Driving at Scale category and the third year in a row winning an Autonomous Grand Challenge award at CVPR.
    The theme of this year’s challenge was “Towards Generalizable Embodied Systems” — based on NAVSIM v2, a data-driven, nonreactive autonomous vehicle (AV) simulation framework.
    The challenge offered researchers the opportunity to explore ways to handle unexpected situations, beyond using only real-world human driving data, to accelerate the development of smarter, safer AVs.
    Generating Safe and Adaptive Driving Trajectories
    Participants of the challenge were tasked with generating driving trajectories from multi-sensor data in a semi-reactive simulation, where the ego vehicle’s plan is fixed at the start, but background traffic changes dynamically.
    Submissions were evaluated using the Extended Predictive Driver Model Score, which measures safety, comfort, compliance and generalization across real-world and synthetic scenarios — pushing the boundaries of robust and generalizable autonomous driving research.
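The Extended Predictive Driver Model Score aggregates several driving-quality aspects into a single number. As a rough illustration of that kind of composite metric — the sub-metrics, weights, and aggregation below are hypothetical stand-ins, not the actual NAVSIM v2 formula — a weighted average might look like:

```python
# Illustrative sketch only: the real Extended Predictive Driver Model Score
# used in NAVSIM v2 is more involved. The aspect names, weights, and the
# weighted-average aggregation here are hypothetical stand-ins.

def composite_driving_score(sub_scores: dict[str, float],
                            weights: dict[str, float]) -> float:
    """Aggregate per-aspect scores (each in [0, 1]) into one weighted score."""
    total_weight = sum(weights.values())
    return sum(weights[k] * sub_scores[k] for k in weights) / total_weight

# One trajectory, scored on the aspects the challenge evaluates:
scores = {"safety": 0.95, "comfort": 0.80, "compliance": 1.00, "generalization": 0.70}
weights = {"safety": 4.0, "comfort": 1.0, "compliance": 2.0, "generalization": 1.0}
epdms_like = composite_driving_score(scores, weights)
```

Weighting safety most heavily mirrors the metric's emphasis: an uncomfortable but safe trajectory should outrank a smooth one that grazes a collision.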
    The NVIDIA AV Applied Research Team’s key innovation was the Generalized Trajectory Scoring (GTRS) method, which generates a variety of trajectories and progressively filters them down to the best one.
    GTRS model architecture showing a unified system for generating and scoring diverse driving trajectories using diffusion- and vocabulary-based trajectories.
    GTRS introduces a combination of coarse trajectory sets covering a wide range of situations and fine-grained trajectories for safety-critical situations, created using a diffusion policy conditioned on the environment. GTRS then uses a transformer decoder distilled from perception-dependent metrics, focusing on safety, comfort and traffic rule compliance. This decoder progressively narrows the pool to the most promising trajectory candidates by capturing subtle but critical differences between similar trajectories.
    The system has proven to generalize well across a wide range of scenarios, achieving state-of-the-art results on difficult benchmarks and enabling robust, adaptive trajectory selection in diverse driving conditions.
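The generate-then-filter-down idea can be sketched in a few lines. This is a toy illustration under stated assumptions: the random trajectory generator and hand-written scorer below stand in for GTRS's diffusion policy and distilled transformer decoder, neither of which is reproduced here.

```python
# Toy sketch of progressive trajectory filtering (assumed structure, not
# NVIDIA's implementation): generate many candidates, then repeatedly keep
# only the best-scoring fraction until one trajectory survives.
import random

def generate_candidates(n: int, horizon: int = 10) -> list[list[float]]:
    """Stand-in generator: n random steering sequences over a fixed horizon."""
    return [[random.uniform(-1.0, 1.0) for _ in range(horizon)] for _ in range(n)]

def score(traj: list[float]) -> float:
    """Hypothetical scorer: prefer smooth, lane-centered trajectories."""
    smoothness = -sum(abs(b - a) for a, b in zip(traj, traj[1:]))
    centering = -sum(abs(x) for x in traj)
    return smoothness + centering

def progressive_filter(candidates, keep_fractions=(0.5, 0.25, 0.1)):
    """Each round keeps a shrinking fraction of the best candidates."""
    pool = list(candidates)
    for frac in keep_fractions:
        k = max(1, int(len(candidates) * frac))
        pool = sorted(pool, key=score, reverse=True)[:k]
    return max(pool, key=score)  # the single surviving trajectory

best = progressive_filter(generate_candidates(256))
```

The coarse-to-fine structure matters because scoring is the expensive step in the real system: cheap early rounds discard obviously poor candidates so the finer comparisons run only on near-ties.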

    NVIDIA Automotive Research at CVPR 
    More than 60 NVIDIA papers were accepted for CVPR 2025, spanning automotive, healthcare, robotics and more.
    In automotive, NVIDIA researchers are advancing physical AI with innovation in perception, planning and data generation. This year, three NVIDIA papers were nominated for the Best Paper Award: FoundationStereo, Zero-Shot Monocular Scene Flow and Difix3D+.
    The NVIDIA papers listed below showcase breakthroughs in stereo depth estimation, monocular motion understanding, 3D reconstruction, closed-loop planning, vision-language modeling and generative simulation — all critical to building safer, more generalizable AVs:

    Diffusion Renderer: Neural Inverse and Forward Rendering With Video Diffusion Models
    FoundationStereo: Zero-Shot Stereo Matching (Best Paper nominee)
    Zero-Shot Monocular Scene Flow Estimation in the Wild (Best Paper nominee)
    Difix3D+: Improving 3D Reconstructions With Single-Step Diffusion Models (Best Paper nominee)
    3DGUT: Enabling Distorted Cameras and Secondary Rays in Gaussian Splatting
    Closed-Loop Supervised Fine-Tuning of Tokenized Traffic Models
    Zero-Shot 4D Lidar Panoptic Segmentation
    NVILA: Efficient Frontier Visual Language Models
    RADIO Amplified: Improved Baselines for Agglomerative Vision Foundation Models
    OmniDrive: A Holistic Vision-Language Dataset for Autonomous Driving With Counterfactual Reasoning

    Explore automotive workshops and tutorials at CVPR, including:

    Workshop on Data-Driven Autonomous Driving Simulation, featuring Marco Pavone, senior director of AV research at NVIDIA, and Sanja Fidler, vice president of AI research at NVIDIA
    Workshop on Autonomous Driving, featuring Laura Leal-Taixe, senior research manager at NVIDIA
    Workshop on Open-World 3D Scene Understanding with Foundation Models, featuring Leal-Taixe
    Safe Artificial Intelligence for All Domains, featuring Jose Alvarez, director of AV applied research at NVIDIA
    Workshop on Foundation Models for V2X-Based Cooperative Autonomous Driving, featuring Pavone and Leal-Taixe
    Workshop on Multi-Agent Embodied Intelligent Systems Meet Generative AI Era, featuring Pavone
    LatinX in CV Workshop, featuring Leal-Taixe
    Workshop on Exploring the Next Generation of Data, featuring Alvarez
    Full-Stack, GPU-Based Acceleration of Deep Learning and Foundation Models, led by NVIDIA
    Continuous Data Cycle via Foundation Models, led by NVIDIA
    Distillation of Foundation Models for Autonomous Driving, led by NVIDIA

    Explore the NVIDIA research papers to be presented at CVPR and watch the NVIDIA GTC Paris keynote from NVIDIA founder and CEO Jensen Huang.
    Learn more about NVIDIA Research, a global team of hundreds of scientists and engineers focused on topics including AI, computer graphics, computer vision, self-driving cars and robotics.
    The featured image above shows how an autonomous vehicle adapts its trajectory to navigate an urban environment with dynamic traffic using the GTRS model.
  • Retail Reboot: Major Global Brands Transform End-to-End Operations With NVIDIA

    AI is packing and shipping efficiency for the retail and consumer packaged goods (CPG) industries, with a majority of surveyed companies in the space reporting the technology is increasing revenue and reducing operational costs.
    Global brands are reimagining every facet of their businesses with AI, from how products are designed and manufactured to how they’re marketed, shipped and experienced in-store and online.
    At NVIDIA GTC Paris at VivaTech, industry leaders including L’Oréal, LVMH and Nestlé shared how they’re using tools like AI agents and physical AI — powered by NVIDIA AI and simulation technologies — across every step of the product lifecycle to enhance operations and experiences for partners, customers and employees.
    3D Digital Twins and AI Transform Marketing, Advertising and Product Design
    The meeting of generative AI and 3D product digital twins results in unlimited creative potential.
    Nestlé, the world’s largest food and beverage company, today announced a collaboration with NVIDIA and Accenture to launch a new, AI-powered in-house service that will create high-quality product content at scale for e-commerce and digital media channels.
    The new content service, based on digital twins powered by the NVIDIA Omniverse platform, creates exact 3D virtual replicas of physical products. Product packaging can be adjusted or localized digitally, enabling seamless integration into various environments, such as seasonal campaigns or channel-specific formats. This means that new creative content can be generated without having to constantly reshoot from scratch.
    Image courtesy of Nestlé
    The service is developed in partnership with Accenture Song, using Accenture AI Refinery built on NVIDIA Omniverse for advanced digital twin creation. It uses NVIDIA AI Enterprise for generative AI, hosted on Microsoft Azure for robust cloud infrastructure.
    Nestlé already has a baseline of 4,000 3D digital products — mainly for global brands — with the ambition to convert a total of 10,000 products into digital twins in the next two years across global and local brands.
    LVMH, the world’s leading luxury goods company, home to 75 distinguished maisons, is bringing 3D digital twins to its content production processes through its wine and spirits division, Moët Hennessy.
    The group partnered with content configuration engine Grip to develop a solution using the NVIDIA Omniverse platform, which enables the creation of 3D digital twins that power content variation production. With Grip’s solution, Moët Hennessy teams can quickly generate digital marketing assets and experiences to promote luxury products at scale.
    The initiative, led by Capucine Lafarge and Chloé Fournier, has been recognized by LVMH as a leading approach to scaling content creation.
    Image courtesy of Grip
    L’Oréal Gives Marketing and Online Shopping an AI Makeover
    Innovation starts at the drawing board. Today, that board is digital — and it’s powered by AI.
    L’Oréal Groupe, the world’s leading beauty player, announced its collaboration with NVIDIA today. Through this collaboration, L’Oréal and its partner ecosystem will leverage the NVIDIA AI Enterprise platform to transform its consumer beauty experiences, marketing and advertising content pipelines.
    “AI doesn’t think with the same constraints as a human being. That opens new avenues for creativity,” said Anne Machet, global head of content and entertainment at L’Oréal. “Generative AI enables our teams and partner agencies to explore creative possibilities.”
    CreAItech, L’Oréal’s generative AI content platform, is augmenting the creativity of marketing and content teams. Combining a modular ecosystem of models, expertise, technologies and partners — including NVIDIA — CreAItech empowers marketers to generate thousands of unique, on-brand images, videos and lines of text for diverse platforms and global audiences.
    The solution empowers L’Oréal’s marketing teams to quickly iterate on campaigns that improve consumer engagement across social media, e-commerce content and influencer marketing — driving higher conversion rates.

    Noli.com, the first AI-powered multi-brand marketplace startup founded and backed by the L’Oréal Groupe, is reinventing how people discover and shop for beauty products.
    Noli’s AI Beauty Matchmaker experience uses L’Oréal Groupe’s century-long expertise in beauty, including its extensive knowledge of beauty science, beauty tech and consumer insights, built from over 1 million skin data points and analysis of thousands of product formulations. It gives users a BeautyDNA profile with expert-level guidance and personalized product recommendations for skincare and haircare.
    “Beauty shoppers are often overwhelmed by choice and struggling to find the products that are right for them,” said Amos Susskind, founder and CEO of Noli. “By applying the latest AI models accelerated by NVIDIA and Accenture to the unparalleled knowledge base and expertise of the L’Oréal Groupe, we can provide hyper-personalized, explainable recommendations to our users.” 

    The Accenture AI Refinery, powered by NVIDIA AI Enterprise, will provide the platform for Noli to experiment and scale. Noli’s new agent models will use NVIDIA NIM and NVIDIA NeMo microservices, including NeMo Retriever, running on Microsoft Azure.
    Rapid Innovation With the NVIDIA Partner Ecosystem
    NVIDIA’s ecosystem of solution provider partners empowers retail and CPG companies to innovate faster, personalize customer experiences, and optimize operations with NVIDIA accelerated computing and AI.
    Global digital agency Monks is reshaping the landscape of AI-driven marketing, creative production and enterprise transformation. At the heart of this innovation lies Monks.Flow, a platform that enhances both the speed and sophistication of creative workflows through NVIDIA Omniverse, NVIDIA NIM microservices and Triton Inference Server for lightning-fast inference.
    AI image solutions provider Bria is helping retail giants like Lidl and L’Oréal enhance marketing asset creation. Bria AI transforms static product images into compelling, dynamic advertisements that can be quickly scaled for use across any marketing need.
    The company’s generative AI platform uses NVIDIA Triton Inference Server software and the NVIDIA TensorRT software development kit for accelerated inference, as well as NVIDIA NIM and NeMo microservices for quick image generation at scale.
    Physical AI Brings Acceleration to Supply Chain and Logistics
    AI’s impact extends far beyond the digital world. Physical AI-powered warehousing robots, for example, are helping maximize efficiency in retail supply chain operations. Four in five retail companies have reported that AI has helped reduce supply chain operational costs, with 25% reporting cost reductions of at least 10%.
    Technology providers Lyric, KoiReader Technologies and Exotec are tackling the challenges of integrating AI into complex warehouse environments.
    Lyric is using the NVIDIA cuOpt GPU-accelerated solver for warehouse network planning and route optimization, and is collaborating with NVIDIA to apply the technology to broader supply chain decision-making problems. KoiReader Technologies is tapping the NVIDIA Metropolis stack for its computer vision solutions within logistics, supply chain and manufacturing environments using the KoiVision Platform. And Exotec is using NVIDIA CUDA libraries and the NVIDIA JetPack software development kit for embedded robotic systems in warehouse and distribution centers.
    From real-time robotics orchestration to predictive maintenance, these solutions are delivering impact on uptime, throughput and cost savings for supply chain operations.
    Learn more by joining a follow-up discussion on digital twins and AI-powered creativity with Microsoft, Nestlé, Accenture and NVIDIA at Cannes Lions on Monday, June 16.
    Watch the NVIDIA GTC Paris keynote from NVIDIA founder and CEO Jensen Huang at VivaTech, and explore GTC Paris sessions.
    #retail #reboot #major #global #brands
    Retail Reboot: Major Global Brands Transform End-to-End Operations With NVIDIA
    AI is packing and shipping efficiency for the retail and consumer packaged goodsindustries, with a majority of surveyed companies in the space reporting the technology is increasing revenue and reducing operational costs. Global brands are reimagining every facet of their businesses with AI, from how products are designed and manufactured to how they’re marketed, shipped and experienced in-store and online. At NVIDIA GTC Paris at VivaTech, industry leaders including L’Oréal, LVMH and Nestlé shared how they’re using tools like AI agents and physical AI — powered by NVIDIA AI and simulation technologies — across every step of the product lifecycle to enhance operations and experiences for partners, customers and employees. 3D Digital Twins and AI Transform Marketing, Advertising and Product Design The meeting of generative AI and 3D product digital twins results in unlimited creative potential. Nestlé, the world’s largest food and beverage company, today announced a collaboration with NVIDIA and Accenture to launch a new, AI-powered in-house service that will create high-quality product content at scale for e-commerce and digital media channels. The new content service, based on digital twins powered by the NVIDIA Omniverse platform, creates exact 3D virtual replicas of physical products. Product packaging can be adjusted or localized digitally, enabling seamless integration into various environments, such as seasonal campaigns or channel-specific formats. This means that new creative content can be generated without having to constantly reshoot from scratch. Image courtesy of Nestlé The service is developed in partnership with Accenture Song, using Accenture AI Refinery built on NVIDIA Omniverse for advanced digital twin creation. It uses NVIDIA AI Enterprise for generative AI, hosted on Microsoft Azure for robust cloud infrastructure. 
    Nestlé already has a baseline of 4,000 3D digital products — mainly for global brands — with the ambition to convert a total of 10,000 products into digital twins in the next two years across global and local brands.
    LVMH, the world’s leading luxury goods company, home to 75 distinguished maisons, is bringing 3D digital twins to its content production processes through its wine and spirits division, Moët Hennessy. The group partnered with content configuration engine Grip to develop a solution using the NVIDIA Omniverse platform, which enables the creation of 3D digital twins that power content variation production.
    With Grip’s solution, Moët Hennessy teams can quickly generate digital marketing assets and experiences to promote luxury products at scale. The initiative, led by Capucine Lafarge and Chloé Fournier, has been recognized by LVMH as a leading approach to scaling content creation.
    Image courtesy of Grip
    L’Oréal Gives Marketing and Online Shopping an AI Makeover
    Innovation starts at the drawing board. Today, that board is digital — and it’s powered by AI. L’Oréal Groupe, the world’s leading beauty player, announced its collaboration with NVIDIA today. Through this collaboration, L’Oréal and its partner ecosystem will leverage the NVIDIA AI Enterprise platform to transform its consumer beauty experiences, marketing and advertising content pipelines.
    “AI doesn’t think with the same constraints as a human being. That opens new avenues for creativity,” said Anne Machet, global head of content and entertainment at L’Oréal. “Generative AI enables our teams and partner agencies to explore creative possibilities.”
    CreAItech, L’Oréal’s generative AI content platform, is augmenting the creativity of marketing and content teams.
    Combining a modular ecosystem of models, expertise, technologies and partners — including NVIDIA — CreAItech empowers marketers to generate thousands of unique, on-brand images, videos and lines of text for diverse platforms and global audiences. The solution empowers L’Oréal’s marketing teams to quickly iterate on campaigns that improve consumer engagement across social media, e-commerce content and influencer marketing — driving higher conversion rates.
    Noli.com, the first AI-powered multi-brand marketplace startup founded and backed by the L’Oréal Groupe, is reinventing how people discover and shop for beauty products. Noli’s AI Beauty Matchmaker experience uses L’Oréal Groupe’s century-long expertise in beauty, including its extensive knowledge of beauty science, beauty tech and consumer insights, built from over 1 million skin data points and analysis of thousands of product formulations. It gives users a BeautyDNA profile with expert-level guidance and personalized product recommendations for skincare and haircare.
    “Beauty shoppers are often overwhelmed by choice and struggling to find the products that are right for them,” said Amos Susskind, founder and CEO of Noli. “By applying the latest AI models accelerated by NVIDIA and Accenture to the unparalleled knowledge base and expertise of the L’Oréal Groupe, we can provide hyper-personalized, explainable recommendations to our users.”
    The Accenture AI Refinery, powered by NVIDIA AI Enterprise, will provide the platform for Noli to experiment and scale. Noli’s new agent models will use NVIDIA NIM and NVIDIA NeMo microservices, including NeMo Retriever, running on Microsoft Azure.
    Rapid Innovation With the NVIDIA Partner Ecosystem
    NVIDIA’s ecosystem of solution provider partners empowers retail and CPG companies to innovate faster, personalize customer experiences and optimize operations with NVIDIA accelerated computing and AI.
    Global digital agency Monks is reshaping the landscape of AI-driven marketing, creative production and enterprise transformation. At the heart of its innovation lies the Monks.Flow platform, which enhances both the speed and sophistication of creative workflows through NVIDIA Omniverse, NVIDIA NIM microservices and Triton Inference Server for lightning-fast inference.
    AI image solutions provider Bria is helping retail giants like Lidl and L’Oréal enhance marketing asset creation. Bria AI transforms static product images into compelling, dynamic advertisements that can be quickly scaled for use across any marketing need. The company’s generative AI platform uses NVIDIA Triton Inference Server software and the NVIDIA TensorRT software development kit for accelerated inference, as well as NVIDIA NIM and NeMo microservices for quick image generation at scale.
    Physical AI Brings Acceleration to Supply Chain and Logistics
    AI’s impact extends far beyond the digital world. Physical AI-powered warehousing robots, for example, are helping maximize efficiency in retail supply chain operations. Four in five retail companies have reported that AI has helped reduce supply chain operational costs, with 25% reporting cost reductions of at least 10%.
    Technology providers Lyric, KoiReader Technologies and Exotec are tackling the challenges of integrating AI into complex warehouse environments. Lyric is using the NVIDIA cuOpt GPU-accelerated solver for warehouse network planning and route optimization, and is collaborating with NVIDIA to apply the technology to broader supply chain decision-making problems. KoiReader Technologies is tapping the NVIDIA Metropolis stack for its computer vision solutions within logistics, supply chain and manufacturing environments using the KoiVision Platform. And Exotec is using NVIDIA CUDA libraries and the NVIDIA JetPack software development kit for embedded robotic systems in warehouses and distribution centers.
    From real-time robotics orchestration to predictive maintenance, these solutions are delivering impact on uptime, throughput and cost savings for supply chain operations.
    Learn more by joining a follow-up discussion on digital twins and AI-powered creativity with Microsoft, Nestlé, Accenture and NVIDIA at Cannes Lions on Monday, June 16.
    Watch the NVIDIA GTC Paris keynote from NVIDIA founder and CEO Jensen Huang at VivaTech, and explore GTC Paris sessions.
  • GameStop Buy 2 Get 1 Free Deal is Back, But There's a Catch

    On June 27, GameStop announced that its Buy 2 Get 1 Free offer on all pre-owned video games is returning as a Pro Exclusive promotion. Although the returning GameStop offer is valid on pre-owned copies of games like Mario Kart 8 Deluxe and Super Mario 3D All-Stars, access requires customers to upgrade their accounts to a Pro membership, currently priced at $25 per year.
  • NVIDIA Brings Physical AI to European Cities With New Blueprint for Smart City AI

    Urban populations are expected to double by 2050, which means around 2.5 billion people could be added to urban areas by the middle of the century, driving the need for more sustainable urban planning and public services. Cities across the globe are turning to digital twins and AI agents for urban planning scenario analysis and data-driven operational decisions.
    Building a digital twin of a city and testing smart city AI agents within it, however, is a complex and resource-intensive endeavor, fraught with technical and operational challenges.
    To address those challenges, NVIDIA today announced the NVIDIA Omniverse Blueprint for smart city AI, a reference framework that combines the NVIDIA Omniverse, Cosmos, NeMo and Metropolis platforms to bring the benefits of physical AI to entire cities and their critical infrastructure.
    Using the blueprint, developers can create simulation-ready, or SimReady, photorealistic digital twins of cities in which to build and test AI agents that help monitor and optimize city operations.
    Leading companies including XXII, AVES Reality, Akila, Blyncsy, Bentley, Cesium, K2K, Linker Vision, Milestone Systems, Nebius, SNCF Gares&Connexions, Trimble and Younite AI are among the first to use the new blueprint.

    NVIDIA Omniverse Blueprint for Smart City AI 
    The NVIDIA Omniverse Blueprint for smart city AI provides the complete software stack needed to accelerate the development and testing of AI agents in physically accurate digital twins of cities. It includes:

    NVIDIA Omniverse to build physically accurate digital twins and run simulations at city scale.
    NVIDIA Cosmos to generate synthetic data at scale for post-training AI models.
    NVIDIA NeMo to curate high-quality data and use that data to train and fine-tune vision language models (VLMs) and large language models.
    NVIDIA Metropolis to build and deploy video analytics AI agents based on the NVIDIA AI Blueprint for video search and summarization (VSS), helping process vast amounts of video data and provide critical insights to optimize business processes.

    The blueprint workflow comprises three key steps. First, developers create a SimReady digital twin of locations and facilities using aerial, satellite or map data with Omniverse and Cosmos. Second, they can train and fine-tune AI models, like computer vision models and VLMs, using NVIDIA TAO and NeMo Curator to improve accuracy for vision AI use cases​. Finally, real-time AI agents powered by these customized models are deployed to alert, summarize and query camera and sensor data using the Metropolis VSS blueprint.
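The three-step workflow above can be sketched as a plain orchestration skeleton. Everything below is a stand-in: the function names and bodies are hypothetical placeholders for the Omniverse/Cosmos, TAO/NeMo Curator and Metropolis VSS stages, not real SDK calls:

```python
def build_simready_twin(sources):
    """Step 1 (placeholder): fuse aerial, satellite and map data into a
    simulation-ready city twin (in practice, Omniverse + Cosmos)."""
    return {"twin": "city", "inputs": list(sources)}

def fine_tune_models(twin, base_models):
    """Step 2 (placeholder): curate data from the twin and fine-tune
    computer vision models and VLMs (in practice, NVIDIA TAO and
    NeMo Curator)."""
    return [f"{name}-finetuned" for name in base_models]

def deploy_agents(models):
    """Step 3 (placeholder): stand up real-time agents that alert,
    summarize and answer queries over camera and sensor data
    (in practice, the Metropolis VSS blueprint)."""
    return [{"model": m, "tasks": ["alert", "summarize", "query"]} for m in models]

twin = build_simready_twin(["aerial", "satellite", "map"])
models = fine_tune_models(twin, ["cv-base", "vlm-base"])
agents = deploy_agents(models)
print([a["model"] for a in agents])  # ['cv-base-finetuned', 'vlm-base-finetuned']
```

The point of the sketch is the data flow: the twin produced in step 1 feeds model customization in step 2, and only the customized models reach deployment in step 3.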
    NVIDIA Partner Ecosystem Powers Smart Cities Worldwide
    The blueprint for smart city AI enables a large ecosystem of partners to use a single workflow to build and activate digital twins for smart city use cases, tapping into a combination of NVIDIA’s technologies and their own.
    SNCF Gares&Connexions, which operates a network of 3,000 train stations across France and Monaco, has deployed a digital twin and AI agents to enable real-time operational monitoring, emergency response simulations and infrastructure upgrade planning.
    This helps each station analyze operational data such as energy and water use, and enables predictive maintenance capabilities, automated reporting and GDPR-compliant video analytics for incident detection and crowd management.
    Powered by Omniverse, Metropolis and solutions from ecosystem partners Akila and XXII, SNCF Gares&Connexions’ physical AI deployment at the Monaco-Monte-Carlo and Marseille stations has helped the company achieve a 100% on-time preventive maintenance completion rate, a 50% reduction in downtime and issue response time, and a 20% reduction in energy consumption.

    The city of Palermo in Sicily is using AI agents and digital twins from its partner K2K to improve public health and safety by helping city operators process and analyze footage from over 1,000 public video streams at a rate of nearly 50 billion pixels per second.
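The quoted throughput holds up to a back-of-the-envelope check: 1,000 streams at roughly 1080p and 25 frames per second land near 50 billion pixels per second. The resolution and frame rate below are illustrative assumptions, not figures from K2K:

```python
# Back-of-the-envelope check of the ~50 billion pixels/second figure.
streams = 1_000
width, height = 1920, 1080   # assumed 1080p camera streams
fps = 25                     # assumed frame rate

pixels_per_second = streams * width * height * fps
print(f"{pixels_per_second / 1e9:.1f} billion px/s")  # 51.8 billion px/s
```

Higher-resolution or higher-frame-rate feeds would push the total well past that mark, so the "nearly 50 billion" figure is consistent with ordinary municipal camera specs.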
    Tapped by Sicily, K2K’s AI agents — built with the NVIDIA AI Blueprint for VSS and cloud solutions from Nebius — can interpret and act on video data to provide real-time alerts on public events.
    To accurately predict and resolve traffic incidents, K2K is generating synthetic data with Cosmos world foundation models to simulate different driving conditions. Then, K2K uses the data to fine-tune the VLMs powering the AI agents with NeMo Curator. These simulations enable K2K’s AI agents to create over 100,000 predictions per second.

    Milestone Systems — in collaboration with NVIDIA and European cities — has launched Project Hafnia, an initiative to build an anonymized, ethically sourced video data platform for cities to develop and train AI models and applications while maintaining regulatory compliance.
    Using a combination of Cosmos and NeMo Curator on NVIDIA DGX Cloud and Nebius’ sovereign European cloud infrastructure, Project Hafnia scales up and enables European-compliant training and fine-tuning of video-centric AI models, including VLMs, for a variety of smart city use cases.
    The project’s initial rollout, taking place in Genoa, Italy, features one of the world’s first VLMs for intelligent transportation systems.

    Linker Vision was among the first to partner with NVIDIA to deploy smart city digital twins and AI agents for Kaohsiung City, Taiwan — powered by Omniverse, Cosmos and Metropolis. Linker Vision worked with AVES Reality, a digital twin company, to bring aerial imagery of cities and infrastructure into 3D geometry and ultimately into SimReady Omniverse digital twins.
    Linker Vision’s AI-powered application then built, trained and tested visual AI agents in a digital twin before deployment in the physical city. Now, it’s scaling to analyze 50,000 video streams in real time with generative AI to understand and narrate complex urban events like floods and traffic accidents. Linker Vision delivers timely insights to a dozen city departments through a single integrated AI-powered platform, breaking silos and reducing incident response times by up to 80%.

    Bentley Systems is joining the effort to bring physical AI to cities with the NVIDIA blueprint. Cesium, the open 3D geospatial platform, provides the foundation for visualizing, analyzing and managing infrastructure projects, and ports digital twins to Omniverse. Bentley’s AI platform, Blyncsy, uses synthetic data generation and Metropolis to analyze road conditions and improve maintenance.
    Trimble, a global technology company that enables essential industries including construction, geospatial and transportation, is exploring ways to integrate components of the Omniverse blueprint into its reality capture workflows and Trimble Connect digital twin platform for surveying and mapping applications for smart cities.
    Younite AI, a developer of AI and 3D digital twin solutions, is adopting the blueprint to accelerate its development pipeline, enabling the company to quickly move from operational digital twins to large-scale urban simulations, improve synthetic data generation, integrate real-time IoT sensor data and deploy AI agents.
    Learn more about the NVIDIA Omniverse Blueprint for smart city AI by attending this GTC Paris session or watching the on-demand video after the event. Sign up to be notified when the blueprint is available.
    Watch the NVIDIA GTC Paris keynote from NVIDIA founder and CEO Jensen Huang at VivaTech, and explore GTC Paris sessions.
    BLOGS.NVIDIA.COM
    NVIDIA Brings Physical AI to European Cities With New Blueprint for Smart City AI
    Urban populations are expected to double by 2050, which means around 2.5 billion people could be added to urban areas by the middle of the century, driving the need for more sustainable urban planning and public services. Cities across the globe are turning to digital twins and AI agents for urban planning scenario analysis and data-driven operational decisions. Building a digital twin of a city and testing smart city AI agents within it, however, is a complex and resource-intensive endeavor, fraught with technical and operational challenges. To address those challenges, NVIDIA today announced the NVIDIA Omniverse Blueprint for smart city AI, a reference framework that combines the NVIDIA Omniverse, Cosmos, NeMo and Metropolis platforms to bring the benefits of physical AI to entire cities and their critical infrastructure. Using the blueprint, developers can build simulation-ready, or SimReady, photorealistic digital twins of cities to build and test AI agents that can help monitor and optimize city operations. Leading companies including XXII, AVES Reality, Akila, Blyncsy, Bentley, Cesium, K2K, Linker Vision, Milestone Systems, Nebius, SNCF Gares&Connexions, Trimble and Younite AI are among the first to use the new blueprint. NVIDIA Omniverse Blueprint for Smart City AI  The NVIDIA Omniverse Blueprint for smart city AI provides the complete software stack needed to accelerate the development and testing of AI agents in physically accurate digital twins of cities. It includes: NVIDIA Omniverse to build physically accurate digital twins and run simulations at city scale. NVIDIA Cosmos to generate synthetic data at scale for post-training AI models. NVIDIA NeMo to curate high-quality data and use that data to train and fine-tune vision language models (VLMs) and large language models. 
NVIDIA Metropolis to build and deploy video analytics AI agents based on the NVIDIA AI Blueprint for video search and summarization (VSS), helping process vast amounts of video data and provide critical insights to optimize business processes.
The blueprint workflow comprises three key steps. First, developers create a SimReady digital twin of locations and facilities using aerial, satellite or map data with Omniverse and Cosmos. Second, they train and fine-tune AI models, like computer vision models and VLMs, using NVIDIA TAO and NeMo Curator to improve accuracy for vision AI use cases. Finally, real-time AI agents powered by these customized models are deployed to alert, summarize and query camera and sensor data using the Metropolis VSS blueprint.
NVIDIA Partner Ecosystem Powers Smart Cities Worldwide
The blueprint for smart city AI enables a large ecosystem of partners to use a single workflow to build and activate digital twins for smart city use cases, tapping into a combination of NVIDIA’s technologies and their own.
SNCF Gares&Connexions, which operates a network of 3,000 train stations across France and Monaco, has deployed a digital twin and AI agents to enable real-time operational monitoring, emergency response simulations and infrastructure upgrade planning. This helps each station analyze operational data such as energy and water use, and enables predictive maintenance capabilities, automated reporting and GDPR-compliant video analytics for incident detection and crowd management. Powered by Omniverse, Metropolis and solutions from ecosystem partners Akila and XXII, the physical AI deployment at the Monaco-Monte-Carlo and Marseille stations has helped SNCF Gares&Connexions achieve a 100% on-time preventive maintenance completion rate, a 50% reduction in downtime and issue response time, and a 20% reduction in energy consumption.
The city of Palermo in Sicily is using AI agents and digital twins from its partner K2K to improve public health and safety by helping city operators process and analyze footage from over 1,000 public video streams at a rate of nearly 50 billion pixels per second. Tapped by Sicily, K2K’s AI agents — built with the NVIDIA AI Blueprint for VSS and cloud solutions from Nebius — can interpret and act on video data to provide real-time alerts on public events. To accurately predict and resolve traffic incidents, K2K generates synthetic data with Cosmos world foundation models to simulate different driving conditions, then uses that data to fine-tune the VLMs powering the AI agents with NeMo Curator. These simulations enable K2K’s AI agents to create over 100,000 predictions per second.
Milestone Systems — in collaboration with NVIDIA and European cities — has launched Project Hafnia, an initiative to build an anonymized, ethically sourced video data platform for cities to develop and train AI models and applications while maintaining regulatory compliance. Using a combination of Cosmos and NeMo Curator on NVIDIA DGX Cloud and Nebius’ sovereign European cloud infrastructure, Project Hafnia scales up and enables European-compliant training and fine-tuning of video-centric AI models, including VLMs, for a variety of smart city use cases. The project’s initial rollout, taking place in Genoa, Italy, features one of the world’s first VLMs for intelligent transportation systems.
Linker Vision was among the first to partner with NVIDIA to deploy smart city digital twins and AI agents for Kaohsiung City, Taiwan — powered by Omniverse, Cosmos and Metropolis.
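As a quick sanity check on the Palermo throughput figure quoted above: 1,000 streams at nearly 50 billion pixels per second averages about 50 megapixels per stream per second, which is consistent with 1080p video at roughly 24 frames per second. The stream format is an assumption used only to show the number is plausible, not a figure from the article:

```python
# Back-of-the-envelope check of the "~50 billion pixels per second" figure
# quoted for Palermo's 1,000 public video streams. The 1080p/24fps stream
# format is an assumed example, not stated in the article.
streams = 1_000
width, height, fps = 1920, 1080, 24
pixels_per_stream = width * height * fps      # 49,766,400 pixels/s per stream
total = streams * pixels_per_stream
print(f"{total / 1e9:.1f} billion pixels per second")  # ~49.8
```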
Linker Vision worked with AVES Reality, a digital twin company, to bring aerial imagery of cities and infrastructure into 3D geometry and ultimately into SimReady Omniverse digital twins. Linker Vision’s AI-powered application then built, trained and tested visual AI agents in a digital twin before deployment in the physical city. Now, it’s scaling to analyze 50,000 video streams in real time with generative AI to understand and narrate complex urban events like floods and traffic accidents. Linker Vision delivers timely insights to a dozen city departments through a single integrated AI-powered platform, breaking silos and reducing incident response times by up to 80%.
Bentley Systems is joining the effort to bring physical AI to cities with the NVIDIA blueprint. Cesium, the open 3D geospatial platform, provides the foundation for visualizing, analyzing and managing infrastructure projects, and ports digital twins to Omniverse. Bentley’s AI platform Blyncsy uses synthetic data generation and Metropolis to analyze road conditions and improve maintenance.
Trimble, a global technology company that enables essential industries including construction, geospatial and transportation, is exploring ways to integrate components of the Omniverse blueprint into its reality capture workflows and the Trimble Connect digital twin platform for surveying and mapping applications for smart cities.
Younite AI, a developer of AI and 3D digital twin solutions, is adopting the blueprint to accelerate its development pipeline, enabling the company to quickly move from operational digital twins to large-scale urban simulations, improve synthetic data generation, integrate real-time IoT sensor data and deploy AI agents.
Learn more about the NVIDIA Omniverse Blueprint for smart city AI by attending this GTC Paris session or watching the on-demand video after the event.
Sign up to be notified when the blueprint is available. Watch the NVIDIA GTC Paris keynote from NVIDIA founder and CEO Jensen Huang at VivaTech, and explore GTC Paris sessions.
  • Lost Records developer Don't Nod is making layoffs. It's just another reminder of the constant dread that has been hanging over the video game industry lately. Not much else to say, really. Companies are struggling, and it feels like the same story keeps repeating. Layoffs happen, and everyone just moves on.

    #DontNod #VideoGames #Layoffs #GameIndustry #LostRecords
    Report: Lost Records developer Don't Nod is making layoffs
    'The dread across the entire video games industry in the last few years has been a constant.'
  • So, it turns out that the role of censorship in supporting Israel is as clear as mud. After the recent video of the Israeli army spokesperson, one might wonder if the "censorship champions" are actually just the world's best PR team. Who knew that silencing the truth could be such a lucrative career path? But hey, at least they’re consistent—consistently dodging accountability, that is. It’s almost like they think we can’t handle the truth. Keep it up, guys; your creativity in twisting narratives is truly inspiring!

    #Censorship #Israel #MediaManipulation #Propaganda #TruthHurts
    So, it turns out that the role of censorship in supporting Israel is as clear as mud. After the recent video of the Israeli army spokesperson, one might wonder if the "censorship champions" are actually just the world's best PR team. Who knew that silencing the truth could be such a lucrative career path? But hey, at least they’re consistent—consistently dodging accountability, that is. It’s almost like they think we can’t handle the truth. Keep it up, guys; your creativity in twisting narratives is truly inspiring! #Censorship #Israel #MediaManipulation #Propaganda #TruthHurts
    ARABHARDWARE.NET
The Role of Censorship in Supporting Israel After the Video of the Occupation Army's Official Spokesperson
  • What a colossal disappointment! The Switch 2's first new GameCube game is… Super Mario Strikers? Seriously?! After all the anticipation for classics like Luigi’s Mansion or Super Mario Sunshine, we get a mediocre soccer game as part of the Switch Online + Expansion Pack library. This is not the nostalgia trip we signed up for! Nintendo, how low can you go? This is an insult to fans craving real innovation and quality. Instead of delivering something groundbreaking, you're recycling an old franchise that barely scratched the surface of fun. Where's the creativity? Where's the passion? It's time to wake up, Nintendo!

    #Nintendo #Switch2 #MarioStrikers #GamingDisappointment #VideoGames
    KOTAKU.COM
    The Switch 2's First New GameCube Game Is A Mario Strikers That's Actually Good
Just under a month since it launched, the Switch 2 is getting its first new GameCube game as part of its Switch Online + Expansion Pack library. Is it Luigi’s Mansion? Super Mario Sunshine?? Fire Emblem: Path of Radiance??? No, it’s Super Mario Strikers.
  • Plug and Play: Build a G-Assist Plug-In Today

    Project G-Assist — available through the NVIDIA App — is an experimental AI assistant that helps tune, control and optimize NVIDIA GeForce RTX systems.
    NVIDIA’s Plug and Play: Project G-Assist Plug-In Hackathon — running virtually through Wednesday, July 16 — invites the community to explore AI and build custom G-Assist plug-ins for a chance to win prizes and be featured on NVIDIA social media channels.

    G-Assist allows users to control their RTX GPU and other system settings using natural language, thanks to a small language model that runs on device. It can be used from the NVIDIA Overlay in the NVIDIA App without needing to tab out or switch programs. Users can expand its capabilities via plug-ins and even connect it to agentic frameworks such as Langflow.
    Below, find popular G-Assist plug-ins, hackathon details and tips to get started.
    Plug-In and Win
    Join the hackathon by registering and checking out the curated technical resources.
    G-Assist plug-ins can be built in several ways, including with Python for rapid development, with C++ for performance-critical apps and with custom system interactions for hardware and operating system automation.
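For the Python route, a plug-in typically boils down to a small command-dispatch loop: parse an incoming command, route it to a handler, and return a structured reply. The sketch below illustrates that shape only; the JSON message format and the function name are assumptions for illustration, since the real G-Assist framing is defined in NVIDIA's sample repository on GitHub:

```python
import json

# Illustrative sketch of a Python plug-in's command-dispatch loop. The JSON
# shape and the handler below are hypothetical, not the actual G-Assist
# protocol (see NVIDIA's sample plug-ins for the real message framing).

def get_greeting(params):
    # Hypothetical plug-in function invoked by name.
    return {"message": f"Hello from the {params.get('gpu', 'RTX')} plug-in!"}

HANDLERS = {"get_greeting": get_greeting}

def handle_command(raw: str) -> str:
    """Parse one JSON command, dispatch to a registered handler, return a JSON reply."""
    cmd = json.loads(raw)
    handler = HANDLERS.get(cmd.get("func"))
    if handler is None:
        return json.dumps({"success": False, "error": "unknown function"})
    return json.dumps({"success": True, **handler(cmd.get("params", {}))})

print(handle_command('{"func": "get_greeting", "params": {"gpu": "RTX 5090"}}'))
```

A C++ plug-in would implement the same dispatch idea, trading development speed for performance-critical control.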
For those who prefer vibe coding, the G-Assist Plug-In Builder — a ChatGPT-based app that allows no-code or low-code development with natural language commands — makes it easy for enthusiasts to start creating plug-ins.
To submit an entry, participants must provide a GitHub repository including the source code file (plugin.py), requirements.txt, manifest.json, config.json (if applicable), a plug-in executable file and a README.
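To make that file list concrete, a minimal manifest.json might look like the fragment below. Every field name here is an illustrative assumption; the authoritative schema ships with NVIDIA's sample plug-ins on GitHub:

```json
{
  "manifestVersion": 1,
  "executable": "g-assist-plugin-example.exe",
  "functions": [
    {
      "name": "get_example_status",
      "description": "Hypothetical entry describing one capability G-Assist can invoke.",
      "parameters": {}
    }
  ]
}
```

This sits alongside plugin.py (the source), requirements.txt (its dependencies), an optional config.json for user-editable settings, and the README describing setup.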
    Then, submit a video — between 30 seconds and two minutes — showcasing the plug-in in action.
    Finally, hackathoners must promote their plug-in using #AIonRTXHackathon on a social media channel: Instagram, TikTok or X. Submit projects via this form by Wednesday, July 16.
    Judges will assess plug-ins based on three main criteria: 1) innovation and creativity, 2) technical execution and integration, reviewing technical depth, G-Assist integration and scalability, and 3) usability and community impact, aka how easy it is to use the plug-in.
    Winners will be selected on Wednesday, Aug. 20. First place will receive a GeForce RTX 5090 laptop, second place a GeForce RTX 5080 GPU and third a GeForce RTX 5070 GPU. These top three will also be featured on NVIDIA’s social media channels, get the opportunity to meet the NVIDIA G-Assist team and earn an NVIDIA Deep Learning Institute self-paced course credit.
Project G-Assist requires a GeForce RTX 50, 40 or 30 Series Desktop GPU with at least 12GB of VRAM, Windows 11 or 10 operating system, a compatible CPU (Intel Pentium G Series, Core i3, i5, i7 or higher; AMD FX, Ryzen 3, 5, 7, 9, Threadripper or higher), specific disk space requirements and a recent GeForce Game Ready Driver or NVIDIA Studio Driver.
Plug-In(spiration)
Explore open-source plug-in samples available on GitHub, which showcase the diverse ways on-device AI can enhance PC and gaming workflows.

    Popular plug-ins include:

Google Gemini: Enables real-time search-based queries through Google Search integration and large language model-based queries through Gemini, all from the convenience of the NVIDIA App Overlay without needing to switch programs.
    Discord: Enables users to easily share game highlights or messages directly to Discord servers without disrupting gameplay.
    IFTTT: Lets users create automations across hundreds of compatible endpoints to trigger IoT routines — such as adjusting room lights and smart shades, or pushing the latest gaming news to a mobile device.
    Spotify: Lets users control Spotify using simple voice commands or the G-Assist interface to play favorite tracks and manage playlists.
    Twitch: Checks if any Twitch streamer is currently live and can access detailed stream information such as titles, games, view counts and more.
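The Twitch plug-in's live-status check can be sketched against the public Twitch Helix API, whose /streams endpoint returns an empty data array for offline channels and stream details (title, game_name, viewer_count) otherwise. The plug-in's actual internals aren't published here, so treat this as an assumed approach; the credentials are placeholders you would supply yourself:

```python
import json
from urllib.request import Request, urlopen

# Sketch of a Twitch live-status check via the public Helix API. This is an
# assumed approach, not the plug-in's actual code; client_id and oauth_token
# are placeholders for real Twitch developer credentials.

HELIX_STREAMS = "https://api.twitch.tv/helix/streams"

def summarize_stream(payload: dict) -> str:
    """Turn a Helix /streams response into a one-line status string."""
    data = payload.get("data", [])
    if not data:
        return "offline"
    s = data[0]
    return f"live: {s['title']} ({s['game_name']}, {s['viewer_count']} viewers)"

def check_live(login: str, client_id: str, oauth_token: str) -> str:
    """Query Helix for a channel's live status (requires valid credentials)."""
    req = Request(
        f"{HELIX_STREAMS}?user_login={login}",
        headers={"Client-Id": client_id, "Authorization": f"Bearer {oauth_token}"},
    )
    with urlopen(req) as resp:
        return summarize_stream(json.load(resp))

# Offline demo with a canned Helix-style payload (no network call):
print(summarize_stream({"data": [{"title": "Any% runs", "game_name": "Celeste",
                                  "viewer_count": 1234}]}))
```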

Get G-Assist(ance)
    Join the NVIDIA Developer Discord channel to collaborate, share creations and gain support from fellow AI enthusiasts and NVIDIA staff.
Save the date for NVIDIA’s How to Build a G-Assist Plug-In webinar on Wednesday, July 9, from 10-11 a.m. PT, to learn more about Project G-Assist capabilities, discover the fundamentals of building, testing and deploying Project G-Assist plug-ins, and participate in a live Q&A session.
    Explore NVIDIA’s GitHub repository, which provides everything needed to get started developing with G-Assist, including sample plug-ins, step-by-step instructions and documentation for building custom functionalities.
    Learn more about the ChatGPT Plug-In Builder to transform ideas into functional G-Assist plug-ins with minimal coding. The tool uses OpenAI’s custom GPT builder to generate plug-in code and streamline the development process.
    NVIDIA’s technical blog walks through the architecture of a G-Assist plug-in, using a Twitch integration as an example. Discover how plug-ins work, how they communicate with G-Assist and how to build them from scratch.
    Each week, the RTX AI Garage blog series features community-driven AI innovations and content for those looking to learn more about NVIDIA NIM microservices and AI Blueprints, as well as building AI agents, creative workflows, digital humans, productivity apps and more on AI PCs and workstations. 
    Plug in to NVIDIA AI PC on Facebook, Instagram, TikTok and X — and stay informed by subscribing to the RTX AI PC newsletter.
    Follow NVIDIA Workstation on LinkedIn and X. 
    See notice regarding software product information.
  • HPE and NVIDIA Debut AI Factory Stack to Power Next Industrial Shift

    To speed up AI adoption across industries, HPE and NVIDIA today launched new AI factory offerings at HPE Discover in Las Vegas.
The new lineup includes everything from modular AI factory infrastructure and HPE’s AI-ready RTX PRO Servers (HPE ProLiant Compute DL380a Gen12), to the next generation of HPE’s turnkey AI platform, HPE Private Cloud AI. The goal: give enterprises a framework to build and scale generative, agentic and industrial AI.
    The NVIDIA AI Computing by HPE portfolio is now among the broadest in the market.
    The portfolio combines NVIDIA Blackwell accelerated computing, NVIDIA Spectrum-X Ethernet and NVIDIA BlueField-3 networking technologies, NVIDIA AI Enterprise software and HPE’s full portfolio of servers, storage, services and software. This now includes HPE OpsRamp Software, a validated observability solution for the NVIDIA Enterprise AI Factory, and HPE Morpheus Enterprise Software for orchestration. The result is a pre-integrated, modular infrastructure stack to help teams get AI into production faster.
    This includes the next-generation HPE Private Cloud AI, co-engineered with NVIDIA and validated as part of the NVIDIA Enterprise AI Factory framework. This full-stack, turnkey AI factory solution will offer HPE ProLiant Compute DL380a Gen12 servers with the new NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs.
    These new NVIDIA RTX PRO Servers from HPE provide a universal data center platform for a wide range of enterprise AI and industrial AI use cases, and are now available to order from HPE. HPE Private Cloud AI includes the latest NVIDIA AI Blueprints, including the NVIDIA AI-Q Blueprint for AI agent creation and workflows.
    HPE also announced a new NVIDIA HGX B300 system, the HPE Compute XD690, built with NVIDIA Blackwell Ultra GPUs. It’s the latest entry in the NVIDIA AI Computing by HPE lineup and is expected to ship in October.
    In Japan, KDDI is working with HPE to build NVIDIA AI infrastructure to accelerate global adoption.
    The HPE-built KDDI system will be based on the NVIDIA GB200 NVL72 platform, built on the NVIDIA Grace Blackwell architecture, at the KDDI Osaka Sakai Data Center.
    To accelerate AI for financial services, HPE will co-test agentic AI workflows built on Accenture’s AI Refinery with NVIDIA, running on HPE Private Cloud AI. Initial use cases include sourcing, procurement and risk analysis.
    HPE said it’s adding 26 new partners to its “Unleash AI” ecosystem to support more NVIDIA AI use cases. The company now offers more than 70 packaged AI workloads, from fraud detection and video analytics to sovereign AI and cybersecurity.
    Security and governance were a focus, too. HPE Private Cloud AI supports air-gapped management, multi-tenancy and post-quantum cryptography. HPE’s try-before-you-buy program lets customers test the system in Equinix data centers before purchase. HPE also introduced new programs, including AI Acceleration Workshops with NVIDIA, to help scale AI deployments.

    Watch the keynote: HPE CEO Antonio Neri announced the news from the Las Vegas Sphere on Tuesday at 9 a.m. PT. Register for the livestream and watch the replay.
    Explore more: Learn how NVIDIA and HPE build AI factories for every industry. Visit the partner page.
    #hpe #nvidia #debut #factory #stack
    HPE and NVIDIA Debut AI Factory Stack to Power Next Industrial Shift
    To speed up AI adoption across industries, HPE and NVIDIA today launched new AI factory offerings at HPE Discover in Las Vegas. The new lineup includes everything from modular AI factory infrastructure and HPE’s AI-ready RTX PRO Servers, to the next generation of HPE’s turnkey AI platform, HPE Private Cloud AI. The goal: give enterprises a framework to build and scale generative, agentic and industrial AI.

    The NVIDIA AI Computing by HPE portfolio is now among the broadest in the market. It combines NVIDIA Blackwell accelerated computing, NVIDIA Spectrum-X Ethernet and NVIDIA BlueField-3 networking technologies, NVIDIA AI Enterprise software and HPE’s full portfolio of servers, storage, services and software. This now includes HPE OpsRamp Software, a validated observability solution for the NVIDIA Enterprise AI Factory, and HPE Morpheus Enterprise Software for orchestration. The result is a pre-integrated, modular infrastructure stack to help teams get AI into production faster.

    This includes the next-generation HPE Private Cloud AI, co-engineered with NVIDIA and validated as part of the NVIDIA Enterprise AI Factory framework. This full-stack, turnkey AI factory solution will offer HPE ProLiant Compute DL380a Gen12 servers with the new NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs. These new NVIDIA RTX PRO Servers from HPE provide a universal data center platform for a wide range of enterprise AI and industrial AI use cases, and are now available to order from HPE. HPE Private Cloud AI includes the latest NVIDIA AI Blueprints, including the NVIDIA AI-Q Blueprint for AI agent creation and workflows.

    HPE also announced a new NVIDIA HGX B300 system, the HPE Compute XD690, built with NVIDIA Blackwell Ultra GPUs. It’s the latest entry in the NVIDIA AI Computing by HPE lineup and is expected to ship in October.

    In Japan, KDDI is working with HPE to build NVIDIA AI infrastructure to accelerate global adoption. The HPE-built KDDI system will be based on the NVIDIA GB200 NVL72 platform, built on the NVIDIA Grace Blackwell architecture, at the KDDI Osaka Sakai Data Center.

    To accelerate AI for financial services, HPE will co-test agentic AI workflows built on Accenture’s AI Refinery with NVIDIA, running on HPE Private Cloud AI. Initial use cases include sourcing, procurement and risk analysis.

    HPE said it’s adding 26 new partners to its “Unleash AI” ecosystem to support more NVIDIA AI use cases. The company now offers more than 70 packaged AI workloads, from fraud detection and video analytics to sovereign AI and cybersecurity.

    Security and governance were a focus, too. HPE Private Cloud AI supports air-gapped management, multi-tenancy and post-quantum cryptography. HPE’s try-before-you-buy program lets customers test the system in Equinix data centers before purchase. HPE also introduced new programs, including AI Acceleration Workshops with NVIDIA, to help scale AI deployments.

    Watch the keynote: HPE CEO Antonio Neri announced the news from the Las Vegas Sphere on Tuesday at 9 a.m. PT. Register for the livestream and watch the replay.

    Explore more: Learn how NVIDIA and HPE build AI factories for every industry. Visit the partner page.