• NVIDIA CEO Drops the Blueprint for Europe’s AI Boom

    At GTC Paris — held alongside VivaTech, Europe’s largest tech event — NVIDIA founder and CEO Jensen Huang delivered a clear message: Europe isn’t just adopting AI — it’s building it.
    “We now have a new industry, an AI industry, and it’s now part of the new infrastructure, called intelligence infrastructure, that will be used by every country, every society,” Huang said, addressing an audience gathered online and at the iconic Dôme de Paris.
    From exponential inference growth to quantum breakthroughs, and from infrastructure to industry, agentic AI to robotics, Huang outlined how the region is laying the groundwork for an AI-powered future.

    A New Industrial Revolution
    At the heart of this transformation, Huang explained, are systems like GB200 NVL72 — “one giant GPU” and NVIDIA’s most powerful AI platform yet — now in full production and powering everything from sovereign models to quantum computing.
    “This machine was designed to be a thinking machine, a thinking machine, in the sense that it reasons, it plans, it spends a lot of time talking to itself,” Huang said, walking the audience through the size and scale of these machines and their performance.
    At GTC Paris, Huang showed audience members the innards of some of NVIDIA’s latest hardware.
    There’s more coming, with Huang saying NVIDIA’s partners are now producing 1,000 GB200 systems a week, “and this is just the beginning.” He walked the audience through the range of available systems, from the tiny NVIDIA DGX Spark to rack-mounted RTX PRO Servers.
    Huang explained that NVIDIA is working to help countries use technologies like these to build both AI infrastructure — services built for third parties to use and innovate on — and AI factories, which companies build for their own use, to generate revenue.
    NVIDIA is partnering with European governments, telcos and cloud providers to deploy NVIDIA technologies across the region. NVIDIA is also expanding its network of technology centers across Europe — including new hubs in Finland, Germany, Spain, Italy and the U.K. — to accelerate skills development and quantum growth.
    Quantum Meets Classical
    Europe’s quantum ambitions just got a boost.
    The NVIDIA CUDA-Q platform is live on Denmark’s Gefion supercomputer, opening new possibilities for hybrid AI and quantum engineering. In addition, Huang announced that CUDA-Q is now available on NVIDIA Grace Blackwell systems.
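    For developers, the practical entry point is CUDA-Q’s Python interface, where one kernel can run on GPU-accelerated simulators today and hardware targets as they mature. Below is a minimal sketch, assuming a standard CUDA-Q install; the GHZ kernel, qubit count and target name are illustrative, not taken from the keynote:

```python
# Minimal CUDA-Q sketch: build and sample a GHZ state.
# The kernel, qubit count and target below are illustrative examples.
import cudaq

cudaq.set_target("nvidia")  # GPU-accelerated simulator; omit for the default CPU target

@cudaq.kernel
def ghz(qubit_count: int):
    qubits = cudaq.qvector(qubit_count)
    h(qubits[0])                      # put the first qubit in superposition
    for i in range(1, qubit_count):
        x.ctrl(qubits[0], qubits[i])  # entangle the remaining qubits
    mz(qubits)                        # measure all qubits

counts = cudaq.sample(ghz, 4, shots_count=1000)
print(counts)  # expect roughly even counts of '0000' and '1111'
```

    The same kernel code is what a hybrid quantum-classical workflow would dispatch to a QPU backend, which is the portability CUDA-Q is designed around.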
    Across the continent, NVIDIA is partnering with supercomputing centers and quantum hardware builders to advance hybrid quantum-AI research and accelerate quantum error correction.
    “Quantum computing is reaching an inflection point,” Huang said. “We are within reach of being able to apply quantum computing, quantum classical computing, in areas that can solve some interesting problems in the coming years.”
    Sovereign Models, Smarter Agents
    European developers want more control over their models. Enter NVIDIA Nemotron, designed to help build large language models tuned to local needs.
    “And so now you know that you have access to an enhanced open model that is still open, that is top of the leader chart,” Huang said.
    These models will be coming to Perplexity, a reasoning search engine, enabling secure, multilingual AI deployment across Europe.
    “You can now ask and get questions answered in the language, in the culture, in the sensibility of your country,” Huang said.
    Huang explained how NVIDIA is helping countries across Europe build AI infrastructure.
    Every company will build its own agents, Huang said. To help create them, he introduced a suite of agentic AI blueprints, including an Agentic AI Safety blueprint for enterprises and governments.
    The new NVIDIA NeMo Agent toolkit and NVIDIA AI Blueprint for building data flywheels further accelerate the development of safe, high-performing AI agents.
    To help deploy these agents, NVIDIA is partnering with European governments, telcos and cloud providers to roll out the DGX Cloud Lepton platform across the region, providing instant access to accelerated computing capacity.
    “One model architecture, one deployment, and you can run it anywhere,” Huang said, adding that Lepton is now integrated with Hugging Face, giving developers direct access to global compute.
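    As a concrete illustration of that portability, an open Nemotron checkpoint can be pulled through the standard Hugging Face transformers API. This is a hedged sketch: the model ID below is one published example, not an endorsement of a specific release, so verify it against current listings, and note that a 70B model needs substantial GPU memory (smaller variants exist).

```python
# Hedged sketch: load an open Nemotron model via Hugging Face transformers.
# The model ID is an example -- check Hugging Face for current releases.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nvidia/Llama-3.1-Nemotron-70B-Instruct-HF"  # example ID, verify before use

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Summarize the key GTC Paris announcements in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```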
    The Industrial Cloud Goes Live
    AI isn’t just virtual. It’s powering physical systems, too, sparking a new industrial revolution.
    “We’re working on industrial AI with one company after another,” Huang said, describing work to build digital twins based on the NVIDIA Omniverse platform with companies across the continent.
    Huang explained that everything he showed during his keynote was “computer simulation, not animation” and that it looks beautiful because “it turns out the world is beautiful, and it turns out math is beautiful.”
    To further this work, Huang announced NVIDIA is launching the world’s first industrial AI cloud — to be built in Germany — to help Europe’s manufacturers simulate, automate and optimize at scale.
    “Soon, everything that moves will be robotic,” Huang said. “And the car is the next one.”
    NVIDIA DRIVE, NVIDIA’s full-stack AV platform, is now in production to accelerate the large-scale deployment of safe, intelligent transportation.
    And to show what’s coming next, Huang was joined on stage by Grek, a pint-sized robot, as he talked about how NVIDIA partnered with DeepMind and Disney to build Newton, the world’s most advanced physics training engine for robotics.
    The Next Wave
    The next wave of AI has begun — and it’s exponential, Huang explained.
    “We have physical robots, and we have information robots. We call them agents,” Huang said. “The technology necessary to teach a robot to manipulate, to simulate — and of course, the manifestation of an incredible robot — is now right in front of us.”
    This new era of AI is being driven by a surge in inference workloads. “The number of people using inference has gone from 8 million to 800 million — 100x in just a couple of years,” Huang said.
    To meet this demand, Huang emphasized the need for a new kind of computer: “We need a special computer designed for thinking, designed for reasoning. And that’s what Blackwell is — a thinking machine.”
    Huang stood alongside Grek as he explained how AI is driving advancements in robotics.
    These Blackwell-powered systems will live in a new class of data centers — AI factories — built to generate tokens, the raw material of modern intelligence.
    “These AI factories are going to generate tokens,” Huang said, turning to Grek with a smile. “And these tokens are going to become your food, little Grek.”
    With that, the keynote closed on a bold vision: a future powered by sovereign infrastructure, agentic AI, robotics — and exponential inference — all built in partnership with Europe.
    Watch the NVIDIA GTC Paris keynote from Huang at VivaTech and explore GTC Paris sessions.
  • Into the Omniverse: World Foundation Models Advance Autonomous Vehicle Simulation and Safety

    Editor’s note: This blog is a part of Into the Omniverse, a series focused on how developers, 3D practitioners and enterprises can transform their workflows using the latest advances in OpenUSD and NVIDIA Omniverse.
    Simulated driving environments enable engineers to safely and efficiently train, test and validate autonomous vehicles (AVs) across countless real-world and edge-case scenarios without the risks and costs of physical testing.
    These simulated environments can be created through neural reconstruction of real-world data from AV fleets or generated with world foundation models (WFMs) — neural networks that understand physics and real-world properties. WFMs can be used to generate synthetic datasets for enhanced AV simulation.
    To help physical AI developers build such simulated environments, NVIDIA unveiled major advances in WFMs at the GTC Paris and CVPR conferences earlier this month. These new capabilities enhance NVIDIA Cosmos — a platform of generative WFMs, advanced tokenizers, guardrails and accelerated data processing tools.
    Key innovations like Cosmos Predict-2, the Cosmos Transfer-1 NVIDIA preview NIM microservice and Cosmos Reason are improving how AV developers generate synthetic data, build realistic simulated environments and validate safety systems at unprecedented scale.
    Universal Scene Description (OpenUSD), a unified data framework and standard for physical AI applications, enables seamless integration and interoperability of simulation assets across the development pipeline. OpenUSD standardization plays a critical role in ensuring 3D pipelines are built to scale.
    NVIDIA Omniverse, a platform of application programming interfaces, software development kits and services for building OpenUSD-based physical AI applications, enables simulations from WFMs and neural reconstruction at world scale.
    Leading AV organizations — including Foretellix, Mcity, Oxa, Parallel Domain, Plus AI and Uber — are among the first to adopt Cosmos models.

    Foundations for Scalable, Realistic Simulation
    Cosmos Predict-2, NVIDIA’s latest WFM, generates high-quality synthetic data by predicting future world states from multimodal inputs like text, images and video. This capability is critical for creating temporally consistent, realistic scenarios that accelerate training and validation of AVs and robots.

    In addition, Cosmos Transfer, a control model that adds variations in weather, lighting and terrain to existing scenarios, will soon be available to 150,000 developers on CARLA, a leading open-source AV simulator. This greatly expands the broad AV developer community’s access to advanced AI-powered simulation tools.
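    CARLA’s existing Python API already lets developers vary conditions by hand, which gives a sense of the knobs Cosmos Transfer turns automatically. A minimal sketch, assuming a CARLA server running locally on the default port; the weather variants here are handpicked examples:

```python
# Sketch: sweep weather/lighting variants over one CARLA scenario.
# Assumes a CARLA server is already running on localhost:2000.
import carla

client = carla.Client("localhost", 2000)  # default CARLA host and port
client.set_timeout(10.0)
world = client.get_world()

# A few handcrafted variants; Cosmos Transfer generates this kind of
# variation (and far more photoreal versions) automatically.
variants = {
    "clear_noon": carla.WeatherParameters.ClearNoon,
    "wet_sunset": carla.WeatherParameters.WetSunset,
    "heavy_rain": carla.WeatherParameters(
        cloudiness=90.0,
        precipitation=80.0,
        precipitation_deposits=60.0,
        sun_altitude_angle=15.0,
    ),
}

for name, weather in variants.items():
    world.set_weather(weather)
    world.wait_for_tick()  # let the simulator apply the change
    print(f"Applied weather variant: {name}")
```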
    Developers can start integrating synthetic data into their own pipelines using the NVIDIA Physical AI Dataset. The latest release includes 40,000 clips generated using Cosmos.
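    The dataset is distributed through Hugging Face, so the standard hub client can fetch it. A short sketch follows; the repository ID is a placeholder, so check NVIDIA’s dataset listing for the real one:

```python
# Sketch: fetch clips from the NVIDIA Physical AI Dataset via the
# Hugging Face Hub client. The repo_id below is a placeholder.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="nvidia/PhysicalAI-Autonomous-Vehicles",  # illustrative ID, verify before use
    repo_type="dataset",
    allow_patterns=["*.mp4", "*.json"],  # clips plus metadata only
)
print("Dataset downloaded to", local_dir)
```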
    Building on these foundations, the Omniverse Blueprint for AV simulation provides a standardized, API-driven workflow for constructing rich digital twins, replaying real-world sensor data and generating new ground-truth data for closed-loop testing.
    The blueprint taps into OpenUSD’s layer-stacking and composition arcs, which enable developers to collaborate asynchronously and modify scenes nondestructively. This helps create modular, reusable scenario variants to efficiently generate different weather conditions, traffic patterns and edge cases.
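    In OpenUSD terms, that nondestructive workflow means routing scenario overrides into their own sublayer while the base scene stays untouched. A brief sketch using the pxr Python bindings; the file paths and attribute name are illustrative:

```python
# Sketch: nondestructive scenario variants via OpenUSD layer stacking.
# File paths and the attribute name are illustrative examples.
from pxr import Sdf, Usd

stage = Usd.Stage.Open("base_scenario.usd")        # shared base scene, never edited
variant = Sdf.Layer.CreateNew("rain_variant.usd")  # fresh override layer

# Earlier sublayers are stronger in USD, so prepend the variant
# to let its opinions override the base scenario.
stage.GetRootLayer().subLayerPaths.insert(0, variant.identifier)
stage.SetEditTarget(Usd.EditTarget(variant))

# Edits now land in rain_variant.usd only.
env = stage.GetPrimAtPath("/World/Environment")
env.CreateAttribute("rainIntensity", Sdf.ValueTypeNames.Float).Set(0.8)
variant.Save()
```

    Because each variant lives in its own layer, teams can swap weather, traffic or edge-case overrides in and out without ever touching the shared base scene.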
    Driving the Future of AV Safety
    To bolster the operational safety of AV systems, NVIDIA earlier this year introduced NVIDIA Halos — a comprehensive safety platform that integrates the company’s full automotive hardware and software stack with AI research focused on AV safety.
    The new Cosmos models — Cosmos Predict-2, Cosmos Transfer-1 NIM and Cosmos Reason — deliver further safety enhancements to the Halos platform, enabling developers to create diverse, controllable and realistic scenarios for training and validating AV systems.
    These models, trained on massive multimodal datasets including driving data, amplify the breadth and depth of simulation, allowing for robust scenario coverage — including rare and safety-critical events — while supporting post-training customization for specialized AV tasks.

    At CVPR, NVIDIA was recognized as an Autonomous Grand Challenge winner, highlighting its leadership in advancing end-to-end AV workflows. The challenge used OpenUSD’s robust metadata and interoperability to simulate sensor inputs and vehicle trajectories in semi-reactive environments, achieving state-of-the-art results in safety and compliance.
    Learn more about how developers are leveraging tools like CARLA, Cosmos and Omniverse to advance AV simulation in this livestream replay:

    Hear NVIDIA Director of Autonomous Vehicle Research Marco Pavone on the NVIDIA AI Podcast share how digital twins and high-fidelity simulation are improving vehicle testing, accelerating development and reducing real-world risks.
    Get Plugged Into the World of OpenUSD
    Learn more about what’s next for AV simulation with OpenUSD by watching the replay of NVIDIA founder and CEO Jensen Huang’s GTC Paris keynote.
    Looking for more live opportunities to learn about OpenUSD? Don’t miss sessions and labs happening at SIGGRAPH 2025, August 10–14.
    Discover why developers and 3D practitioners are using OpenUSD and learn how to optimize 3D workflows with the self-paced “Learn OpenUSD” curriculum for 3D developers and practitioners, available for free through the NVIDIA Deep Learning Institute.
    Explore the Alliance for OpenUSD forum and the AOUSD website.
    Stay up to date by subscribing to NVIDIA Omniverse news, joining the community and following NVIDIA Omniverse on Instagram, LinkedIn, Medium and X.
  • BOUNCING FROM RUBBER DUCKIES AND FLYING SHEEP TO CLONES FOR THE BOYS SEASON 4

    By TREVOR HOGG
    Images courtesy of Prime Video.

    For those seeking an alternative to the MCU, Prime Video has two offerings of the live-action and animated variety that take the superhero genre into R-rated territory where the hands of the god-like figures get dirty, bloodied and severed. “The Boys is about the intersection of celebrity and politics using superheroes,” states Stephan Fleet, VFX Supervisor on The Boys. “Sometimes I see the news and I don’t even know we can write to catch up to it! But we try. Invincible is an intense look at an alternate DC Universe that has more grit to the superhero side of it all. On one hand, I was jealous watching Season 1 of Invincible because in animation you can do things that you can’t do in real life on a budget.” Season 4 does not tone down the blood, gore and body count. Fleet notes, “The writers almost have this dialogue with us. Sometimes, they’ll write in the script, ‘And Fleet will come up with a cool visual effect for how to kill this person.’ Or, ‘Chhiu, our fight coordinator, will make an awesome fight.’ It is a frequent topic of conversation. We’re constantly trying to be inventive and create new ways to kill people!”

    When Splinter splits in two, the cloning effect was inspired by cellular mitosis.

    “The writers almost have this dialogue with us. Sometimes, they’ll write in the script, ‘And Fleet will come up with a cool visual effect for how to kill this person.’ Or, ‘Chhiu, our fight coordinator, will make an awesome fight.’ It is a frequent topic of conversation. We’re constantly trying to be inventive and create new ways to kill people!”
    —Stephan Fleet, VFX Supervisor

    A total of 1,600 visual effects shots were created for the eight episodes by ILM, Pixomondo, MPC Toronto, Spin VFX, DNEG, Untold Studios, Luma Pictures and Rocket Science VFX. Previs was a critical part of the process. “We have John Griffith, who owns a small company called CNCPT out of Texas, and he does wonderful Unreal Engine level previs,” Fleet remarks. “On set, we have a cartoon of what is going to be done, and you’ll be amazed, specifically for action and heavy visual effects stuff, how close those shots are to the previs when we finish.” Founding Director of Federal Bureau of Superhuman Affairs, Victoria Neuman, literally gets ripped in half by two tendrils coming out of Compound V-enhanced Billy Butcher, the leader of superhero resistance group The Boys. “The word that we like to use on this show is ‘grounded,’ and I like to say ‘grounded’ with an asterisk in this day and age because we’re grounded until we get to killing people in the craziest ways. In this case, having someone floating in the air and being ripped in half by two tendrils was all CG.”

    Multiple plates were shot to enable Simon Pegg to phase through the actor lying in a hospital bed.

    Testing can get rather elaborate. “For that end scene with Butcher’s tendrils, the room was two stories, and we were able to put the camera up high along with a bunch of blood cannons,” Fleet recalls. “When the body rips in half and explodes, there is a practical component. We rained down a bunch of real blood and guts right in front of Huey. It’s a known joke that we like to douse Jack Quaid with blood as much as possible! In this case, the special effects team led by Hudson Kenny needed to test it the day before, and I said, ‘I’ll be the guinea pig for the test.’ They covered the whole place with plastic like it was a Dexter kill room because you don’t want to destroy the set. I’m standing there in a white hazmat suit with goggles on, covered from head to toe in plastic and waiting as they’re tweaking all of these things. It sounds like World War II going on. They’re on walkie talkies to each other, and then all of a sudden, it’s ‘Five, four, three, two, one…’ And I get exploded with blood. I wanted to see what it was like, and it’s intense.”

    “On set, we have a cartoon of what is going to be done, and you’ll be amazed, specifically for action and heavy visual effects stuff, how close those shots are to the previs when we finish.”
    —Stephan Fleet, VFX Supervisor

    The Deep has a love affair with an octopus called Ambrosius, voiced by Tilda Swinton. “It’s implied bestiality!” Fleet laughs. “I would call it more of a romance. What was fun from my perspective is that I knew what the look was going to be, so then it’s about putting in the details and the animation. One of the instincts that you always have when you’re making a sea creature that talks to a human, you tend to want to give it human gestures and eyebrows. Erik Kripke said, ‘No. We have to find things that an octopus could do that conveys the same emotion.’ That’s when ideas came in, such as putting a little The Deep toy inside the water tank. When Ambrosius is trying to have an intimate moment or connect with him, she can wrap a tentacle around that. My favorite experience doing Ambrosius was when The Deep is reading poetry to her on a bed. CG creatures touching humans is one of the more complicated things to do and make look real. Ambrosius’ tentacles reach for his arm, and it becomes an intimate moment. More than touching the skin, displacing the bedsheet as Ambrosius moved ended up becoming a lot of CG, and we had to go back and forth a few times to get that looking right; that turned out to be tricky.”

    A building is replaced by a massive crowd attending a rally being held by Homelander.

    In a twisted form of sexual foreplay, Sister Sage has The Deep perform a transorbital lobotomy on her. “Thank you, Amazon for selling lobotomy tools as novelty items!” Fleet chuckles. “We filmed it with a lobotomy tool on set. There is a lot of safety involved in doing something like that. Obviously, you don’t want to put any performer in any situation where they come close to putting anything real near their eye. We created this half lobotomy tool and did this complicated split screen with the lobotomy tool on a teeter totter. The Deep was in one shot and Sister Sage reacted in the other shot. To marry the two ended up being a lot of CG work. Then there are these close-ups which are full CG. I always keep a dummy head that is painted gray that I use all of the time for reference. In macrophotography I filmed this lobotomy tool going right into the eye area. I did that because the tool is chrome, so it’s reflective and has ridges. It has an interesting reflective property. I was able to see how and what part of the human eye reflects onto the tool. A lot of that shot became about realistic reflections and lighting on the tool. Then heavy CG for displacing the eye and pushing the lobotomy tool into it. That was one of the more complicated sequences that we had to achieve.”

    In order to create an intimate moment between Ambrosius and The Deep, a toy version of the superhero was placed inside of the water tank that she could wrap a tentacle around.

    “The word that we like to use on this show is ‘grounded,’ and I like to say ‘grounded’ with an asterisk in this day and age because we’re grounded until we get to killing people in the craziest ways. In this case, having someone floating in the air and being ripped in half by two tendrils was all CG.”
    —Stephan Fleet, VFX Supervisor

    Sheep and chickens embark on a violent rampage courtesy of Compound V, with the latter piercing the chest of a bodyguard belonging to Victoria Neuman. “Weirdly, that was one of our more traditional shots,” Fleet states. “What is fun about that one is I asked for real chickens as reference. The chicken flying through his chest is real. It’s our chicken wrangler in a green suit gently tossing a chicken. We blended two real plates together with some CG in the middle.” A connection was made with a sci-fi classic. “The sheep kill this bull, and we shot it in this narrow corridor of fencing. When they run, I always equated it to the Trench Run in Star Wars and looked at the sheep as TIE fighters or X-wings coming at them.” The scene was one of the scarier moments for the visual effects team. Fleet explains, “When I read the script, I thought this could be the moment where we jump the shark. For the shots where the sheep are still and scream to the camera, Untold Studios did a bunch of R&D and came up with baboon teeth. I tried to keep anything real as much as possible, but, obviously, when sheep are flying, they have to be CG. I call it the Battlestar Galactica theory, where I like to shake the camera, overshoot shots and make it sloppy when they’re in the air so you can add motion blur. Comedy also helps sell visual effects.”

    The sheep injected with Compound V develop the ability to fly and were shot in an imperfect manner to help ground the scenes.

    Once injected with Compound V, Hugh Campbell Sr. develops the ability to phase through objects, including human beings. “We called it the Bro-nut because his name in the script is Wall Street Bro,” Fleet notes. “That was a complicated motion control shot, repeating the move over and over again. We had to shoot multiple plates of Simon Pegg and the guy in the bed. Special effects and prosthetics created a dummy guy with a hole in his chest with practical blood dripping down. It was meshing it together and getting the timing right in post. On top of that, there was the CG blood immediately around Simon Pegg.” The phasing effect had to avoid appearing as a dissolve. “I had this idea of doing high-frequency vibration on the X axis loosely based on how The Flash vibrates through walls. You want everything to have a loose motivation that then helps trigger the visuals. We tried not to overcomplicate that because, ultimately, you want something like that to be quick. If you spend too much time on phasing, it can look cheesy. In our case, it was a lot of false walls. Simon Pegg is running into a greenscreen hole which we plug in with a wall or coming out of one. I went off the actor’s action, and we added a light opacity mix with some X-axis shake.”

    Providing a different twist to the fights was the replacement of spurting blood with photoreal rubber duckies during a drug-induced hallucination.

    Homelander breaks a mirror, which emphasizes his multiple personality disorder. “The original plan was that special effects was going to pre-break a mirror, and we were going to shoot Anthony Starr moving his head doing all of the performances in the different parts of the mirror,” Fleet reveals. “This was all based on a photo that my ex-brother-in-law sent me. He was walking down a street in Glendale, California, came across a broken mirror that someone had thrown out, and took a photo of himself where he had five heads in the mirror. We get there on the day, and I’m realizing that this is really complicated. Anthony has to do these five different performances, and we have to deal with infinite mirrors. At the last minute, I said, ‘We have to do this on a clean mirror.’ We did it on a clear mirror and gave Anthony different eyelines. The mirror break was all done in post, and we were able to cheat his head slightly and art-direct where the break crosses his chin. Editorial was able to do split screens for the timing of the dialogue.”

    “For the shots where the sheep are still and scream to the camera, Untold Studios did a bunch of R&D and came up with baboon teeth. I tried to keep anything real as much as possible, but, obviously, when sheep are flying, they have to be CG. I call it the Battlestar Galactica theory, where I like to shake the camera, overshoot shots and make it sloppy when they’re in the air so you can add motion blur. Comedy also helps sell visual effects.”
    —Stephan Fleet, VFX Supervisor

    Initially, the plan was to use a practical mirror, but creating a digital version proved to be the more effective solution.

    A different spin on the bloodbath occurs during a fight when a drugged Frenchie hallucinates as Kimiko Miyashiro goes on a killing spree. “We went back and forth with a lot of different concepts for what this hallucination would be,” Fleet remarks. “When we filmed it, we landed on Frenchie having a synesthesia moment where he’s seeing a lot of abstract colors flying in the air. We started getting into that in post and it wasn’t working. We went back to the rubber duckies, which goes back to the story of him in the bathtub. What’s in the bathtub? Rubber duckies, bubbles and water. There was a lot of physics and logic required to figure out how these rubber duckies could float out of someone’s neck. We decided on bubbles when Kimiko hits people’s heads. At one point, we had water when she got shot, but it wasn’t working, so we killed it. We probably did about 100 different versions. We got really detailed with our rubber duckie modeling because we didn’t want it to look cartoony. That took a long time.”

    Ambrosius, voiced by Tilda Swinton, gets a lot more screentime in Season 4.

    The moment when Splinter splits in two was achieved heavily in CG. “Erik threw out the words ‘cellular mitosis’ early on as something he wanted to use,” Fleet states. “We shot Rob Benedict on a greenscreen doing all of the different performances for the clones that pop out. It was a crazy amount of CG work with Houdini and particle and skin effects. We previs’d the sequence so we had specific actions. One clone comes out to the right and the other pulls backwards.” What tends to go unnoticed by many is Splinter’s clones setting up for a press conference being held by Firecracker. “It’s funny how no one brings up the 22-hour motion control shot that we had to do with Splinter on the stage, which was the most complicated shot!” Fleet observes. “We have this sweeping long shot that brings you into the room and follows Splinter as he carries a container to the stage and hands it off to a clone, and then you reveal five more of them interweaving each other and interacting with all of these objects. It’s like a minute-long dance. First off, you have to choreograph it. We previs’d it, but then you need to get people to do it. We hired dancers and put different colored armbands on them. The camera is like another performer, and a metronome is going, which enables you to find a pace. That took about eight hours of rehearsal. Then Rob has to watch each one of their performances and mimic it to the beat. When he is handing off a box of cables, it’s to a double who is going to have to be erased and be him on the other side. They have to be almost perfect in their timing and lineup in order to take it over in visual effects and make it work.”
    #bouncing #rubber #duckies #flying #sheep
    BOUNCING FROM RUBBER DUCKIES AND FLYING SHEEP TO CLONES FOR THE BOYS SEASON 4
    By TREVOR HOGG Images courtesy of Prime Video. For those seeking an alternative to the MCU, Prime Video has two offerings of the live-action and animated variety that take the superhero genre into R-rated territory where the hands of the god-like figures get dirty, bloodied and severed. “The Boys is about the intersection of celebrity and politics using superheroes,” states Stephan Fleet, VFX Supervisor on The Boys. “Sometimes I see the news and I don’t even know we can write to catch up to it! But we try. Invincible is an intense look at an alternate DC Universe that has more grit to the superhero side of it all. On one hand, I was jealous watching Season 1 of Invincible because in animation you can do things that you can’t do in real life on a budget.” Season 4 does not tone down the blood, gore and body count. Fleet notes, “The writers almost have this dialogue with us. Sometimes, they’ll write in the script, ‘And Fleet will come up with a cool visual effect for how to kill this person.’ Or, ‘Chhiu, our fight coordinator, will make an awesome fight.’ It is a frequent topic of conversation. We’re constantly trying to be inventive and create new ways to kill people!” When Splintersplits in two, the cloning effect was inspired by cellular mitosis. “The writers almost have this dialogue with us. Sometimes, they’ll write in the script, ‘And Fleet will come up with a cool visual effect for how to kill this person.’ Or, ‘Chhiu, our fight coordinator, will make an awesome fight.’ It is a frequent topic of conversation. We’re constantly trying to be inventive and create new ways to kill people!” —Stephan Fleet, VFX Supervisor A total of 1,600 visual effects shots were created for the eight episodes by ILM, Pixomondo, MPC Toronto, Spin VFX, DNEG, Untold Studios, Luma Pictures and Rocket Science VFX. Previs was a critical part of the process. “We have John Griffith, who owns a small company called CNCPT out of Texas, and he does wonderful Unreal Engine level previs,” Fleet remarks. “On set, we have a cartoon of what is going to be done, and you’ll be amazed, specifically for action and heavy visual effects stuff, how close those shots are to the previs when we finish.” Founding Director of Federal Bureau of Superhuman Affairs, Victoria Neuman, literally gets ripped in half by two tendrils coming out of Compound V-enhanced Billy Butcher, the leader of superhero resistance group The Boys. “The word that we like to use on this show is ‘grounded,’ and I like to say ‘grounded’ with an asterisk in this day and age because we’re grounded until we get to killing people in the craziest ways. In this case, having someone floating in the air and being ripped in half by two tendrils was all CG.” Multiple plates were shot to enable Simon Pegg to phase through the actor laying in a hospital bed. Testing can get rather elaborate. “For that end scene with Butcher’s tendrils, the room was two stories, and we were able to put the camera up high along with a bunch of blood cannons,” Fleet recalls. “When the body rips in half and explodes, there is a practical component. We rained down a bunch of real blood and guts right in front of Huey. It’s a known joke that we like to douse Jack Quaid with blood as much as possible! In this case, the special effects team led by Hudson Kenny needed to test it the day before, and I said, “I’ll be the guinea pig for the test.’ They covered the whole place with plastic like it was a Dexter kill room because you don’t want to destroy the set. 
I’m standing there in a white hazmat suit with goggles on, covered from head to toe in plastic and waiting as they’re tweaking all of these things. It sounds like World War II going on. They’re on walkie talkies to each other, and then all of a sudden, it’s ‘Five, four, three, two, one…’ And I get exploded with blood. I wanted to see what it was like, and it’s intense.”

    “On set, we have a cartoon of what is going to be done, and you’ll be amazed, specifically for action and heavy visual effects stuff, how close those shots are to the previs when we finish.” —Stephan Fleet, VFX Supervisor

    The Deep has a love affair with an octopus called Ambrosius, voiced by Tilda Swinton. “It’s implied bestiality!” Fleet laughs. “I would call it more of a romance. What was fun from my perspective is that I knew what the look was going to be [from Season 3], so then it’s about putting in the details and the animation. One of the instincts that you always have when you’re making a sea creature that talks to a human [is] you tend to want to give it human gestures and eyebrows. Erik Kripke [Creator, Executive Producer, Showrunner, Director, Writer] said, ‘No. We have to find things that an octopus could do that conveys the same emotion.’ That’s when ideas came in, such as putting a little The Deep toy inside the water tank. When Ambrosius is trying to have an intimate moment or connect with him, she can wrap a tentacle around that. My favorite experience doing Ambrosius was when The Deep is reading poetry to her on a bed. CG creatures touching humans is one of the more complicated things to do and make look real. Ambrosius’ tentacles reach for his arm, and it becomes an intimate moment. More than touching the skin, displacing the bedsheet as Ambrosius moved ended up becoming a lot of CG, and we had to go back and forth a few times to get that looking right; that turned out to be tricky.”

    A building is replaced by a massive crowd attending a rally being held by Homelander.

    In a twisted form of sexual foreplay, Sister Sage has The Deep perform a transorbital lobotomy on her. “Thank you, Amazon for selling lobotomy tools as novelty items!” Fleet chuckles. “We filmed it with a lobotomy tool on set. There is a lot of safety involved in doing something like that. Obviously, you don’t want to put any performer in any situation where they come close to putting anything real near their eye. We created this half lobotomy tool and did this complicated split screen with the lobotomy tool on a teeter totter. The Deep was [acting in a certain way] in one shot and Sister Sage reacted in the other shot. To marry the two ended up being a lot of CG work. Then there are these close-ups which are full CG. I always keep a dummy head that is painted gray that I use all of the time for reference. In macrophotography, I filmed this lobotomy tool going right into the eye area. I did that because the tool is chrome, so it’s reflective and has ridges. It has an interesting reflective property. I was able to see how and what part of the human eye reflects onto the tool. A lot of that shot became about realistic reflections and lighting on the tool. Then heavy CG for displacing the eye and pushing the lobotomy tool into it. That was one of the more complicated sequences that we had to achieve.”

    In order to create an intimate moment between Ambrosius and The Deep, a toy version of the superhero was placed inside of the water tank that she could wrap a tentacle around.
“The word that we like to use on this show is ‘grounded,’ and I like to say ‘grounded’ with an asterisk in this day and age because we’re grounded until we get to killing people in the craziest ways. In this case, having someone floating in the air and being ripped in half by two tendrils was all CG.” —Stephan Fleet, VFX Supervisor

    Sheep and chickens embark on a violent rampage courtesy of Compound V, with the latter piercing the chest of a bodyguard belonging to Victoria Neuman. “Weirdly, that was one of our more traditional shots,” Fleet states. “What is fun about that one is I asked for real chickens as reference. The chicken flying through his chest is real. It’s our chicken wrangler in a green suit gently tossing a chicken. We blended two real plates together with some CG in the middle.” A connection was made with a sci-fi classic. “The sheep kill this bull, and we shot it in this narrow corridor of fencing. When they run, I always equated it to the Trench Run in Star Wars and looked at the sheep as TIE fighters or X-wings coming at them.”

    The scene was one of the scarier moments for the visual effects team. Fleet explains, “When I read the script, I thought this could be the moment where we jump the shark. For the shots where the sheep are still and scream to the camera, Untold Studios did a bunch of R&D and came up with baboon teeth. I tried to keep anything real as much as possible, but, obviously, when sheep are flying, they have to be CG. I call it the Battlestar Galactica theory, where I like to shake the camera, overshoot shots and make it sloppy when they’re in the air so you can add motion blur. Comedy also helps sell visual effects.”

    The sheep injected with Compound V develop the ability to fly and were shot in an imperfect manner to help ground the scenes.

    Once injected with Compound V, Hugh Campbell Sr. (Simon Pegg) develops the ability to phase through objects, including human beings. “We called it the Bro-nut because his name in the script is Wall Street Bro,” Fleet notes. “That was a complicated motion control shot, repeating the move over and over again. We had to shoot multiple plates of Simon Pegg and the guy in the bed. Special effects and prosthetics created a dummy guy with a hole in his chest with practical blood dripping down. It was meshing it together and getting the timing right in post. On top of that, there was the CG blood immediately around Simon Pegg.”

    The phasing effect had to avoid appearing as a dissolve. “I had this idea of doing high-frequency vibration on the X axis loosely based on how The Flash vibrates through walls. You want everything to have a loose motivation that then helps trigger the visuals. We tried not to overcomplicate that because, ultimately, you want something like that to be quick. If you spend too much time on phasing, it can look cheesy. In our case, it was a lot of false walls. Simon Pegg is running into a greenscreen hole which we plug in with a wall or coming out of one. I went off the actor’s action, and we added a light opacity mix with some X-axis shake.”

    Providing a different twist to the fights was the replacement of spurting blood with photoreal rubber duckies during a drug-induced hallucination.

    Homelander (Anthony Starr) breaks a mirror, which emphasizes his multiple personality disorder. “The original plan was that special effects was going to pre-break a mirror, and we were going to shoot Anthony Starr moving his head doing all of the performances in the different parts of the mirror,” Fleet reveals.
“This was all based on a photo that my ex-brother-in-law sent me. He was walking down a street in Glendale, California, came across a broken mirror that someone had thrown out, and took a photo of himself where he had five heads in the mirror. We get there on the day, and I’m realizing that this is really complicated. Anthony has to do these five different performances, and we have to deal with infinite mirrors. At the last minute, I said, ‘We have to do this on a clean mirror.’ We did it on a clean mirror and gave Anthony different eyelines. The mirror break was all done in post, and we were able to cheat his head slightly and art-direct where the break crosses his chin. Editorial was able to do split screens for the timing of the dialogue.”

    “For the shots where the sheep are still and scream to the camera, Untold Studios did a bunch of R&D and came up with baboon teeth. I tried to keep anything real as much as possible, but, obviously, when sheep are flying, they have to be CG. I call it the Battlestar Galactica theory, where I like to shake the camera, overshoot shots and make it sloppy when they’re in the air so you can add motion blur. Comedy also helps sell visual effects.” —Stephan Fleet, VFX Supervisor

    Initially, the plan was to use a practical mirror, but creating a digital version proved to be the more effective solution.

    A different spin on the bloodbath occurs during a fight when a drugged Frenchie (Tomer Capone) hallucinates as Kimiko Miyashiro (Karen Fukuhara) goes on a killing spree. “We went back and forth with a lot of different concepts for what this hallucination would be,” Fleet remarks. “When we filmed it, we landed on Frenchie having a synesthesia moment where he’s seeing a lot of abstract colors flying in the air. We started getting into that in post and it wasn’t working. We went back to the rubber duckies, which goes back to the story of him in the bathtub. What’s in the bathtub? Rubber duckies, bubbles and water. There was a lot of physics and logic required to figure out how these rubber duckies could float out of someone’s neck. We decided on bubbles when Kimiko hits people’s heads. At one point, we had water when she got shot, but it wasn’t working, so we killed it. We probably did about 100 different versions. We got really detailed with our rubber duckie modeling because we didn’t want it to look cartoony. That took a long time.”

    Ambrosius, voiced by Tilda Swinton, gets a lot more screentime in Season 4.

    Splinter (Rob Benedict) splitting in two was achieved heavily in CG. “Erik threw out the words ‘cellular mitosis’ early on as something he wanted to use,” Fleet states. “We shot Rob Benedict on a greenscreen doing all of the different performances for the clones that pop out. It was a crazy amount of CG work with Houdini and particle and skin effects. We previs’d the sequence so we had specific actions. One clone comes out to the right and the other pulls backwards.”

    What tends to go unnoticed by many is Splinter’s clones setting up for a press conference being held by Firecracker (Valorie Curry). “It’s funny how no one brings up the 22-hour motion control shot that we had to do with Splinter on the stage, which was the most complicated shot!” Fleet observes. “We have this sweeping long shot that brings you into the room and follows Splinter as he carries a container to the stage and hands it off to a clone, and then you reveal five more of them interweaving each other and interacting with all of these objects. It’s like a minute-long dance. First off, you have to choreograph it. We previs’d it, but then you need to get people to do it. We hired dancers and put different colored armbands on them. The camera is like another performer, and a metronome is going, which enables you to find a pace. That took about eight hours of rehearsal. Then Rob has to watch each one of their performances and mimic it to the beat. When he is handing off a box of cables, it’s to a double who is going to have to be erased and be him on the other side. They have to be almost perfect in their timing and lineup in order to take it over in visual effects and make it work.”

    #bouncing #rubber #duckies #flying #sheep
    WWW.VFXVOICE.COM
    BOUNCING FROM RUBBER DUCKIES AND FLYING SHEEP TO CLONES FOR THE BOYS SEASON 4
  • At last, physicists at the University of Liège have cracked the code: water landscapes created with 3D printing! Because why enjoy a simple drink when you can have a miniature ocean on your table? Forget about the days of just swimming in water; now we can marvel at the aesthetic pleasure of tiny, printed spines dancing on the surface. Who knew physics could be so… artistic? Next up, they'll probably figure out how to print clouds into our living rooms. Get ready for some very confused houseplants.

    #3DPrinting #WaterLandscapes #PhysicsArt #UniversityOfLiege #InnovativeScience
    Physicists at the University of Liège create liquid landscapes using 3D printing
    What if we could turn water into a landscape? Physicists at the University of Liège, in Belgium, in collaboration with Brown University (USA), have achieved it. Using millimetric 3D-printed spines, they managed to manipulate the surface of the
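    A quick sanity check on why millimetric spines are the right scale: surface tension can only sculpt a water surface over distances comparable to the capillary length, which for ordinary water is a couple of millimeters. A back-of-the-envelope sketch in Python, assuming room-temperature water (textbook values, not numbers taken from the paper itself):

    import math

    gamma = 0.072  # surface tension of water, N/m (at ~20 C)
    rho = 1000.0   # density of water, kg/m^3
    g = 9.81       # gravitational acceleration, m/s^2

    # The capillary length sets the scale below which surface tension
    # beats gravity; features much larger than this get flattened out.
    capillary_length = math.sqrt(gamma / (rho * g))
    print(f"capillary length of water: {capillary_length * 1000:.1f} mm")  # ~2.7 mm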
  • Exciting news for all Blender enthusiasts! Have you heard about ScatterFlow? This incredible add-on brings physics-based scattering to your projects, allowing you to dress environments quickly and effortlessly! Imagine spawning 3D assets that settle beautifully under the influence of gravity—it's like magic!

    No more tedious manual placements; now you can focus on unleashing your creativity and bringing your visions to life! Whether you're a beginner or a seasoned pro, ScatterFlow is here to elevate your Blender experience. Let's create stunning worlds together!

    #Blender #ScatterFlow #3DArt #CreativeCommunity #Inspiration
    ScatterFlow adds physics-based scattering to Blender
    Inexpensive add-on lets you dress environments quickly in Blender by spawning in 3D assets and letting them settle naturally under gravity.
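    For the curious, the core idea (spawn copies of an asset, give them rigid body physics, run the simulation so they settle) can be sketched in a few lines of Blender's bundled Python API, bpy. This is a minimal illustration of the general technique, not ScatterFlow's actual code, which is not public; the object names "Rock" and "Ground" are assumptions for the example:

    import random
    import bpy

    asset = bpy.data.objects["Rock"]     # assumed: a mesh asset to scatter
    ground = bpy.data.objects["Ground"]  # assumed: a ground plane to land on

    # Make the ground a passive rigid body so falling copies collide with it.
    bpy.context.view_layer.objects.active = ground
    bpy.ops.rigidbody.object_add(type='PASSIVE')

    # Spawn copies above the ground with random positions and rotations.
    for _ in range(50):
        copy = asset.copy()
        copy.data = asset.data  # share mesh data (cheap instancing)
        copy.location = (random.uniform(-5, 5), random.uniform(-5, 5), random.uniform(2, 6))
        copy.rotation_euler = tuple(random.uniform(0, 6.283) for _ in range(3))
        bpy.context.collection.objects.link(copy)
        bpy.context.view_layer.objects.active = copy
        bpy.ops.rigidbody.object_add(type='ACTIVE')

    # Step the simulation forward so everything falls and settles under gravity.
    scene = bpy.context.scene
    for frame in range(scene.frame_start, scene.frame_start + 120):
        scene.frame_set(frame)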
  • So, there’s this thing called the Franck-Hertz experiment. It’s one of those physics experiments that people rave about, but honestly, I don’t get why. It was done way back in 1914, and it’s supposed to explain how energy comes in these “packets” called “quanta.” Sounds fancy, but like, does it really change anything?

    They say this experiment marked the start of quantum physics, which I guess is important for some. It’s all about those little particles and how they behave. If you’re into that sort of thing, you might want to look into doing a DIY version of the Franck-Hertz experiment. Apparently, it’s not too hard and you can even do it at home. But let’s be real, who has the energy for that?

    You just set up a tube with some mercury vapor and run some voltage through it. Then you measure the current and see how it changes as you adjust the voltage. It’s all about those energy levels and how electrons bounce around. But, like, I don’t know how many people are actually excited to do this. Maybe if you’re a physics enthusiast, it’ll be fun for you.
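    (If the "measure the current and see how it changes" part sounds abstract, here's a toy model of the curve you'd expect. Hedged heavily: real tubes are messier, but the signature is a current dip roughly every 4.9 V, mercury's first excitation energy.)

    import numpy as np

    E_HG = 4.9  # mercury's first excitation energy, in eV

    def toy_current(voltage_v):
        # After each inelastic collision with a mercury atom, an electron
        # restarts from roughly zero energy, so only the "leftover" energy
        # gained since its last collision pushes it to the collector...
        leftover = voltage_v % E_HG
        n_collisions = int(voltage_v // E_HG)
        # ...and each collision also scatters part of the beam away.
        return leftover * 0.9 ** n_collisions

    volts = np.linspace(0.0, 30.0, 301)
    current = np.array([toy_current(v) for v in volts])

    # The current should collapse just past every multiple of 4.9 V:
    for k in range(1, 4):
        print(f"expect a dip just past {k * E_HG:.1f} V")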

    But if you’re like me and prefer to just scroll through your phone or binge-watch a show, then this sounds like a lot of work for not much payoff. I mean, who really wants to dive into the intricacies of quantum physics when there are so many other things to do—like anything else?

    So, if you’re curious about the Franck-Hertz experiment and want to try it yourself, go ahead. Just know that you might end up feeling a bit underwhelmed. Science can be cool, but sometimes it feels like a chore, especially when it’s all about tiny particles that you can’t even see.

    Anyway, that’s my take on it. If you’re still interested in quantum physics after this, good for you. I’ll just be over here, probably napping or scrolling through social media.

    #FranckHertz #QuantumPhysics #DIYScience #PhysicsExperiment #Boredom
    A DIY Version of the Franck-Hertz Experiment
    The Franck–Hertz experiment was a pioneering physics observation announced in 1914 which explained that energy came in “packets” which we call “quanta”, marking the beginning of quantum physics. Recently, [Markus …
  • Delightfully irreverent Underdogs isn’t your parents’ nature docuseries

    show some love for the losers


    Ryan Reynolds narrates NatGeo's new series highlighting nature's much less cool and majestic creatures

    Jennifer Ouellette – Jun 15, 2025 3:11 pm

    The indestructible honey badger is just one of nature's "benchwarmers" featured in Underdogs. Credit: National Geographic/Doug Parker


    Narrator Ryan Reynolds celebrates nature's outcasts in the new NatGeo docuseries Underdogs.

    Most of us have seen a nature documentary or two (or three) at some point in our lives, so it's a familiar format: sweeping majestic footage of impressively regal animals accompanied by reverently high-toned narration (preferably with a tony British accent). Underdogs, a new docuseries from National Geographic, takes a decidedly different and unconventional approach. Narrated with hilarious irreverence by Ryan Reynolds, the five-part series highlights nature's less cool and majestic creatures: the outcasts and benchwarmers, more noteworthy for their "unconventional hygiene choices" and "unsavory courtship rituals." It's like The Suicide Squad or Thunderbolts*, except these creatures actually exist.
    Per the official premise, "Underdogs features a range of never-before-filmed scenes, including the first time a film crew has ever entered a special cave in New Zealand—a huge cavern that glows brighter than a bachelor pad under a black light thanks to the glowing butts of millions of mucus-coated grubs. All over the world, overlooked superstars like this are out there 24/7, giving it maximum effort and keeping the natural world in working order for all those showboating polar bears, sharks and gorillas." It's rated PG-13 thanks to the odd bit of scatological humor and shots of Nature Sexy Time.
    Each of the five episodes is built around a specific genre. "Superheroes" highlights the surprising superpowers of the honey badger, pistol shrimp, and the invisible glass frog, among others, augmented with comic book graphics; "Sexy Beasts" focuses on bizarre mating habits and follows the format of a romantic advice column; "Terrible Parents" highlights nature's worst practices, following the outline of a parenting guide; "Total Grossout" is exactly what it sounds like; and "The Unusual Suspects" is a heist tale, documenting the supposed efforts of a macaque to put together the ultimate team of masters of deception and disguise (an inside man, a decoy, a fall guy, etc.). Green Day even wrote and recorded a special theme song for the opening credits.
    Co-creators Mark Linfield and Vanessa Berlowitz of Wildstar Films are longtime producers of award-winning wildlife films, most notably Frozen Planet, Planet Earth and David Attenborough's Life of Mammals—you know, the kind of prestige nature documentaries that have become a mainstay for National Geographic and the BBC, among others. They're justly proud of that work, but this time around the duo wanted to try something different.

    [Image gallery]
    Madagascar's aye-aye: "as if fear and panic had a baby and rolled it in dog hair" Credit: National Geographic/Eleanor Paish
    An emerald jewel wasp emerges from a cockroach. Credit: National Geographic/Simon De Glanville
    A pack of African hunting dogs is no match for the honey badger's thick hide. Credit: National Geographic/Tom Walker
    A fireworm is hit by a cavitation bubble shot from the claw of a pistol shrimp defending its home. Credit: National Geographic/Hugh Miller
    As it grows and molts, the mad hatterpillar stacks old head casings on top of its head. Scientists think it is used as a decoy against would-be predators and parasites, and when needed, it can also be used as a weapon. Credit: National Geographic/Katherine Hannaford
    Worst parents ever? A young barnacle goose chick prepares to make the 800-foot jump from its nest to the ground. Credit: National Geographic
    An adult pearlfish reverses into a sea cucumber's butt to hide. Credit: National Geographic
    A vulture sticks its head inside an elephant carcass to eat. Credit: National Geographic
    A manatee releases flatulence while swimming to lose the buoyant build-up of gas inside its stomach and descend down the water column. Credit: National Geographic/Karl Davies

    "There is a sense after awhile that you're playing the same animals to the same people, and the shows are starting to look the same and so is your audience," Linfield told Ars. "We thought, okay, how can we do something absolutely the opposite? We've gone through our careers collecting stories of these weird and crazy creatures that don't end up in the script because they're not big or sexy and they live under a rock. But they often have the best life histories and the craziest superpowers."
    Case in point: the velvet worm featured in the "Superheroes" episode, which creeps up on unsuspecting prey before squirting disgusting slime all over their food. (It's a handy defense mechanism, too, against predators like the wolf spider.) Once Linfield and Berlowitz decided to focus on nature's underdogs and to take a more humorous approach, Ryan Reynolds became their top choice for a narrator—the anti-Richard Attenborough. As luck would have it, the pair shared an agent with the mega-star. So even though they thought there was no way Reynolds would agree to the project, they put together a sizzle reel, complete with a "fake Canadian Ryan Reynolds sound-alike" doing the narration. Reynolds was on set when he received the reel, and loved it so much he recorded his own narration for the footage and sent it back.
    "From that moment he was in," said Linfield, and Wildstar Films worked closely with Reynolds and his company to develop the final series. "We've never worked that way on a series before, a joint collaboration from day one," Berlowitz admitted. But it worked: the end result strikes the perfect balance between scientific revelation and accurate natural history, and an edgy comic tone.
    That tone is quintessential Reynolds, and while he did mostly follow the script (which his team helped write), Linfield and Berlowitz admit there was also a fair amount of improvisation—not all of it PG-13. "What we hadn't appreciated is that he's an incredible improv performer," said Berlowitz. "He can't help himself. He gets into character and starts riffing off [the footage]. There are some takes that we definitely couldn't use, that potentially would fit a slightly more Hulu audience." Some of the ad-libs made it into the final episodes, however—like Reynolds describing an aye-aye as "if fear and panic had a baby and rolled it in dog hair"—even though it meant going back and doing a bit of recutting to get the new lines to fit.

    [Image gallery]
    Cinematographer Tom Beldam films a long-tailed macaque who stole his smartphone minutes later. Credit: National Geographic/Laura Pennafort
    The macaque agrees to trade the stolen phone for a piece of food. Credit: National Geographic
    A family of tortoise beetles defend themselves from a carnivorous ant by wafting baby poop in its direction. Credit: National Geographic
    A male hippo sprays his feces at another male who is threatening to take over his patch. Credit: National Geographic
    A male proboscis monkey flaunts his large nose. The noses of these males are used to amplify their calls in the vast forest. Credit: National Geographic
    Dream girl: A blood-soaked female hyena looks across the African savanna. Credit: National Geographic
    A male bowerbird presents one of the finest items in his collection to a female in his bower. Credit: National Geographic
    The male nursery web spider presents his nuptial gift to the female. Credit: National Geographic
    Cue the Barry White mood music: Two leopard slugs suspend themselves on a rope of mucus as they entwine their bodies to mate with one another. Credit: National Geographic

    Despite their years of collective experience, Linfield and Berlowitz were initially skeptical when the crew told them about the pearl fish, which hides from predators in a sea cucumber's butt (along with many other species). "It had never been filmed so we said, 'You're going to have to prove it to us,'" said Berlowitz. "They came back with this fantastic, hilarious sequence of a pearl fish reverse parking [in a sea cucumber's anus]."
    The film crew experienced a few heart-pounding moments, most notably while filming the cliffside nests of barnacle geese for the "Terrible Parents" episode. A melting glacier caused a watery avalanche while the crew was filming the geese, and they had to quickly grab a few shots and run to safety. Less dramatic: cinematographer Tom Beldam had his smartphone stolen by a long-tailed macaque mere minutes after he finished capturing the animal on film.
    If all goes well and Underdogs finds its target audience, we may even get a follow-up. "We are slightly plowing new territory but the science is as true as it's ever been and the stories are good. That aspect of the natural history is still there," said Linfield. "I think what we really hope for is that people who don't normally watch natural history will watch it. If people have as much fun watching it as we had making it, then the metrics should be good enough for another season."
    Verdict: Underdogs is positively addictive; I binged all five episodes in a single day. Underdogs premieres June 15, 2025, at 9 PM/8 PM Central on National Geographic and will be available for streaming on Disney+ and Hulu the following day. You should watch it, if only to get that second season.

    Jennifer Ouellette
    Senior Writer


    Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

    #delightfully #irreverent #underdogs #isnt #your
    ARSTECHNICA.COM
    Delightfully irreverent Underdogs isn’t your parents’ nature docuseries
    Ryan Reynolds narrates NatGeo's new series highlighting nature's much less cool and majestic creatures. Jennifer Ouellette – Jun 15, 2025
    [Photo: The indestructible honey badger is just one of nature's "benchwarmers" featured in Underdogs. Credit: National Geographic/Doug Parker]
    Narrator Ryan Reynolds celebrates nature's outcasts in the new NatGeo docuseries Underdogs. Most of us have seen a nature documentary or two (or three) at some point in our lives, so it's a familiar format: sweeping majestic footage of impressively regal animals accompanied by reverently high-toned narration (preferably with a tony British accent). Underdogs, a new docuseries from National Geographic, takes a decidedly different and unconventional approach. Narrated with hilarious irreverence by Ryan Reynolds, the five-part series highlights nature's less cool and majestic creatures: the outcasts and benchwarmers, more noteworthy for their "unconventional hygiene choices" and "unsavory courtship rituals." It's like The Suicide Squad or Thunderbolts*, except these creatures actually exist.
    Per the official premise, "Underdogs features a range of never-before-filmed scenes, including the first time a film crew has ever entered a special cave in New Zealand—a huge cavern that glows brighter than a bachelor pad under a black light thanks to the glowing butts of millions of mucus-coated grubs. All over the world, overlooked superstars like this are out there 24/7, giving it maximum effort and keeping the natural world in working order for all those showboating polar bears, sharks and gorillas." It's rated PG-13 thanks to the odd bit of scatological humor and shots of Nature Sexy Time.
    Each of the five episodes is built around a specific genre. "Superheroes" highlights the surprising superpowers of the honey badger, pistol shrimp, and the invisible glass frog, among others, augmented with comic book graphics; "Sexy Beasts" focuses on bizarre mating habits and follows the format of a romantic advice column; "Terrible Parents" highlights nature's worst practices, following the outline of a parenting guide; "Total Grossout" is exactly what it sounds like; and "The Unusual Suspects" is a heist tale, documenting the supposed efforts of a macaque to put together the ultimate team of masters of deception and disguise (an inside man, a decoy, a fall guy, etc.). Green Day even wrote and recorded a special theme song for the opening credits.
    Co-creators Mark Linfield and Vanessa Berlowitz of Wildstar Films are longtime producers of award-winning wildlife films, most notably Frozen Planet, Planet Earth and David Attenborough's Life of Mammals—you know, the kind of prestige nature documentaries that have become a mainstay for National Geographic and the BBC, among others. They're justly proud of that work, but this time around the duo wanted to try something different.
    [Photos: Madagascar's aye-aye, "as if fear and panic had a baby and rolled it in dog hair" (National Geographic/Eleanor Paish); an emerald jewel wasp emerges from a cockroach (National Geographic/Simon De Glanville); a pack of African hunting dogs is no match for the honey badger's thick hide (National Geographic/Tom Walker); a fireworm is hit by a cavitation bubble shot from the claw of a pistol shrimp defending its home (National Geographic/Hugh Miller); as it grows and molts, the mad hatterpillar stacks old head casings on top of its head, a stack scientists think serves as a decoy against would-be predators and parasites and, when needed, as a weapon (National Geographic/Katherine Hannaford); worst parents ever? A young barnacle goose chick prepares to make the 800-foot jump from its nest to the ground (National Geographic); an adult pearlfish reverses into a sea cucumber's butt to hide (National Geographic); a vulture sticks its head inside an elephant carcass to eat (National Geographic); a manatee releases flatulence while swimming to shed the buoyant gas built up in its stomach and descend down the water column (National Geographic/Karl Davies)]
    "There is a sense after a while that you're playing the same animals to the same people, and the shows are starting to look the same and so is your audience," Linfield told Ars. "We thought, okay, how can we do something absolutely the opposite? We've gone through our careers collecting stories of these weird and crazy creatures that don't end up in the script because they're not big or sexy and they live under a rock. But they often have the best life histories and the craziest superpowers." Case in point: the velvet worm featured in the "Superheroes" episode, which creeps up on unsuspecting prey before squirting disgusting slime all over their food. (It's a handy defense mechanism, too, against predators like the wolf spider.)
    Once Linfield and Berlowitz decided to focus on nature's underdogs and to take a more humorous approach, Ryan Reynolds became their top choice for a narrator—the anti-Richard Attenborough. As luck would have it, the pair shared an agent with the mega-star. So even though they thought there was no way Reynolds would agree to the project, they put together a sizzle reel, complete with a "fake Canadian Ryan Reynolds sound-alike" doing the narration. Reynolds was on set when he received the reel, and loved it so much he recorded his own narration for the footage and sent it back. "From that moment he was in," said Linfield, and Wildstar Films worked closely with Reynolds and his company to develop the final series.
    "We've never worked that way on a series before, a joint collaboration from day one," Berlowitz admitted. But it worked: the end result strikes the perfect balance between scientific revelation and accurate natural history, and an edgy comic tone. That tone is quintessential Reynolds, and while he did mostly follow the script (which his team helped write), Linfield and Berlowitz admit there was also a fair amount of improvisation—not all of it PG-13.
    "What we hadn't appreciated is that he's an incredible improv performer," said Berlowitz. "He can't help himself. He gets into character and starts riffing off [the footage]. There are some takes that we definitely couldn't use, that potentially would fit a slightly more Hulu audience." Some of the ad-libs made it into the final episodes, however—like Reynolds describing an aye-aye as "if fear and panic had a baby and rolled it in dog hair"—even though it meant going back and doing a bit of recutting to get the new lines to fit.
    [Photos: cinematographer Tom Beldam films a long-tailed macaque that stole his smartphone minutes later (National Geographic/Laura Pennafort); the macaque agrees to trade the stolen phone for a piece of food (National Geographic); a family of tortoise beetles defends itself from a carnivorous ant by wafting baby poop in its direction (National Geographic); a male hippo sprays his feces at another male who is threatening to take over his patch (National Geographic); a male proboscis monkey flaunts the large nose males use to amplify their calls in the vast forest (National Geographic); dream girl: a blood-soaked female hyena looks across the African savanna (National Geographic); a male bowerbird presents one of the finest items in his collection to a female in his bower (National Geographic); the male nursery web spider presents his nuptial gift to the female (National Geographic); cue the Barry White mood music: two leopard slugs suspend themselves on a rope of mucus as they entwine their bodies to mate (National Geographic)]
    Despite their years of collective experience, Linfield and Berlowitz were initially skeptical when the crew told them about the pearlfish, which hides from predators in a sea cucumber's butt (along with many other species). "It had never been filmed so we said, 'You're going to have to prove it to us,'" said Berlowitz. "They came back with this fantastic, hilarious sequence of a pearlfish reverse parking [in a sea cucumber's anus]."
    The film crew experienced a few heart-pounding moments, most notably while filming the cliffside nests of barnacle geese for the "Terrible Parents" episode. A melting glacier caused a watery avalanche while the crew was filming the geese, and they had to quickly grab a few shots and run to safety. Less dramatic: cinematographer Tom Beldam had his smartphone stolen by a long-tailed macaque mere minutes after he finished capturing the animal on film.
    If all goes well and Underdogs finds its target audience, we may even get a follow-up. "We are slightly plowing new territory but the science is as true as it's ever been and the stories are good. That aspect of the natural history is still there," said Linfield. "I think what we really hope for is that people who don't normally watch natural history will watch it. If people have as much fun watching it as we had making it, then the metrics should be good enough for another season."
    Verdict: Underdogs is positively addictive; I binged all five episodes in a single day. (For his part, Reynolds said in a statement that he was thrilled to "finally watch a project of ours with my children. Technically they saw Deadpool and Wolverine but I don't think they absorbed much while covering their eyes and ears and screaming for two hours.")
    Underdogs premieres June 15, 2025, at 9 PM/8 PM Central on National Geographic (simulcast on ABC) and will be available for streaming on Disney+ and Hulu the following day. You should watch it, if only to get that second season.
  • Fusion and AI: How private sector tech is powering progress at ITER

    In April 2025, at the ITER Private Sector Fusion Workshop in Cadarache, something remarkable unfolded. In a room filled with scientists, engineers and software visionaries, the line between big science and commercial innovation began to blur.  
    Three organisations – Microsoft Research, Arena and Brigantium Engineering – shared how artificial intelligence, already transforming everything from language models to logistics, is now stepping into a new role: helping humanity to unlock the power of nuclear fusion. 
    Each presenter addressed a different part of the puzzle, but the message was the same: AI isn’t just a buzzword anymore. It’s becoming a real tool – practical, powerful and indispensable – for big science and engineering projects, including fusion. 
    “If we think of the agricultural revolution and the industrial revolution, the AI revolution is next – and it’s coming at a pace which is unprecedented,” said Kenji Takeda, director of research incubations at Microsoft Research. 
    Microsoft’s collaboration with ITER is already in motion. Just a month before the workshop, the two teams signed a Memorandum of Understanding (MoU) to explore how AI can accelerate research and development. This follows ITER’s initial use of Microsoft technology to empower its teams.
    A chatbot built on the Azure OpenAI service helps staff navigate technical knowledge across more than a million ITER documents using natural conversation. GitHub Copilot assists with coding, while AI helps to resolve IT support tickets – those everyday but essential tasks that keep the lights on. 
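    For readers curious what such a document chatbot looks like in code, here is a minimal retrieval-augmented sketch against the Azure OpenAI chat API. It is an illustration only, not ITER's actual system: the endpoint, the deployment name and the search_documents() helper are hypothetical placeholders.
    ```python
    # Minimal retrieval-augmented chat sketch using Azure OpenAI.
    # Hypothetical: endpoint, deployment name, and search_documents()
    # are placeholders; ITER's real system is not public.
    from openai import AzureOpenAI

    client = AzureOpenAI(
        azure_endpoint="https://example-resource.openai.azure.com",  # placeholder
        api_key="YOUR_KEY",
        api_version="2024-02-01",
    )

    def search_documents(query: str, top_k: int = 5) -> list[str]:
        """Stand-in for a search index (e.g. vector search) over the
        engineering document corpus."""
        raise NotImplementedError("wire up your own index here")

    def answer(question: str) -> str:
        context = "\n\n".join(search_documents(question))
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder deployment name
            messages=[
                {"role": "system",
                 "content": "Answer using only the provided context.\n" + context},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content
    ```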
    But Microsoft’s vision goes deeper. Fusion demands materials that can survive extreme conditions – heat, radiation, pressure – and that’s where AI shows a different kind of potential. MatterGen, a Microsoft Research generative AI model for materials, designs entirely new materials based on specific properties.
    “It’s like ChatGPT,” said Takeda, “but instead of ‘Write me a poem’, we ask it to design a material that can survive as the first wall of a fusion reactor.” 
    The next step? MatterSim – a simulation tool that predicts how these imagined materials will behave in the real world. By combining generation and simulation, Microsoft hopes to uncover materials that don’t yet exist in any catalogue. 
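    To make the generate-then-simulate idea concrete, the following schematic shows a property-conditioned generate-and-screen loop. The function names are invented stand-ins, not the published MatterGen or MatterSim interfaces; treat it as a sketch of the workflow, nothing more.
    ```python
    # Schematic generate-and-screen loop in the spirit of MatterGen + MatterSim.
    # generate_candidates() and simulate_stability() are hypothetical stand-ins;
    # consult the actual model releases for real interfaces.
    from dataclasses import dataclass

    @dataclass
    class Candidate:
        formula: str
        predicted_melting_point_k: float

    def generate_candidates(target_properties: dict, n: int) -> list[Candidate]:
        """Stand-in for a generative model conditioned on target properties."""
        raise NotImplementedError

    def simulate_stability(c: Candidate) -> bool:
        """Stand-in for an ML-potential simulation of real-world behaviour."""
        raise NotImplementedError

    def screen(n: int = 1000) -> list[Candidate]:
        target = {"melting_point_k": ">2000", "use": "fusion first wall"}
        # Generate many candidates, keep only those the simulator
        # predicts would survive reactor conditions.
        return [c for c in generate_candidates(target, n) if simulate_stability(c)]
    ```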
    While Microsoft tackles the atomic scale, Arena is focused on a different challenge: speeding up hardware development. As general manager Michael Frei put it: “Software innovation happens in seconds. In hardware, that loop can take months – or years.” 
    Arena’s answer is Atlas, a multimodal AI platform that acts as an extra set of hands – and eyes – for engineers. It can read data sheets, interpret lab results, analyse circuit diagrams and even interact with lab equipment through software interfaces. “Instead of adjusting an oscilloscope manually,” said Frei, “you can just say, ‘Verify the I2C [inter-integrated circuit] protocol’, and Atlas gets it done.” 
    It doesn’t stop there. Atlas can write and adapt firmware on the fly, responding to real-time conditions. That means tighter feedback loops, faster prototyping and fewer late nights in the lab. Arena aims to make building hardware feel a little more like writing software – fluid, fast and assisted by smart tools. 
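    Scripted control of bench instruments, of the kind Atlas automates, typically goes over standard VISA/SCPI interfaces. The snippet below is a generic illustration using the pyvisa library, not Arena's Atlas API; the resource address and the exact SCPI commands vary by instrument vendor.
    ```python
    # Generic scripted-oscilloscope sketch using pyvisa (VISA/SCPI).
    # Illustrates the kind of lab automation an agent can drive; it is
    # not Arena's Atlas API. Address and commands are instrument-specific.
    import pyvisa

    rm = pyvisa.ResourceManager()
    scope = rm.open_resource("USB0::0x0699::0x0401::C000001::INSTR")  # placeholder

    print(scope.query("*IDN?"))            # identify the instrument
    scope.write(":TIMEBASE:SCALE 1e-6")    # 1 us/div (SCPI syntax varies by vendor)
    scope.write(":SINGLE")                 # arm a single acquisition
    voltage = float(scope.query(":MEASURE:VMAX? CHANNEL1"))
    print(f"Peak voltage: {voltage:.3f} V")
    ```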

    Fusion, of course, isn’t just about atoms and code – it’s also about construction. Gigantic, one-of-a-kind machines don’t build themselves. That’s where Brigantium Engineering comes in.
    Founder Lynton Sutton explained how his team uses “4D planning” – a marriage of 3D CAD models and detailed construction schedules – to visualise how everything comes together over time. “Gantt charts are hard to interpret. 3D models are static. Our job is to bring those together,” he said. 
    The result is a time-lapse-style animation that shows the construction process step by step. It’s proven invaluable for safety reviews and stakeholder meetings. Rather than poring over spreadsheets, teams can simply watch the plan come to life. 
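    At its core, 4D planning is a join between schedule activities and model geometry: each task knows which CAD elements it installs, so stepping through the calendar yields the frames of the animation. A minimal sketch of that idea, with invented field names and dates:
    ```python
    # Minimal 4D-planning sketch: join schedule tasks to CAD element IDs
    # and compute which elements should be visible on a given date.
    # Field names and data are invented for illustration.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Task:
        name: str
        start: date
        finish: date
        element_ids: list[str]  # CAD objects installed by this task

    schedule = [
        Task("Pour bioshield slab", date(2025, 1, 6), date(2025, 2, 14), ["slab-01"]),
        Task("Set cryostat base", date(2025, 2, 17), date(2025, 3, 28), ["cryo-base"]),
    ]

    def visible_elements(on: date) -> set[str]:
        """Everything whose installing task has finished by the given date."""
        return {eid for t in schedule if t.finish <= on for eid in t.element_ids}

    # Stepping this function through the calendar yields the frames of a
    # time-lapse-style animation of the construction sequence.
    print(visible_elements(date(2025, 3, 1)))  # {'slab-01'}
    ```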
    And there’s more. Brigantium is bringing these models into virtual reality using Unreal Engine – the same engine behind many video games. One recent model recreated ITER’s tokamak pit using drone footage and photogrammetry. The experience is fully interactive and can even run in a web browser.
    “We’ve really improved the quality of the visualisation,” said Sutton. “It’s a lot smoother; the textures look a lot better. Eventually, we’ll have this running through a web browser, so anybody on the team can just click on a web link to navigate this 4D model.” 
    Looking forward, Sutton believes AI could help automate the painstaking work of syncing schedules with 3D models. One day, these simulations could reach all the way down to individual bolts and fasteners – not just as impressive visuals, but as critical tools for preventing delays. 
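    One simple form that automation could take is fuzzy matching between schedule activity names and model element names. The toy below uses Python's standard difflib; the names are invented, and a real pipeline would also lean on IDs, classification codes and human review.
    ```python
    # Toy sketch: auto-link schedule activities to CAD element names by
    # fuzzy string matching. All names are invented for illustration.
    import difflib

    activities = ["Install TF coil #7", "Weld vacuum vessel sector 5"]
    elements = ["TF_Coil_07", "VV_Sector_05_Weldment", "Cryostat_Lid"]

    def normalize(s: str) -> str:
        return s.lower().replace("_", " ").replace("#", "")

    norm_to_element = {normalize(e): e for e in elements}
    for activity in activities:
        hits = difflib.get_close_matches(
            normalize(activity), list(norm_to_element), n=1, cutoff=0.3)
        print(activity, "->", norm_to_element[hits[0]] if hits else "no match")
    ```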
    Despite the different approaches, one theme ran through all three presentations: AI isn’t just a tool for office productivity. It’s becoming a partner in creativity, problem-solving and even scientific discovery. 
    Takeda mentioned that Microsoft is experimenting with “world models” inspired by how video games simulate physics. These models learn about the physical world by watching pixels in the form of videos of real phenomena such as plasma behaviour. “Our thesis is that if you showed this AI videos of plasma, it might learn the physics of plasmas,” he said. 
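    In its simplest form, “learning physics from pixels” is next-frame prediction: train a network to predict frame t+1 from the frames before it, and see what dynamics it internalises. A toy PyTorch sketch follows, with random tensors standing in for real plasma footage; Microsoft's world models are of course far larger and trained on huge video corpora.
    ```python
    # Toy next-frame predictor: the simplest form of "learning physics
    # from pixels". Architecture and data are illustrative only.
    import torch
    import torch.nn as nn

    class NextFrame(nn.Module):
        def __init__(self, context: int = 4):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(context, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 1, 3, padding=1),
            )

        def forward(self, frames):  # frames: (batch, context, H, W)
            return self.net(frames)

    model = NextFrame()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # Placeholder data: random clips stand in for real plasma videos.
    video = torch.rand(8, 5, 64, 64)          # batch of 5-frame grayscale clips
    context, target = video[:, :4], video[:, 4:5]

    for step in range(100):
        opt.zero_grad()
        loss = loss_fn(model(context), target)  # predict the fifth frame
        loss.backward()
        opt.step()
    ```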
    It sounds futuristic, but the logic holds. The more AI can learn from the world, the more it can help us understand it – and perhaps even master it. At its heart, the message from the workshop was simple: AI isn’t here to replace the scientist, the engineer or the planner; it’s here to help, and to make their work faster, more flexible and maybe a little more fun.
    As Takeda put it: “Those are just a few examples of how AI is starting to be used at ITER. And it’s just the start of that journey.” 
    If these early steps are any indication, that journey won’t just be faster – it might also be more inspired. 
    WWW.COMPUTERWEEKLY.COM
  • Learn GameDev with Unity, Unreal, GameMaker, Blender and C# Humble Bundle

    News / June 11, 2025

    The Learn GameDev with Unity, Godot, Unreal, GameMaker, Blender and C# Humble Bundle from Zenva is now available. Each game engine comes with 5 or more courses covering all aspects of game development. This bundle joins the No-Code No-Problem Develop bundle and the Big Bang Unreal Unity and Godot bundle already live on Humble.
    As with most Humble Bundles, this one is organized into tiers:
    $1 Tier
    Intro to Godot 4 Game Development
    Intro to the Game Development Industry
    Makes No Sense Tier
    Explore Audio for Godot 4 Games
    UV Mapping in Blender for Beginners
    UI/UX for Game Design
    $25 Tier
    Godot 4 Mini-Projects
    Create a Micro Turn-Based RPG in Godot
    3D Action-Adventure Game in Godot – Unit 1 – Characters
    Intro to Visual Shaders in Godot 4
    Learn Game Optimization for Godot 4
    Coin Collector Game – Godot Mobile Projects
    Unreal Engine Mini-Projects
    Intro to Unreal Engine Game Development
    Create a Racing Game in Unreal Engine
    The Complete Unreal Engine C++ Course – Build an FPS
    Create a Turn-Based Mini RPG in Unreal Engine
    Build a 2.5D Farming RPG with Unreal Engine
    Intro to Game Development with Unity
    Unity Mini-Projects – C# Fundamentals
    Explore Game Optimization in Unity 6
    Intro to ECS for Unity 6
    Build an Arcade Kart Racing Game in Unity
    Construct a Mobile Physics Game in Unity
    Intro to Particle Systems for Unity Games
    Intro to Game Development with GameMaker
    Create a Complete 2D Action RPG in GameMaker
    Build a Real-Time Strategy Mini-Game with GameMaker
    Develop an Idle Clicker from Scratch in GameMaker
    Make a Mini Turn-Based RPG from Scratch in GameMaker
    The Comprehensive Introduction to C# Programming
    Build a Complete Mini 2D Game Engine with C#
    Learn 3D Modeling with Blender from Scratch
    Intro to Rigging Models in Blender
    MagicaVoxel for Beginners – Create Voxel Game Assets
    Prompt Engineering for Game Developers
    You can learn more about the Learn GameDev with Unity, Godot, Unreal, GameMaker, Blender and C# Humble Bundle in the video below. Using links on this page to purchase the bundle helps support GFS (and thanks so much if you do!).
    GAMEFROMSCRATCH.COM
  • Dune: Awakening Helicopters Are 'Goomba Stomping' Players, Devs Are Working On A Fix

    In a crowded field full of online survival sims, Dune: Awakening is kicking up a storm. The adaptation of Frank Herbert’s sci-fi novels lets players build bases, ride sandworms, and smash Ornithopters into one another. That last part has become a problem, and the developers are already looking into a fix.
    Dune’s Ornithopters are helicopters shaped like dragonflies. In Dune: Awakening, they’re one of the many vehicles players can build that serve as both a resource and an end goal of sorts. They require a lot of equipment and resources to craft if you’re playing solo, which is why most of them belong to players working in groups. It turns out that they’re pretty indestructible too, making them lethal weapons for ramming enemy players in PVP.
    Reddit user Bombe18 shared his run-in with Dune: Awakening’s man-made scourge in a recent clip that blew up on the subreddit, showing him repeatedly being accosted by multiple Ornithopters. Shooting at them does nothing. They’re unscathed by constantly smashing into the ground on top of him. At one point, he tries to wall-jump off a ledge and stab one. “Yeah sorry about this,” wrote game director Joel Bylos. “We have people working on fixing the goomba stomping ASAP.”
    Players have been debating the role of Ornithopters in Dune: Awakening since its beta tests last year. On the one hand, they’re a lot of fun and a cool reward for players to build toward. On the other, they sort of trivialize trying to travel around the desert and survive, the two things the game is supposed to be about. They can also shoot missiles, completely dominating the ground game. Now that’s real desert power.
    In terms of stopping players from griefing one another with Ornithopters, there are a few different suggestions. Some players just want the vehicles not to be usable as weapons at all. Others want them isolated to specific PVP areas. Another solution is to make it easier to destroy them. “Seems like they should just make guns deal more damage to them,” wrote one player. “They’d think twice about doing this if their orni could get wrecked by gunfire.” Another wrote, “Make Deep Desert crashes do significant damage. Two crashes or something past a certain physics threshold should disable the vehicle.”
    However the developers decide to address the recent outbreak of Ornithopter “goomba stomping,” Dune: Awakening is having a great launch so far. Out earlier this week on PC, it’s nearing a 90 percent positive rating on Steam with almost 20,000 reviews. The concurrent player count is very healthy, too, peaking at just under 150,000 heading into the weekend. Unfortunately, console players will have to wait a bit to build Ornithopters of their own. A PlayStation 5 and Xbox Series X/S release isn’t planned until sometime in 2026.
    KOTAKU.COM