• Hexagon Taps NVIDIA Robotics and AI Software to Build and Deploy AEON, a New Humanoid

    As a global labor shortage leaves 50 million positions unfilled across industries like manufacturing and logistics, Hexagon — a global leader in measurement technologies — is developing humanoid robots that can lend a helping hand.
    Industrial sectors depend on skilled workers to perform a variety of error-prone tasks, including operating high-precision scanners for reality capture — the process of capturing digital data to replicate the real world in simulation.
    At the Hexagon LIVE Global conference, Hexagon’s robotics division today unveiled AEON — a new humanoid robot built in collaboration with NVIDIA that’s engineered to perform a wide range of industrial applications, from manipulation and asset inspection to reality capture and operator support. Hexagon plans to deploy AEON across automotive, transportation, aerospace, manufacturing, warehousing and logistics.
    Future use cases for AEON include:

    Reality capture, which involves automatic planning and then scanning of assets, industrial spaces and environments to generate 3D models. The captured data is then used for advanced visualization and collaboration in the Hexagon Digital Reality (HxDR) platform powering Hexagon Reality Cloud Studio (RCS).
    Manipulation tasks, such as sorting and moving parts in various industrial and manufacturing settings.
    Part inspection, which includes checking parts for defects or ensuring adherence to specifications.
    Industrial operations, including highly dexterous technical tasks like machinery operations, teleoperation and scanning parts using high-end scanners.

    “The age of general-purpose robotics has arrived, due to technological advances in simulation and physical AI,” said Deepu Talla, vice president of robotics and edge AI at NVIDIA. “Hexagon’s new AEON humanoid embodies the integration of NVIDIA’s three-computer robotics platform and is making a significant leap forward in addressing industry-critical challenges.”

    Using NVIDIA’s Three Computers to Develop AEON 
    To build AEON, Hexagon used NVIDIA’s three computers for developing and deploying physical AI systems. They include AI supercomputers to train and fine-tune powerful foundation models; the NVIDIA Omniverse platform, running on NVIDIA OVX servers, for testing and optimizing these models in simulation environments using real and physically based synthetic data; and NVIDIA IGX Thor robotic computers to run the models.
    Hexagon is exploring using NVIDIA accelerated computing to post-train the NVIDIA Isaac GR00T N1.5 open foundation model to improve robot reasoning and policies, and tapping Isaac GR00T-Mimic to generate vast amounts of synthetic motion data from a few human demonstrations.
    AEON learns many of its skills through simulations powered by the NVIDIA Isaac platform. Hexagon uses NVIDIA Isaac Sim, a reference robotic simulation application built on Omniverse, to simulate complex robot actions like navigation, locomotion and manipulation. These skills are then refined using reinforcement learning in NVIDIA Isaac Lab, an open-source framework for robot learning.
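    Hexagon’s actual training code isn’t published in this post, but the simulation-first pattern described above — learn a control policy entirely in simulation, then transfer it to hardware — can be sketched in a few lines. Below is a minimal, illustrative Python loop that uses gymnasium’s Pendulum-v1 task as a stand-in for a locomotion environment and simple random search in place of Isaac Lab’s RL algorithms; every environment name and hyperparameter here is an assumption for illustration, not part of the AEON workflow.

```python
# Illustrative only: a simulation-first training loop in the spirit of the
# workflow described above, using gymnasium's Pendulum-v1 as a stand-in for
# a locomotion environment and random search in place of a real RL algorithm.
# Nothing here is Hexagon's or NVIDIA's actual code.
import gymnasium as gym
import numpy as np

def average_return(env, weights, episodes=3):
    """Average episodic return of a linear policy: action = weights @ obs."""
    total = 0.0
    for _ in range(episodes):
        obs, _ = env.reset()
        done = False
        while not done:
            action = np.clip(weights @ obs, env.action_space.low, env.action_space.high)
            obs, reward, terminated, truncated, _ = env.step(action)
            total += reward
            done = terminated or truncated
    return total / episodes

env = gym.make("Pendulum-v1")
obs_dim = env.observation_space.shape[0]
act_dim = env.action_space.shape[0]

# "Training in simulation": perturb the policy, keep improvements.
best_w = np.zeros((act_dim, obs_dim))
best_ret = average_return(env, best_w)
for _ in range(200):
    candidate = best_w + 0.1 * np.random.randn(act_dim, obs_dim)
    ret = average_return(env, candidate)
    if ret > best_ret:
        best_w, best_ret = candidate, ret
print(f"best average return found in simulation: {best_ret:.1f}")
```

    The same structure scales up in frameworks like Isaac Lab: a physics environment, a policy and an optimization loop that never touches a real robot until the policy is ready.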


    This simulation-first approach enabled Hexagon to fast-track its robotic development, allowing AEON to master core locomotion skills in just 2-3 weeks — rather than 5-6 months — before real-world deployment.
    In addition, AEON taps into NVIDIA Jetson Orin onboard computers to autonomously move, navigate and perform its tasks in real time, enhancing its speed and accuracy while operating in complex and dynamic environments. Hexagon is also planning to upgrade AEON with NVIDIA IGX Thor to enable functional safety for collaborative operation.
    “Our goal with AEON was to design an intelligent, autonomous humanoid that addresses the real-world challenges industrial leaders have shared with us over the past months,” said Arnaud Robert, president of Hexagon’s robotics division. “By leveraging NVIDIA’s full-stack robotics and simulation platforms, we were able to deliver a best-in-class humanoid that combines advanced mechatronics, multimodal sensor fusion and real-time AI.”
    Data Comes to Life Through Reality Capture and Omniverse Integration 
    AEON will be piloted in factories and warehouses to scan everything from small precision parts and automotive components to large assembly lines and storage areas.

    Captured data comes to life in RCS, a platform that allows users to collaborate, visualize and share reality-capture data by tapping into HxDR and NVIDIA Omniverse running in the cloud. This removes the constraint of local infrastructure.
    “Digital twins offer clear advantages, but adoption has been challenging in several industries,” said Lucas Heinzle, vice president of research and development at Hexagon’s robotics division. “AEON’s sophisticated sensor suite enables the integration of reality data capture with NVIDIA Omniverse, streamlining workflows for our customers and moving us closer to making digital twins a mainstream tool for collaboration and innovation.”
    AEON’s Next Steps
    By adopting the OpenUSD framework and developing on Omniverse, Hexagon can generate high-fidelity digital twins from scanned data — establishing a data flywheel to continuously train AEON.
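    As a concrete, minimal illustration of what adopting OpenUSD looks like at the file level, the snippet below authors a tiny scanned mesh as a USD layer using the open-source pxr API (the usd-core package). The prim path and triangle geometry are placeholders for real scan data; Hexagon’s actual HxDR-to-Omniverse pipeline is, of course, far richer.

```python
# Minimal OpenUSD authoring example (pip install usd-core).
# The prim path and triangle below are placeholders for real scan data.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("scanned_part.usda")   # new USD layer on disk
UsdGeom.SetStageUpAxis(stage, UsdGeom.Tokens.z)    # assume a Z-up scanner rig

mesh = UsdGeom.Mesh.Define(stage, "/World/ScannedPart")
mesh.CreatePointsAttr([(0, 0, 0), (1, 0, 0), (0, 1, 0)])  # vertex positions
mesh.CreateFaceVertexCountsAttr([3])               # one triangle
mesh.CreateFaceVertexIndicesAttr([0, 1, 2])

stage.GetRootLayer().Save()                        # openable in Omniverse
```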
    This latest work with Hexagon is helping shape the future of physical AI — delivering scalable, efficient solutions to address the challenges faced by industries that depend on capturing real-world data.
    Watch the Hexagon LIVE keynote, explore presentations and read more about AEON.
    All imagery courtesy of Hexagon.
  • NVIDIA and Partners Highlight Next-Generation Robotics, Automation and AI Technologies at Automatica

    From the heart of Germany’s automotive sector to manufacturing hubs across France and Italy, Europe is embracing industrial AI and advanced AI-powered robotics to address labor shortages, boost productivity and fuel sustainable economic growth.
    Robotics companies are developing humanoid robots and collaborative systems that integrate AI into real-world manufacturing applications. Supported by a $200 billion investment initiative and coordinated efforts from the European Commission, Europe is positioning itself at the forefront of the next wave of industrial automation, powered by AI.
    This momentum is on full display at Automatica — Europe’s premier conference on advancements in robotics, machine vision and intelligent manufacturing — taking place this week in Munich, Germany.
    NVIDIA and its ecosystem of partners and customers are showcasing next-generation robots, automation and AI technologies designed to accelerate the continent’s leadership in smart manufacturing and logistics.
    NVIDIA Technologies Boost Robotics Development 
    Central to advancing robotics development is Europe’s first industrial AI cloud, announced at NVIDIA GTC Paris at VivaTech earlier this month. The Germany-based AI factory, featuring 10,000 NVIDIA GPUs, provides European manufacturers with secure, sovereign and centralized AI infrastructure for industrial workloads. It will support applications ranging from design and engineering to factory digital twins and robotics.
    To help accelerate humanoid development, NVIDIA released NVIDIA Isaac GR00T N1.5 — an open foundation model for humanoid robot reasoning and skills. This update enhances the model’s adaptability and ability to follow instructions, significantly improving its performance in material handling and manufacturing tasks.
    To help post-train GR00T N1.5, NVIDIA has also released the Isaac GR00T-Dreams blueprint — a reference workflow for generating vast amounts of synthetic trajectory data from a small number of human demonstrations — enabling robots to generalize across behaviors and adapt to new environments with minimal human demonstration data.
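    This post doesn’t detail the blueprint’s internals, but the core idea — multiplying a small number of demonstrations into a large synthetic training set — can be illustrated generically. The sketch below amplifies a few recorded joint trajectories with smoothed noise and random time-warping; the function names, noise model and scales are assumptions for illustration and are far simpler than what GR00T-Dreams actually does.

```python
# Illustrative sketch of demonstration amplification: turn a few recorded
# trajectories into many synthetic variants via smoothed noise and random
# time scaling. This mimics the *idea* of synthetic trajectory generation,
# not the GR00T-Dreams implementation.
import numpy as np

def amplify(demos, n_variants=100, noise_scale=0.02, rng=None):
    """demos: list of (T, dof) joint-trajectory arrays -> list of variants."""
    rng = rng or np.random.default_rng(0)
    synthetic = []
    for _ in range(n_variants):
        base = demos[rng.integers(len(demos))]
        T, dof = base.shape
        # Smooth, correlated noise so joint motions stay plausible.
        noise = rng.normal(0.0, noise_scale, size=(T, dof))
        kernel = np.ones(9) / 9.0
        noise = np.apply_along_axis(
            lambda c: np.convolve(c, kernel, mode="same"), 0, noise)
        # Random time re-scaling: play the motion ~10% faster or slower.
        t_old = np.linspace(0, 1, T)
        t_new = np.linspace(0, 1, int(T * rng.uniform(0.9, 1.1)))
        warped = np.stack(
            [np.interp(t_new, t_old, base[:, j] + noise[:, j]) for j in range(dof)],
            axis=1)
        synthetic.append(warped)
    return synthetic

# Three fake 7-DOF demonstrations stand in for human teleop recordings.
demos = [np.cumsum(np.random.default_rng(i).normal(0, 0.01, (200, 7)), axis=0)
         for i in range(3)]
variants = amplify(demos)
print(len(variants), variants[0].shape)
```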
    In addition, early developer previews of NVIDIA Isaac Sim 5.0 and Isaac Lab 2.2 — open-source robot simulation and learning frameworks optimized for NVIDIA RTX PRO 6000 workstations — are now available on GitHub.
    Image courtesy of Wandelbots.
    Robotics Leaders Tap NVIDIA Simulation Technology to Develop and Deploy Humanoids and More 
    Robotics developers and solutions providers across the globe are integrating NVIDIA’s three computers to train, simulate and deploy robots.
    NEURA Robotics, a German robotics company and a pioneer in cognitive robots, unveiled the third generation of its humanoid, 4NE1, designed to assist humans in domestic and professional environments through advanced cognitive capabilities and humanlike interaction. 4NE1 is powered by GR00T N1 and was trained in Isaac Sim and Isaac Lab before real-world deployment.
    NEURA Robotics is also presenting Neuraverse, a digital twin and interconnected ecosystem for robot training, skills and applications, fully compatible with NVIDIA Omniverse technologies.
    Delta Electronics, a global leader in power management and smart green solutions, is debuting two next-generation collaborative robots: D-Bot Mar and D-Bot 2 in 1 — both trained using Omniverse and Isaac Sim technologies and libraries. These cobots are engineered to transform intralogistics and optimize production flows.
    Wandelbots, the creator of the Wandelbots NOVA software platform for industrial robotics, is partnering with SoftServe, a global IT consulting and digital services provider, to scale simulation-first automation using NVIDIA Isaac Sim, enabling virtual validation and real-world deployment with maximum impact.
    Cyngn, a pioneer in autonomous mobile robotics, is integrating its DriveMod technology into Isaac Sim to enable large-scale, high-fidelity virtual testing of advanced autonomous operation. Purpose-built for industrial applications, DriveMod is already deployed on vehicles such as the Motrec MT-160 Tugger and BYD Forklift, delivering sophisticated automation to material handling operations.
    Doosan Robotics, a company specializing in AI robotic solutions, will showcase its “sim to real” solution, built on NVIDIA Isaac Sim and cuRobo, demonstrating how to seamlessly transfer tasks from simulation to real robots across a wide range of applications — from manufacturing to service industries.
    Franka Robotics has integrated Isaac GR00T N1.5 into a dual-arm Franka Research 3 (FR3) robot for robotic control. The integration of GR00T N1.5 allows the system to interpret visual input, understand task context and autonomously perform complex manipulation — without the need for task-specific programming or hardcoded logic.
    Image courtesy of Franka Robotics.
    Hexagon, the global leader in measurement technologies, launched its new humanoid, dubbed AEON. With its unique locomotion system and multimodal sensor fusion, and powered by NVIDIA’s three-computer solution, AEON is engineered to perform a wide range of industrial applications, from manipulation and asset inspection to reality capture and operator support.
    Intrinsic, a software and AI robotics company, is integrating Intrinsic Flowstate with Omniverse and OpenUSD for advanced visualization and digital twins that can be used in many industrial use cases. The company is also using NVIDIA foundation models to enhance robot capabilities like grasp planning through AI and simulation technologies.
    SCHUNK, a global leader in gripping systems and automation technology, is showcasing its innovative grasping kit powered by the NVIDIA Jetson AGX Orin module. The kit intelligently detects objects and calculates optimal grasping points. Schunk is also demonstrating seamless simulation-to-reality transfer using IGS Virtuous software — built on Omniverse technologies — to control a real robot through simulation in a pick-and-place scenario.
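    For readers curious what “calculates optimal grasping points” can mean in practice, here is one classic heuristic, sketched in Python: fit the principal axes of the object’s point cloud and close a parallel-jaw gripper across the minor axis at the centroid. This is a generic textbook illustration under simplifying assumptions, not SCHUNK’s proprietary method.

```python
# Hypothetical sketch of a classic grasp-point heuristic: place a parallel
# gripper across the *minor* principal axis of an object's point cloud.
# Illustrative only -- not SCHUNK's actual (learned, proprietary) system.
import numpy as np

def grasp_pose(points):
    """points: (N, 3) object point cloud -> (centroid, grasp_axis, width)."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    # Principal axes via eigendecomposition of the 3x3 covariance matrix;
    # eigh returns eigenvalues ascending, so column 0 is the minor axis.
    _, vecs = np.linalg.eigh(np.cov(centered.T))
    minor_axis = vecs[:, 0]                 # direction of least extent
    extent = centered @ minor_axis
    width = extent.max() - extent.min()     # required gripper opening
    return centroid, minor_axis, width

# A synthetic elongated object stands in for a real scanned part.
cloud = np.random.default_rng(1).normal(0, [0.01, 0.05, 0.02], (500, 3))
c, axis, w = grasp_pose(cloud)
print(f"grasp at {np.round(c, 3)}, close along {np.round(axis, 2)}, opening {w:.3f} m")
```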
    Universal Robots is showcasing UR15, its fastest cobot yet. Powered by the UR AI Accelerator — developed with NVIDIA and running on Jetson AGX Orin using CUDA-accelerated Isaac libraries — UR15 helps set a new standard for industrial automation.

    Vention, a full-stack software and hardware automation company, launched its Machine Motion AI, built on CUDA-accelerated Isaac libraries and powered by Jetson. Vention is also expanding its lineup of robotic offerings by adding the FR3 robot from Franka Robotics to its ecosystem, enhancing its solutions for academic and research applications.
    Image courtesy of Vention.
    Learn more about the latest robotics advancements by joining NVIDIA at Automatica, running through Friday, June 27. 
  • BOUNCING FROM RUBBER DUCKIES AND FLYING SHEEP TO CLONES FOR THE BOYS SEASON 4

    By TREVOR HOGG
    Images courtesy of Prime Video.

    For those seeking an alternative to the MCU, Prime Video has two offerings of the live-action and animated variety that take the superhero genre into R-rated territory where the hands of the god-like figures get dirty, bloodied and severed. “The Boys is about the intersection of celebrity and politics using superheroes,” states Stephan Fleet, VFX Supervisor on The Boys. “Sometimes I see the news and I don’t even know we can write to catch up to it! But we try. Invincible is an intense look at an alternate DC Universe that has more grit to the superhero side of it all. On one hand, I was jealous watching Season 1 of Invincible because in animation you can do things that you can’t do in real life on a budget.” Season 4 does not tone down the blood, gore and body count. Fleet notes, “The writers almost have this dialogue with us. Sometimes, they’ll write in the script, ‘And Fleet will come up with a cool visual effect for how to kill this person.’ Or, ‘Chhiu, our fight coordinator, will make an awesome fight.’ It is a frequent topic of conversation. We’re constantly trying to be inventive and create new ways to kill people!”

    When Splinter splits in two, the cloning effect was inspired by cellular mitosis.

    “The writers almost have this dialogue with us. Sometimes, they’ll write in the script, ‘And Fleet will come up with a cool visual effect for how to kill this person.’ Or, ‘Chhiu, our fight coordinator, will make an awesome fight.’ It is a frequent topic of conversation. We’re constantly trying to be inventive and create new ways to kill people!”
    —Stephan Fleet, VFX Supervisor

    A total of 1,600 visual effects shots were created for the eight episodes by ILM, Pixomondo, MPC Toronto, Spin VFX, DNEG, Untold Studios, Luma Pictures and Rocket Science VFX. Previs was a critical part of the process. “We have John Griffith, who owns a small company called CNCPT out of Texas, and he does wonderful Unreal Engine level previs,” Fleet remarks. “On set, we have a cartoon of what is going to be done, and you’ll be amazed, specifically for action and heavy visual effects stuff, how close those shots are to the previs when we finish.” Founding Director of the Federal Bureau of Superhuman Affairs, Victoria Neuman, literally gets ripped in half by two tendrils coming out of Compound V-enhanced Billy Butcher, the leader of superhero resistance group The Boys. “The word that we like to use on this show is ‘grounded,’ and I like to say ‘grounded’ with an asterisk in this day and age because we’re grounded until we get to killing people in the craziest ways. In this case, having someone floating in the air and being ripped in half by two tendrils was all CG.”

    Multiple plates were shot to enable Simon Pegg to phase through the actor lying in a hospital bed.

    Testing can get rather elaborate. “For that end scene with Butcher’s tendrils, the room was two stories, and we were able to put the camera up high along with a bunch of blood cannons,” Fleet recalls. “When the body rips in half and explodes, there is a practical component. We rained down a bunch of real blood and guts right in front of Huey. It’s a known joke that we like to douse Jack Quaid with blood as much as possible! In this case, the special effects team led by Hudson Kenny needed to test it the day before, and I said, ‘I’ll be the guinea pig for the test.’ They covered the whole place with plastic like it was a Dexter kill room because you don’t want to destroy the set. I’m standing there in a white hazmat suit with goggles on, covered from head to toe in plastic and waiting as they’re tweaking all of these things. It sounds like World War II going on. They’re on walkie-talkies to each other, and then all of a sudden, it’s ‘Five, four, three, two, one…’ And I get exploded with blood. I wanted to see what it was like, and it’s intense.”

    “On set, we have a cartoon of what is going to be done, and you’ll be amazed, specifically for action and heavy visual effects stuff, how close those shots are to the previs when we finish.”
    —Stephan Fleet, VFX Supervisor

    The Deep has a love affair with an octopus called Ambrosius, voiced by Tilda Swinton. “It’s implied bestiality!” Fleet laughs. “I would call it more of a romance. What was fun from my perspective is that I knew what the look was going to be, so then it’s about putting in the details and the animation. One of the instincts that you always have when you’re making a sea creature that talks to a human is that you tend to want to give it human gestures and eyebrows. Erik Kripke said, ‘No. We have to find things that an octopus could do that convey the same emotion.’ That’s when ideas came in, such as putting a little The Deep toy inside the water tank. When Ambrosius is trying to have an intimate moment or connect with him, she can wrap a tentacle around that. My favorite experience doing Ambrosius was when The Deep is reading poetry to her on a bed. CG creatures touching humans is one of the more complicated things to do and make look real. Ambrosius’ tentacles reach for his arm, and it becomes an intimate moment. More than touching the skin, displacing the bedsheet as Ambrosius moved ended up becoming a lot of CG, and we had to go back and forth a few times to get that looking right; that turned out to be tricky.”

    A building is replaced by a massive crowd attending a rally being held by Homelander.

    In a twisted form of sexual foreplay, Sister Sage has The Deep perform a transorbital lobotomy on her. “Thank you, Amazon, for selling lobotomy tools as novelty items!” Fleet chuckles. “We filmed it with a lobotomy tool on set. There is a lot of safety involved in doing something like that. Obviously, you don’t want to put any performer in any situation where they come close to putting anything real near their eye. We created this half lobotomy tool and did this complicated split screen with the lobotomy tool on a teeter-totter. The Deep was in one shot and Sister Sage reacted in the other shot. To marry the two ended up being a lot of CG work. Then there are these close-ups which are full CG. I always keep a dummy head that is painted gray that I use all of the time for reference. In macrophotography I filmed this lobotomy tool going right into the eye area. I did that because the tool is chrome, so it’s reflective and has ridges. It has an interesting reflective property. I was able to see how and what part of the human eye reflects onto the tool. A lot of that shot became about realistic reflections and lighting on the tool. Then heavy CG for displacing the eye and pushing the lobotomy tool into it. That was one of the more complicated sequences that we had to achieve.”

    In order to create an intimate moment between Ambrosius and The Deep, a toy version of the superhero was placed inside of the water tank that she could wrap a tentacle around.

    “The word that we like to use on this show is ‘grounded,’ and I like to say ‘grounded’ with an asterisk in this day and age because we’re grounded until we get to killing people in the craziest ways. In this case, having someone floating in the air and being ripped in half by two tendrils was all CG.”
    —Stephan Fleet, VFX Supervisor

    Sheep and chickens embark on a violent rampage courtesy of Compound V, with the latter piercing the chest of a bodyguard belonging to Victoria Neuman. “Weirdly, that was one of our more traditional shots,” Fleet states. “What is fun about that one is I asked for real chickens as reference. The chicken flying through his chest is real. It’s our chicken wrangler in a green suit gently tossing a chicken. We blended two real plates together with some CG in the middle.” A connection was made with a sci-fi classic. “The sheep kill this bull, and we shot it in this narrow corridor of fencing. When they run, I always equated it to the Trench Run in Star Wars and looked at the sheep as TIE fighters or X-wings coming at them.” The scene was one of the scarier moments for the visual effects team. Fleet explains, “When I read the script, I thought this could be the moment where we jump the shark. For the shots where the sheep are still and scream to the camera, Untold Studios did a bunch of R&D and came up with baboon teeth. I tried to keep anything real as much as possible, but, obviously, when sheep are flying, they have to be CG. I call it the Battlestar Galactica theory, where I like to shake the camera, overshoot shots and make it sloppy when they’re in the air so you can add motion blur. Comedy also helps sell visual effects.”

    The sheep injected with Compound V develop the ability to fly and were shot in an imperfect manner to help ground the scenes.

    Once injected with Compound V, Hugh Campbell Sr. develops the ability to phase through objects, including human beings. “We called it the Bro-nut because his name in the script is Wall Street Bro,” Fleet notes. “That was a complicated motion control shot, repeating the move over and over again. We had to shoot multiple plates of Simon Pegg and the guy in the bed. Special effects and prosthetics created a dummy guy with a hole in his chest with practical blood dripping down. It was meshing it together and getting the timing right in post. On top of that, there was the CG blood immediately around Simon Pegg.” The phasing effect had to avoid appearing as a dissolve. “I had this idea of doing high-frequency vibration on the X axis loosely based on how The Flash vibrates through walls. You want everything to have a loose motivation that then helps trigger the visuals. We tried not to overcomplicate that because, ultimately, you want something like that to be quick. If you spend too much time on phasing, it can look cheesy. In our case, it was a lot of false walls. Simon Pegg is running into a greenscreen hole which we plug in with a wall or coming out of one. I went off the actor’s action, and we added a light opacity mix with some X-axis shake.”
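    As a toy illustration of the comp recipe Fleet describes — a light opacity mix plus a high-frequency X-axis shake — the sketch below jitters an actor plate horizontally each frame and blends it over the background plate at partial opacity. The image shapes, the 24 fps timing and all parameters are hypothetical, not the production setup.

```python
# Toy version of the described phasing composite: per-frame X-axis jitter
# plus a partial-opacity blend. All parameters are illustrative guesses.
import numpy as np

def phase_composite(actor, background, frame, fps=24, opacity=0.6, amp_px=4, freq_hz=6.0):
    """actor/background: (H, W, 3) float images in [0, 1]; returns the comp."""
    # High-frequency horizontal offset that changes every frame.
    offset = int(round(amp_px * np.sin(2 * np.pi * freq_hz * frame / fps)))
    shaken = np.roll(actor, offset, axis=1)   # shift the plate along X
    return opacity * shaken + (1.0 - opacity) * background

actor = np.full((4, 8, 3), 0.9)       # stand-in for the keyed actor plate
background = np.zeros((4, 8, 3))      # stand-in for the clean wall plate
frames = [phase_composite(actor, background, f) for f in range(24)]
print(frames[0].shape, round(float(frames[0].max()), 2))
```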

    Providing a different twist to the fights was the replacement of spurting blood with photoreal rubber duckies during a drug-induced hallucination.

    Homelander breaks a mirror, which emphasizes his multiple personality disorder. “The original plan was that special effects was going to pre-break a mirror, and we were going to shoot Antony Starr moving his head doing all of the performances in the different parts of the mirror,” Fleet reveals. “This was all based on a photo that my ex-brother-in-law sent me. He was walking down a street in Glendale, California, came across a broken mirror that someone had thrown out, and took a photo of himself where he had five heads in the mirror. We get there on the day, and I’m realizing that this is really complicated. Antony has to do these five different performances, and we have to deal with infinite mirrors. At the last minute, I said, ‘We have to do this on a clean mirror.’ We did it on a clear mirror and gave Antony different eyelines. The mirror break was all done in post, and we were able to cheat his head slightly and art-direct where the break crosses his chin. Editorial was able to do split screens for the timing of the dialogue.”

    “For the shots where the sheep are still and scream to the camera, Untold Studios did a bunch of R&D and came up with baboon teeth. I tried to keep anything real as much as possible, but, obviously, when sheep are flying, they have to be CG. I call it the Battlestar Galactica theory, where I like to shake the camera, overshoot shots and make it sloppy when they’re in the air so you can add motion blur. Comedy also helps sell visual effects.”
    —Stephan Fleet, VFX Supervisor

    Initially, the plan was to use a practical mirror, but creating a digital version proved to be the more effective solution.

    A different spin on the bloodbath occurs during a fight when a drugged Frenchie hallucinates as Kimiko Miyashiro goes on a killing spree. “We went back and forth with a lot of different concepts for what this hallucination would be,” Fleet remarks. “When we filmed it, we landed on Frenchie having a synesthesia moment where he’s seeing a lot of abstract colors flying in the air. We started getting into that in post and it wasn’t working. We went back to the rubber duckies, which goes back to the story of him in the bathtub. What’s in the bathtub? Rubber duckies, bubbles and water. There was a lot of physics and logic required to figure out how these rubber duckies could float out of someone’s neck. We decided on bubbles when Kimiko hits people’s heads. At one point, we had water when she got shot, but it wasn’t working, so we killed it. We probably did about 100 different versions. We got really detailed with our rubber duckie modeling because we didn’t want it to look cartoony. That took a long time.”

    Ambrosius, voiced by Tilda Swinton, gets a lot more screentime in Season 4.

    The scene in which Splinter splits in two was achieved largely in CG. “Erik threw out the words ‘cellular mitosis’ early on as something he wanted to use,” Fleet states. “We shot Rob Benedict on a greenscreen doing all of the different performances for the clones that pop out. It was a crazy amount of CG work with Houdini and particle and skin effects. We previs’d the sequence so we had specific actions. One clone comes out to the right and the other pulls backwards.” What tends to go unnoticed by many is Splinter’s clones setting up for a press conference being held by Firecracker. “It’s funny how no one brings up the 22-hour motion control shot that we had to do with Splinter on the stage, which was the most complicated shot!” Fleet observes. “We have this sweeping long shot that brings you into the room and follows Splinter as he carries a container to the stage and hands it off to a clone, and then you reveal five more of them interweaving with each other and interacting with all of these objects. It’s like a minute-long dance. First off, you have to choreograph it. We previs’d it, but then you need to get people to do it. We hired dancers and put different colored armbands on them. The camera is like another performer, and a metronome is going, which enables you to find a pace. That took about eight hours of rehearsal. Then Rob has to watch each one of their performances and mimic it to the beat. When he is handing off a box of cables, it’s to a double who is going to have to be erased and be him on the other side. They have to be almost perfect in their timing and lineup in order to take it over in visual effects and make it work.”
    #bouncing #rubber #duckies #flying #sheep
    BOUNCING FROM RUBBER DUCKIES AND FLYING SHEEP TO CLONES FOR THE BOYS SEASON 4
    By TREVOR HOGG Images courtesy of Prime Video. For those seeking an alternative to the MCU, Prime Video has two offerings of the live-action and animated variety that take the superhero genre into R-rated territory where the hands of the god-like figures get dirty, bloodied and severed. “The Boys is about the intersection of celebrity and politics using superheroes,” states Stephan Fleet, VFX Supervisor on The Boys. “Sometimes I see the news and I don’t even know we can write to catch up to it! But we try. Invincible is an intense look at an alternate DC Universe that has more grit to the superhero side of it all. On one hand, I was jealous watching Season 1 of Invincible because in animation you can do things that you can’t do in real life on a budget.” Season 4 does not tone down the blood, gore and body count. Fleet notes, “The writers almost have this dialogue with us. Sometimes, they’ll write in the script, ‘And Fleet will come up with a cool visual effect for how to kill this person.’ Or, ‘Chhiu, our fight coordinator, will make an awesome fight.’ It is a frequent topic of conversation. We’re constantly trying to be inventive and create new ways to kill people!” When Splintersplits in two, the cloning effect was inspired by cellular mitosis. “The writers almost have this dialogue with us. Sometimes, they’ll write in the script, ‘And Fleet will come up with a cool visual effect for how to kill this person.’ Or, ‘Chhiu, our fight coordinator, will make an awesome fight.’ It is a frequent topic of conversation. We’re constantly trying to be inventive and create new ways to kill people!” —Stephan Fleet, VFX Supervisor A total of 1,600 visual effects shots were created for the eight episodes by ILM, Pixomondo, MPC Toronto, Spin VFX, DNEG, Untold Studios, Luma Pictures and Rocket Science VFX. Previs was a critical part of the process. “We have John Griffith, who owns a small company called CNCPT out of Texas, and he does wonderful Unreal Engine level previs,” Fleet remarks. “On set, we have a cartoon of what is going to be done, and you’ll be amazed, specifically for action and heavy visual effects stuff, how close those shots are to the previs when we finish.” Founding Director of Federal Bureau of Superhuman Affairs, Victoria Neuman, literally gets ripped in half by two tendrils coming out of Compound V-enhanced Billy Butcher, the leader of superhero resistance group The Boys. “The word that we like to use on this show is ‘grounded,’ and I like to say ‘grounded’ with an asterisk in this day and age because we’re grounded until we get to killing people in the craziest ways. In this case, having someone floating in the air and being ripped in half by two tendrils was all CG.” Multiple plates were shot to enable Simon Pegg to phase through the actor laying in a hospital bed. Testing can get rather elaborate. “For that end scene with Butcher’s tendrils, the room was two stories, and we were able to put the camera up high along with a bunch of blood cannons,” Fleet recalls. “When the body rips in half and explodes, there is a practical component. We rained down a bunch of real blood and guts right in front of Huey. It’s a known joke that we like to douse Jack Quaid with blood as much as possible! In this case, the special effects team led by Hudson Kenny needed to test it the day before, and I said, “I’ll be the guinea pig for the test.’ They covered the whole place with plastic like it was a Dexter kill room because you don’t want to destroy the set. 
I’m standing there in a white hazmat suit with goggles on, covered from head to toe in plastic and waiting as they’re tweaking all of these things. It sounds like World War II going on. They’re on walkie talkies to each other, and then all of a sudden, it’s ‘Five, four, three, two, one…’  And I get exploded with blood. I wanted to see what it was like, and it’s intense.” “On set, we have a cartoon of what is going to be done, and you’ll be amazed, specifically for action and heavy visual effects stuff, how close those shots are to the previs when we finish.” —Stephan Fleet, VFX Supervisor The Deep has a love affair with an octopus called Ambrosius, voiced by Tilda Swinton. “It’s implied bestiality!” Fleet laughs. “I would call it more of a romance. What was fun from my perspective is that I knew what the look was going to be, so then it’s about putting in the details and the animation. One of the instincts that you always have when you’re making a sea creature that talks to a humanyou tend to want to give it human gestures and eyebrows. Erik Kripkesaid, ‘No. We have to find things that an octopus could do that conveys the same emotion.’ That’s when ideas came in, such as putting a little The Deep toy inside the water tank. When Ambrosius is trying to have an intimate moment or connect with him, she can wrap a tentacle around that. My favorite experience doing Ambrosius was when The Deep is reading poetry to her on a bed. CG creatures touching humans is one of the more complicated things to do and make look real. Ambrosius’ tentacles reach for his arm, and it becomes an intimate moment. More than touching the skin, displacing the bedsheet as Ambrosius moved ended up becoming a lot of CG, and we had to go back and forth a few times to get that looking right; that turned out to be tricky.” A building is replaced by a massive crowd attending a rally being held by Homelander. In a twisted form of sexual foreplay, Sister Sage has The Deep perform a transorbital lobotomy on her. “Thank you, Amazon for selling lobotomy tools as novelty items!” Fleet chuckles. “We filmed it with a lobotomy tool on set. There is a lot of safety involved in doing something like that. Obviously, you don’t want to put any performer in any situation where they come close to putting anything real near their eye. We created this half lobotomy tool and did this complicated split screen with the lobotomy tool on a teeter totter. The Deep wasin one shot and Sister Sage reacted in the other shot. To marry the two ended up being a lot of CG work. Then there are these close-ups which are full CG. I always keep a dummy head that is painted gray that I use all of the time for reference. In macrophotography I filmed this lobotomy tool going right into the eye area. I did that because the tool is chrome, so it’s reflective and has ridges. It has an interesting reflective property. I was able to see how and what part of the human eye reflects onto the tool. A lot of that shot became about realistic reflections and lighting on the tool. Then heavy CG for displacing the eye and pushing the lobotomy tool into it. That was one of the more complicated sequences that we had to achieve.” In order to create an intimate moment between Ambrosius and The Deep, a toy version of the superhero was placed inside of the water tank that she could wrap a tentacle around. 
“The word that we like to use on this show is ‘grounded,’ and I like to say ‘grounded’ with an asterisk in this day and age because we’re grounded until we get to killing people in the craziest ways. In this case, having someone floating in the air and being ripped in half by two tendrils was all CG.” —Stephan Fleet, VFX Supervisor Sheep and chickens embark on a violent rampage courtesy of Compound V, with the latter piercing the chest of a bodyguard belonging to Victoria Neuman. “Weirdly, that was one of our more traditional shots,” Fleet states. “What is fun about that one is I asked for real chickens as reference. The chicken flying through his chest is real. It’s our chicken wrangler in a green suit gently tossing a chicken. We blended two real plates together with some CG in the middle.” A connection was made with a sci-fi classic. “The sheep kill this bull, and we shot it in this narrow corridor of fencing. When they run, I always equated it as the Trench Run in Star Wars and looked at the sheep as TIE fighters or X-wings coming at them.” The scene was one of the scarier moments for the visual effects team. Fleet explains, “When I read the script, I thought this could be the moment where we jump the shark. For the shots where the sheep are still and scream to the camera, Untold Studios did a bunch of R&D and came up with baboon teeth. I tried to keep anything real as much as possible, but, obviously, when sheep are flying, they have to be CG. I call it the Battlestar Galactica theory, where I like to shake the camera, overshoot shots and make it sloppy when they’re in the air so you can add motion blur. Comedy also helps sell visual effects.” The sheep injected with Compound V develop the ability to fly and were shot in an imperfect manner to help ground the scenes. Once injected with Compound V, Hugh Campbell Sr. (Simon Pegg) develops the ability to phase through objects, including human beings. “We called it the Bro-nut because his name in the script is Wall Street Bro,” Fleet notes. “That was a complicated motion control shot, repeating the move over and over again. We had to shoot multiple plates of Simon Pegg and the guy in the bed. Special effects and prosthetics created a dummy guy with a hole in his chest with practical blood dripping down. It was meshing it together and getting the timing right in post. On top of that, there was the CG blood immediately around Simon Pegg.” The phasing effect had to avoid appearing as a dissolve. “I had this idea of doing high-frequency vibration on the X axis loosely based on how The Flash vibrates through walls. You want everything to have a loose motivation that then helps trigger the visuals. We tried not to overcomplicate that because, ultimately, you want something like that to be quick. If you spend too much time on phasing, it can look cheesy. In our case, it was a lot of false walls. Simon Pegg is running into a greenscreen hole which we plug in with a wall or coming out of one. I went off the actor’s action, and we added a light opacity mix with some X-axis shake.” Providing a different twist to the fights was the replacement of spurting blood with photoreal rubber duckies during a drug-induced hallucination. Homelander (Antony Starr) breaks a mirror, which emphasizes his multiple personality disorder. “The original plan was that special effects was going to pre-break a mirror, and we were going to shoot Antony Starr moving his head doing all of the performances in the different parts of the mirror,” Fleet reveals.
“This was all based on a photo that my ex-brother-in-law sent me. He was walking down a street in Glendale, California, came across a broken mirror that someone had thrown out, and took a photo of himself where he had five heads in the mirror. We get there on the day, and I’m realizing that this is really complicated. Antony has to do these five different performances, and we have to deal with infinite mirrors. At the last minute, I said, ‘We have to do this on a clean mirror.’ We did it on a clear mirror and gave Antony different eyelines. The mirror break was all done in post, and we were able to cheat his head slightly and art-direct where the break crosses his chin. Editorial was able to do split screens for the timing of the dialogue.” “For the shots where the sheep are still and scream to the camera, Untold Studios did a bunch of R&D and came up with baboon teeth. I tried to keep anything real as much as possible, but, obviously, when sheep are flying, they have to be CG. I call it the Battlestar Galactica theory, where I like to shake the camera, overshoot shots and make it sloppy when they’re in the air so you can add motion blur. Comedy also helps sell visual effects.” —Stephan Fleet, VFX Supervisor Initially, the plan was to use a practical mirror, but creating a digital version proved to be the more effective solution. A different spin on the bloodbath occurs during a fight when a drugged Frenchie (Tomer Capone) hallucinates as Kimiko Miyashiro (Karen Fukuhara) goes on a killing spree. “We went back and forth with a lot of different concepts for what this hallucination would be,” Fleet remarks. “When we filmed it, we landed on Frenchie having a synesthesia moment where he’s seeing a lot of abstract colors flying in the air. We started getting into that in post and it wasn’t working. We went back to the rubber duckies, which goes back to the story of him in the bathtub. What’s in the bathtub? Rubber duckies, bubbles and water. There was a lot of physics and logic required to figure out how these rubber duckies could float out of someone’s neck. We decided on bubbles when Kimiko hits people’s heads. At one point, we had water when she got shot, but it wasn’t working, so we killed it. We probably did about 100 different versions. We got really detailed with our rubber duckie modeling because we didn’t want it to look cartoony. That took a long time.” Ambrosius, voiced by Tilda Swinton, gets a lot more screentime in Season 4. The moment when Splinter (Rob Benedict) splits in two was achieved heavily in CG. “Erik threw out the words ‘cellular mitosis’ early on as something he wanted to use,” Fleet states. “We shot Rob Benedict on a greenscreen doing all of the different performances for the clones that pop out. It was a crazy amount of CG work with Houdini and particle and skin effects. We previs’d the sequence so we had specific actions. One clone comes out to the right and the other pulls backwards.” What tends to go unnoticed by many is Splinter’s clones setting up for a press conference being held by Firecracker (Valorie Curry). “It’s funny how no one brings up the 22-hour motion control shot that we had to do with Splinter on the stage, which was the most complicated shot!” Fleet observes. “We have this sweeping long shot that brings you into the room and follows Splinter as he carries a container to the stage and hands it off to a clone, and then you reveal five more of them interweaving each other and interacting with all of these objects. It’s like a minute-long dance. First off, you have to choreograph it.
We previs’d it, but then you need to get people to do it. We hired dancers and put different colored armbands on them. The camera is like another performer, and a metronome is going, which enables you to find a pace. That took about eight hours of rehearsal. Then Rob has to watch each one of their performances and mimic it to the beat. When he is handing off a box of cables, it’s to a double who is going to have to be erased and be him on the other side. They have to be almost perfect in their timing and lineup in order to take it over in visual effects and make it work.”
  • Startup Uses NVIDIA RTX-Powered Generative AI to Make Coolers, Cooler

    Mark Theriault founded the startup FITY envisioning a line of clever cooling products: cold drink holders that come with freezable pucks to keep beverages cold for longer without the mess of ice. The entrepreneur started with 3D prints of products in his basement, building one unit at a time, before eventually scaling to mass production.
    Founding a consumer product company from scratch was a tall order for a single person. Going from preliminary sketches to production-ready designs was a major challenge. To bring his creative vision to life, Theriault relied on AI and his NVIDIA GeForce RTX-equipped system. For him, AI isn’t just a tool — it’s an entire pipeline to help him accomplish his goals. Read more about his workflow below.
    Plus, GeForce RTX 5050 laptops start arriving today at retailers worldwide, from $999. GeForce RTX 5050 Laptop GPUs feature 2,560 NVIDIA Blackwell CUDA cores, fifth-generation AI Tensor Cores, fourth-generation RT Cores, a ninth-generation NVENC encoder and a sixth-generation NVDEC decoder.
    In addition, NVIDIA’s Plug and Play: Project G-Assist Plug-In Hackathon — running virtually through Wednesday, July 16 — invites developers to explore AI and build custom G-Assist plug-ins for a chance to win prizes. Save the date for the G-Assist Plug-In webinar on Wednesday, July 9, from 10-11 a.m. PT, to learn more about Project G-Assist capabilities and fundamentals, and to participate in a live Q&A session.
    From Concept to Completion
    To create his standout products, Theriault tinkers with potential FITY Flex cooler designs using traditional methods, from sketch to computer-aided design to rapid prototyping, until he finds the right vision. A unique aspect of the FITY Flex design is that it can be customized with fun, popular shoe charms.
    For packaging design inspiration, Theriault uses his preferred text-to-image generative AI model for prototyping, Stable Diffusion XL — which runs 60% faster with the NVIDIA TensorRT software development kit — using the modular, node-based interface ComfyUI.
    ComfyUI gives users granular control over every step of the generation process — prompting, sampling, model loading, image conditioning and post-processing. It’s ideal for advanced users like Theriault who want to customize how images are generated.
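    Outside ComfyUI, the same stages can be scripted directly. The snippet below is a minimal sketch, assuming the Hugging Face diffusers library rather than Theriault's actual node graph; the prompt and output filename are illustrative.

```python
# Minimal sketch (not FITY's actual ComfyUI graph): scripted SDXL
# prototyping with Hugging Face diffusers, covering the same stages a
# ComfyUI graph wires up visually: model loading, prompting, sampling.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,  # half precision to fit consumer VRAM
).to("cuda")

image = pipe(
    prompt="drink cooler with freezable puck, packaging concept, studio lighting",
    num_inference_steps=30,  # sampling steps; fewer means faster drafts
    guidance_scale=7.0,      # how strongly the prompt steers sampling
).images[0]
image.save("packaging_concept.png")  # illustrative output path
```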
    Theriault’s use of AI results in a complete computer graphics-based ad campaign. Image courtesy of FITY.
    NVIDIA RTX and GeForce RTX GPUs based on the NVIDIA Blackwell architecture include fifth-generation Tensor Cores designed to accelerate AI and deep learning workloads. These GPUs work with CUDA optimizations in PyTorch to seamlessly accelerate ComfyUI, reducing generation time on FLUX.1-dev, an image generation model from Black Forest Labs, from two minutes per image on the Mac M3 Ultra to about four seconds on the GeForce RTX 5090 desktop GPU.
    ComfyUI can also add ControlNets — AI models that help control image generation — that Theriault uses for tasks like guiding human poses, setting compositions via depth mapping and converting scribbles to images.
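    As a hedged illustration of depth-guided composition, the sketch below pairs SDXL with a public depth ControlNet checkpoint via diffusers; the depth-map path is hypothetical, and this is not FITY's production setup.

```python
# Hedged sketch of depth-guided generation, assuming diffusers and a
# public SDXL depth ControlNet checkpoint; "cooler_depth.png" is a
# hypothetical precomputed depth map.
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-depth-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

depth_map = load_image("cooler_depth.png")
image = pipe(
    prompt="drink cooler on a kitchen counter, product photography",
    image=depth_map,                    # composition comes from the depth map
    controlnet_conditioning_scale=0.7,  # how strictly to follow it
).images[0]
```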
    Theriault even creates his own fine-tuned models to keep his style consistent. He used low-rank adaptation (LoRA) models — small, efficient adapters inserted into specific layers of the network — enabling hyper-customized generation with minimal compute cost.
    LoRA models allow Theriault to ideate on visuals quickly. Image courtesy of FITY.
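    The LoRA mechanism itself is compact enough to show. The sketch below is a generic PyTorch illustration, not FITY's training code: a frozen linear layer gains a trainable low-rank update, so only the small A and B matrices are optimized.

```python
# Generic PyTorch sketch of low-rank adaptation (LoRA), not FITY's code:
# the pretrained weight W is frozen and a rank-r update is learned, so
# the effective weight is W + (alpha / r) * B @ A.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained layer
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Base path plus the low-rank update; only A and B train.
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

layer = LoRALinear(nn.Linear(768, 768))
out = layer(torch.randn(4, 768))  # shape (4, 768)
```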
    “Over the last few months, I’ve been shifting from AI-assisted computer graphics renders to fully AI-generated product imagery using a custom Flux LoRA I trained in house. My RTX 4080 SUPER GPU has been essential for getting the performance I need to train and iterate quickly.” – Mark Theriault, founder of FITY 

    Theriault also taps into generative AI to create marketing assets like FITY Flex product packaging. He uses FLUX.1, which excels at generating legible text within images, addressing a common challenge in text-to-image models.
    Though FLUX.1 models can typically consume over 23GB of VRAM, NVIDIA has collaborated with Black Forest Labs to help reduce the size of these models using quantization — a technique that reduces model size while maintaining quality. The models were then accelerated with TensorRT, which provides an up to 2x speedup over PyTorch.
    To simplify using these models in ComfyUI, NVIDIA created the FLUX.1 NIM microservice, a containerized version of FLUX.1 that can be loaded in ComfyUI and enables FP4 quantization and TensorRT support. Combined, the models come down to just over 11GB of VRAM, and performance improves by 2.5x.
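    Those VRAM figures are easy to sanity-check with weight-only arithmetic. The sketch below assumes a roughly 12-billion-parameter transformer for FLUX.1-dev, which is an approximation; real footprints also include the text encoders, VAE and activations.

```python
# Weight-only VRAM estimate; assumes ~12B parameters for FLUX.1-dev's
# transformer (an approximation) and ignores encoders, VAE, activations.
params = 12e9
for name, bits in [("FP16", 16), ("FP8", 8), ("FP4", 4)]:
    gib = params * bits / 8 / 2**30
    print(f"{name}: ~{gib:.1f} GiB for weights alone")
# FP16: ~22.4 GiB, consistent with the >23GB figure once encoders are added
# FP4:  ~5.6 GiB, leaving headroom within the ~11GB quantized total
```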
    Theriault uses Blender’s Cycles renderer to render out final files. For 3D workflows, NVIDIA offers the AI Blueprint for 3D-guided generative AI to ease the positioning and composition of 3D images, so anyone interested in this method can quickly get started.
    Photorealistic renders. Image courtesy of FITY.
    Finally, Theriault uses large language models to generate marketing copy — tailored for search engine optimization, tone and storytelling — as well as to complete his patent and provisional applications, work that usually costs thousands of dollars in legal fees and considerable time.
    Generative AI helps Theriault create promotional materials like the above. Image courtesy of FITY.
    “As a one-man band with a ton of content to generate, having on-the-fly generation capabilities for my product designs really helps speed things up.” – Mark Theriault, founder of FITY

    Every texture, every word, every photo, every accessory was a micro-decision, Theriault said. AI helped him survive the “death by a thousand cuts” that can stall solo startup founders, he added.
    Each week, the RTX AI Garage blog series features community-driven AI innovations and content for those looking to learn more about NVIDIA NIM microservices and AI Blueprints, as well as building AI agents, creative workflows, digital humans, productivity apps and more on AI PCs and workstations. 
    Plug in to NVIDIA AI PC on Facebook, Instagram, TikTok and X — and stay informed by subscribing to the RTX AI PC newsletter.
    Follow NVIDIA Workstation on LinkedIn and X. 
    See notice regarding software product information.
  • HOW DISGUISE BUILT OUT THE VIRTUAL ENVIRONMENTS FOR A MINECRAFT MOVIE

    By TREVOR HOGG

    Images courtesy of Warner Bros. Pictures.

    Rather than a world constructed around photorealistic pixels, a video game created by Markus Persson has taken the boxier 3D voxel route, which has become its signature aesthetic, and sparked an international phenomenon that finally gets adapted into a feature with the release of A Minecraft Movie. Brought onboard to help filmmaker Jared Hess create the environments that the cast of Jason Momoa, Jack Black, Sebastian Hansen, Emma Myers and Danielle Brooks find themselves inhabiting was Disguise, under the direction of Production VFX Supervisor Dan Lemmon.

    “As the Senior Unreal Artist within the Virtual Art Department (VAD) on Minecraft, I experienced the full creative workflow. What stood out most was how deeply the VAD was embedded across every stage of production. We weren’t working in isolation. From the production designer and director to the VFX supervisor and DP, the VAD became a hub for collaboration.”
    —Talia Finlayson, Creative Technologist, Disguise

    Interior and exterior environments had to be created, such as the shop owned by Steve.

    “Prior to working on A Minecraft Movie, I held more technical roles, like serving as the Virtual Production LED Volume Operator on a project for Apple TV+ and Paramount Pictures,” notes Talia Finlayson, Creative Technologist for Disguise. “But as the Senior Unreal Artist within the Virtual Art Department (VAD) on Minecraft, I experienced the full creative workflow. What stood out most was how deeply the VAD was embedded across every stage of production. We weren’t working in isolation. From the production designer and director to the VFX supervisor and DP, the VAD became a hub for collaboration.” The project provided new opportunities. “I’ve always loved the physicality of working with an LED volume, both for the immersion it provides and the way that seeing the environment helps shape an actor’s performance,” notes Laura Bell, Creative Technologist for Disguise. “But for A Minecraft Movie, we used Simulcam instead, and it was an incredible experience to live-composite an entire Minecraft world in real-time, especially with nothing on set but blue curtains.”

    Set designs originally created by the art department in Rhinoceros 3D were transformed into fully navigable 3D environments within Unreal Engine. “These scenes were far more than visualizations,” Finlayson remarks. “They were interactive tools used throughout the production pipeline. We would ingest 3D models and concept art, clean and optimize geometry using tools like Blender, Cinema 4D or Maya, then build out the world in Unreal Engine. This included applying materials, lighting and extending environments. These Unreal scenes we created were vital tools across the production and were used for a variety of purposes such as enabling the director to explore shot compositions, block scenes and experiment with camera movement in a virtual space, as well as passing along Unreal Engine scenes to the visual effects vendors so they could align their digital environments and set extensions with the approved production layouts.”
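    As a rough illustration of that ingest step, the snippet below sketches a batch FBX import using Unreal Editor's Python scripting API; the source and destination paths are hypothetical, and Disguise's actual tooling is not documented here.

```python
# Hypothetical batch import of cleaned FBX geometry into Unreal Engine,
# run inside the editor's Python environment; paths are illustrative.
import unreal

def import_fbx(filename: str, destination: str = "/Game/VAD/Midport"):
    task = unreal.AssetImportTask()
    task.filename = filename        # FBX exported from Blender/C4D/Maya
    task.destination_path = destination
    task.automated = True           # suppress import dialogs
    task.save = True
    unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])
    return list(task.imported_object_paths)

print(import_fbx("D:/exports/steves_shop.fbx"))
```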

    A virtual exploration of Steve’s shop in Midport Village.

    Certain elements have to be kept in mind when constructing virtual environments. “When building virtual environments, you need to consider what can actually be built, how actors and cameras will move through the space, and what’s safe and practical on set,” Bell observes. “Outside the areas where strict accuracy is required, you want the environments to blend naturally with the original designs from the art department and support the story, creating a space that feels right for the scene, guides the audience’s eye and sets the right tone. Things like composition, lighting and small environmental details can be really fun to work on, but also serve as beautiful additions to help enrich a story.”

    “I’ve always loved the physicality of working with an LED volume, both for the immersion it provides and the way that seeing the environment helps shape an actor’s performance. But for A Minecraft Movie, we used Simulcam instead, and it was an incredible experience to live-composite an entire Minecraft world in real-time, especially with nothing on set but blue curtains.”
    —Laura Bell, Creative Technologist, Disguise

    Among the buildings that had to be created for Midport Village was Steve’s Lava Chicken Shack.

    Concept art was provided that served as visual touchstones. “We received concept art provided by the amazing team of concept artists,” Finlayson states. “Not only did they send us 2D artwork, but they often shared the 3D models they used to create those visuals. These models were incredibly helpful as starting points when building out the virtual environments in Unreal Engine; they gave us a clear sense of composition and design intent. Storyboards were also a key part of the process and were constantly being updated as the project evolved. Having access to the latest versions allowed us to tailor the virtual environments to match camera angles, story beats and staging. Sometimes we would also help the storyboard artists by sending through images of the Unreal Engine worlds to help them geographically position themselves in the worlds and aid in their storyboarding.” At times, the video game assets came in handy. “Exteriors often involved large-scale landscapes and stylized architectural elements, which had to feel true to the Minecraft world,” Finlayson explains. “In some cases, we brought in geometry from the game itself to help quickly block out areas. For example, we did this for the Elytra Flight Chase sequence, which takes place through a large canyon.”

    Flexibility was critical. “A key technical challenge we faced was ensuring that the Unreal levels were built in a way that allowed for fast and flexible iteration,” Finlayson remarks. “Since our environments were constantly being reviewed by the director, production designer, DP and VFX supervisor, we needed to be able to respond quickly to feedback, sometimes live during a review session. To support this, we had to keep our scenes modular and well-organized; that meant breaking environments down into manageable components and maintaining clean naming conventions. By setting up the levels this way, we could make layout changes, swap assets or adjust lighting on the fly without breaking the scene or slowing down the process.” Production schedules influence the workflows, pipelines and techniques. “No two projects will ever feel exactly the same,” Bell notes. “For example, Pat Younis adapted his typical VR setup to allow scene reviews using a PS5 controller, which made it much more comfortable and accessible for the director. On a more technical side, because everything was cubes and voxels, my Blender workflow ended up being way heavier on the re-mesh modifier than usual, definitely not something I’ll run into again anytime soon!”
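    As a concrete example of that remesh-heavy cleanup, the bpy sketch below applies a cube-aligned Remesh modifier inside Blender; the object name is hypothetical.

```python
# Hypothetical cleanup pass run inside Blender: a Remesh modifier in
# BLOCKS mode snaps geometry to cubes, a natural fit for voxel-styled
# Minecraft assets. The object name is illustrative.
import bpy

obj = bpy.data.objects["midport_archway"]
mod = obj.modifiers.new(name="VoxelRemesh", type='REMESH')
mod.mode = 'BLOCKS'      # cube-aligned remeshing
mod.octree_depth = 6     # voxel grid resolution
bpy.context.view_layer.objects.active = obj
bpy.ops.object.modifier_apply(modifier=mod.name)
```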

    A virtual study and final still of the cast members standing outside of the Lava Chicken Shack.

    “We received concept art provided by the amazing team of concept artists. Not only did they send us 2D artwork, but they often shared the 3D models they used to create those visuals. These models were incredibly helpful as starting points when building out the virtual environments in Unreal Engine; they gave us a clear sense of composition and design intent. Storyboards were also a key part of the process and were constantly being updated as the project evolved. Having access to the latest versions allowed us to tailor the virtual environments to match camera angles, story beats and staging.”
    —Talia Finlayson, Creative Technologist, Disguise

    The design and composition of virtual environments tended to remain consistent throughout principal photography. “The only major design change I can recall was the removal of a second story from a building in Midport Village to allow the camera crane to get a clear shot of the chicken perched above Steve’s lava chicken shack,” Finlayson remarks. “I would agree that Midport Village likely went through the most iterations,” Bell responds. “The archway, in particular, became a visual anchor across different levels. We often placed it off in the distance to help orient both ourselves and the audience and show how far the characters had traveled. I remember rebuilding the stairs leading up to the rampart five or six times, using different configurations based on the physically constructed stairs. This was because there were storyboarded sequences of the film’s characters, Henry, Steve and Garrett, being chased by piglins, and the action needed to match what could be achieved practically on set.”

    Virtually conceptualizing the layout of Midport Village.

    Complex virtual environments were constructed for the final battle and the various forest scenes throughout the movie. “What made these particularly challenging was the way physical set pieces were repurposed and repositioned to serve multiple scenes and locations within the story,” Finlayson reveals. “The same built elements had to appear in different parts of the world, so we had to carefully adjust the virtual environments to accommodate those different positions.” Bell is in agreement with her colleague. “The forest scenes were some of the more complex environments to manage. It could get tricky, particularly when the filming schedule shifted. There was one day on set where the order of shots changed unexpectedly, and because the physical sets looked so similar, I initially loaded a different perspective than planned. Fortunately, thanks to our workflow, Lindsay George and I were able to quickly open the recorded sequence in Unreal Engine and swap out the correct virtual environment for the live composite without any disruption to the shoot.”

    An example of the virtual and final version of the Woodland Mansion.

    “Midport Village likely went through the most iterations. The archway, in particular, became a visual anchor across different levels. We often placed it off in the distance to help orient both ourselves and the audience and show how far the characters had traveled.”
    —Laura Bell, Creative Technologist, Disguise

    Extensive detail was given to the center of the sets where the main action unfolds. “For these areas, we received prop layouts from the prop department to ensure accurate placement and alignment with the physical builds,” Finlayson explains. “These central environments were used heavily for storyboarding, blocking and department reviews, so precision was essential. As we moved further out from the practical set, the environments became more about blocking and spatial context rather than fine detail. We worked closely with Production Designer Grant Major to get approval on these extended environments, making sure they aligned with the overall visual direction. We also used creatures and crowd stand-ins provided by the visual effects team. These gave a great sense of scale and placement during early planning stages and allowed other departments to better understand how these elements would be integrated into the scenes.”

    Cast members Sebastian Hansen, Danielle Brooks and Emma Myers stand in front of the Earth Portal Plateau environment.

    Doing a virtual scale study of the Mountainside.

    Practical requirements like camera moves, stunt choreography and crane setups had an impact on the creation of virtual environments. “Sometimes we would adjust layouts slightly to open up areas for tracking shots or rework spaces to accommodate key action beats, all while keeping the environment feeling cohesive and true to the Minecraft world,” Bell states. “Simulcam bridged the physical and virtual worlds on set, overlaying Unreal Engine environments onto live-action scenes in real-time, giving the director, DP and other department heads a fully-realized preview of shots and enabling precise, informed decisions during production. It also recorded critical production data like camera movement paths, which was handed over to the post-production team to give them the exact tracks they needed, streamlining the visual effects pipeline.”

    Piglins cause mayhem during the Wingsuit Chase.

    Virtual versions of the exterior and interior of the Safe House located in the Enchanted Woods.

    “One of the biggest challenges for me was managing constant iteration while keeping our environments clean, organized and easy to update,” Finlayson notes. “Because the virtual sets were reviewed regularly by the director and other heads of departments, feedback was often implemented live in the room. This meant the environments had to be flexible. But overall, this was an amazing project to work on, and I am so grateful for the incredible VAD team I was a part of – Heide Nichols, Pat Younis, Jake Tuck and Laura. Everyone on this team worked so collaboratively, seamlessly and in such a supportive way that I never felt like I was out of my depth.” There was another challenge that is more to do with familiarity. “Having a VAD on a film is still a relatively new process in production,” Bell states. “There were moments where other departments were still learning what we did and how to best work with us. That said, the response was overwhelmingly positive. I remember being on set at the Simulcam station and seeing how excited people were to look at the virtual environments as they walked by, often stopping for a chat and a virtual tour. Instead of seeing just a huge blue curtain, they were stoked to see something Minecraft and could get a better sense of what they were actually shooting.”
“For these areas, we received prop layouts from the prop department to ensure accurate placement and alignment with the physical builds,” Finlayson explains. “These central environments were used heavily for storyboarding, blocking and department reviews, so precision was essential. As we moved further out from the practical set, the environments became more about blocking and spatial context rather than fine detail. We worked closely with Production Designer Grant Major to get approval on these extended environments, making sure they aligned with the overall visual direction. We also used creatures and crowd stand-ins provided by the visual effects team. These gave a great sense of scale and placement during early planning stages and allowed other departments to better understand how these elements would be integrated into the scenes.” Cast members Sebastian Hansen, Danielle Brooks and Emma Myers stand in front of the Earth Portal Plateau environment. Doing a virtual scale study of the Mountainside. Practical requirements like camera moves, stunt choreography and crane setups had an impact on the creation of virtual environments. “Sometimes we would adjust layouts slightly to open up areas for tracking shots or rework spaces to accommodate key action beats, all while keeping the environment feeling cohesive and true to the Minecraft world,” Bell states. “Simulcam bridged the physical and virtual worlds on set, overlaying Unreal Engine environments onto live-action scenes in real-time, giving the director, DP and other department heads a fully-realized preview of shots and enabling precise, informed decisions during production. It also recorded critical production data like camera movement paths, which was handed over to the post-production team to give them the exact tracks they needed, streamlining the visual effects pipeline.” Piglots cause mayhem during the Wingsuit Chase. Virtual versions of the exterior and interior of the Safe House located in the Enchanted Woods. “One of the biggest challenges for me was managing constant iteration while keeping our environments clean, organized and easy to update,” Finlayson notes. “Because the virtual sets were reviewed regularly by the director and other heads of departments, feedback was often implemented live in the room. This meant the environments had to be flexible. But overall, this was an amazing project to work on, and I am so grateful for the incredible VAD team I was a part of – Heide Nichols, Pat Younis, Jake Tuckand Laura. Everyone on this team worked so collaboratively, seamlessly and in such a supportive way that I never felt like I was out of my depth.” There was another challenge that is more to do with familiarity. “Having a VAD on a film is still a relatively new process in production,” Bell states. “There were moments where other departments were still learning what we did and how to best work with us. That said, the response was overwhelmingly positive. I remember being on set at the Simulcam station and seeing how excited people were to look at the virtual environments as they walked by, often stopping for a chat and a virtual tour. Instead of seeing just a huge blue curtain, they were stoked to see something Minecraft and could get a better sense of what they were actually shooting.” #how #disguise #built #out #virtual
    WWW.VFXVOICE.COM
    HOW DISGUISE BUILT OUT THE VIRTUAL ENVIRONMENTS FOR A MINECRAFT MOVIE
By TREVOR HOGG
Images courtesy of Warner Bros. Pictures.

Rather than a world built from photorealistic pixels, the video game created by Markus Persson took the boxier 3D voxel route, which became its signature aesthetic and sparked an international phenomenon that finally gets adapted into a feature with the release of A Minecraft Movie. Brought onboard to help filmmaker Jared Hess create the environments that the cast of Jason Momoa, Jack Black, Sebastian Hansen, Emma Myers and Danielle Brooks find themselves inhabiting was Disguise, under the direction of Production VFX Supervisor Dan Lemmon.

“[A]s the Senior Unreal Artist within the Virtual Art Department (VAD) on Minecraft, I experienced the full creative workflow. What stood out most was how deeply the VAD was embedded across every stage of production. We weren’t working in isolation. From the production designer and director to the VFX supervisor and DP, the VAD became a hub for collaboration.”
—Talia Finlayson, Creative Technologist, Disguise

Interior and exterior environments had to be created, such as the shop owned by Steve (Jack Black). “Prior to working on A Minecraft Movie, I held more technical roles, like serving as the Virtual Production LED Volume Operator on a project for Apple TV+ and Paramount Pictures,” notes Talia Finlayson, Creative Technologist for Disguise. “But as the Senior Unreal Artist within the Virtual Art Department (VAD) on Minecraft, I experienced the full creative workflow. What stood out most was how deeply the VAD was embedded across every stage of production. We weren’t working in isolation. From the production designer and director to the VFX supervisor and DP, the VAD became a hub for collaboration.”

The project provided new opportunities. “I’ve always loved the physicality of working with an LED volume, both for the immersion it provides and the way that seeing the environment helps shape an actor’s performance,” notes Laura Bell, Creative Technologist for Disguise. “But for A Minecraft Movie, we used Simulcam instead, and it was an incredible experience to live-composite an entire Minecraft world in real-time, especially with nothing on set but blue curtains.”

Set designs originally created by the art department in Rhinoceros 3D were transformed into fully navigable 3D environments within Unreal Engine. “These scenes were far more than visualizations,” Finlayson remarks. “They were interactive tools used throughout the production pipeline. We would ingest 3D models and concept art, clean and optimize geometry using tools like Blender, Cinema 4D or Maya, then build out the world in Unreal Engine. This included applying materials, lighting and extending environments. These Unreal scenes we created were vital tools across the production and were used for a variety of purposes, such as enabling the director to explore shot compositions, block scenes and experiment with camera movement in a virtual space, as well as passing along Unreal Engine scenes to the visual effects vendors so they could align their digital environments and set extensions with the approved production layouts.”
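The ingest-and-clean pass Finlayson describes is the kind of step that is easy to script. Below is a minimal sketch of what it might look like in Blender’s Python API (bpy): merge duplicate vertices, make normals consistent, and re-export for Unreal. The file paths and merge threshold are illustrative placeholders, not values from the production.

import bpy

SRC = "/tmp/art_dept/steves_shop.fbx"   # hypothetical export from the art department
DST = "/tmp/vad/steves_shop_clean.fbx"  # cleaned asset for Unreal Engine ingest

# Start from an empty scene so only the imported set geometry remains.
bpy.ops.object.select_all(action='SELECT')
bpy.ops.object.delete()

bpy.ops.import_scene.fbx(filepath=SRC)

for obj in bpy.context.scene.objects:
    if obj.type != 'MESH':
        continue
    bpy.context.view_layer.objects.active = obj
    bpy.ops.object.mode_set(mode='EDIT')
    bpy.ops.mesh.select_all(action='SELECT')
    bpy.ops.mesh.remove_doubles(threshold=0.001)        # merge stray vertices
    bpy.ops.mesh.normals_make_consistent(inside=False)  # fix flipped faces
    bpy.ops.object.mode_set(mode='OBJECT')

bpy.ops.object.select_all(action='SELECT')
bpy.ops.export_scene.fbx(filepath=DST, use_selection=True)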
A virtual exploration of Steve’s shop in Midport Village.

Certain elements have to be kept in mind when constructing virtual environments. “When building virtual environments, you need to consider what can actually be built, how actors and cameras will move through the space, and what’s safe and practical on set,” Bell observes. “Outside the areas where strict accuracy is required, you want the environments to blend naturally with the original designs from the art department and support the story, creating a space that feels right for the scene, guides the audience’s eye and sets the right tone. Things like composition, lighting and small environmental details can be really fun to work on, but also serve as beautiful additions to help enrich a story.”

“I’ve always loved the physicality of working with an LED volume, both for the immersion it provides and the way that seeing the environment helps shape an actor’s performance. But for A Minecraft Movie, we used Simulcam instead, and it was an incredible experience to live-composite an entire Minecraft world in real-time, especially with nothing on set but blue curtains.”
—Laura Bell, Creative Technologist, Disguise

Among the buildings that had to be created for Midport Village was Steve’s (Jack Black) Lava Chicken Shack. Concept art was provided that served as visual touchstones. “We received concept art provided by the amazing team of concept artists,” Finlayson states. “Not only did they send us 2D artwork, but they often shared the 3D models they used to create those visuals. These models were incredibly helpful as starting points when building out the virtual environments in Unreal Engine; they gave us a clear sense of composition and design intent. Storyboards were also a key part of the process and were constantly being updated as the project evolved. Having access to the latest versions allowed us to tailor the virtual environments to match camera angles, story beats and staging. Sometimes we would also help the storyboard artists by sending through images of the Unreal Engine worlds to help them geographically position themselves in the worlds and aid in their storyboarding.”

At times, the video game assets came in handy. “Exteriors often involved large-scale landscapes and stylized architectural elements, which had to feel true to the Minecraft world,” Finlayson explains. “In some cases, we brought in geometry from the game itself to help quickly block out areas. For example, we did this for the Elytra Flight Chase sequence, which takes place through a large canyon.”

Flexibility was critical. “A key technical challenge we faced was ensuring that the Unreal levels were built in a way that allowed for fast and flexible iteration,” Finlayson remarks. “Since our environments were constantly being reviewed by the director, production designer, DP and VFX supervisor, we needed to be able to respond quickly to feedback, sometimes live during a review session. To support this, we had to keep our scenes modular and well-organized; that meant breaking environments down into manageable components and maintaining clean naming conventions. By setting up the levels this way, we could make layout changes, swap assets or adjust lighting on the fly without breaking the scene or slowing down the process.”
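Finlayson’s point about modular scenes and clean naming conventions is the sort of rule a small editor script can police. The sketch below uses Unreal’s editor Python API to flag actors whose labels stray from a prefix scheme; the prefixes themselves are invented for the example, not the production’s actual convention.

import unreal

# Hypothetical VAD prefix convention, for illustration only.
PREFIXES = ("SM_", "BP_", "Light_", "Cam_")

actors = unreal.EditorLevelLibrary.get_all_level_actors()
offenders = [a for a in actors
             if not a.get_actor_label().startswith(PREFIXES)]

for actor in offenders:
    unreal.log_warning(
        f"Naming check: '{actor.get_actor_label()}' does not match the prefix convention")
unreal.log(f"{len(actors)} actors scanned, {len(offenders)} flagged")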
Production schedules influence the workflows, pipelines and techniques. “No two projects will ever feel exactly the same,” Bell notes. “For example, Pat Younis [VAD Art Director] adapted his typical VR setup to allow scene reviews using a PS5 controller, which made it much more comfortable and accessible for the director. On a more technical side, because everything was cubes and voxels, my Blender workflow ended up being way heavier on the re-mesh modifier than usual, definitely not something I’ll run into again anytime soon!”

A virtual study and final still of the cast members standing outside of the Lava Chicken Shack.

“We received concept art provided by the amazing team of concept artists. Not only did they send us 2D artwork, but they often shared the 3D models they used to create those visuals. These models were incredibly helpful as starting points when building out the virtual environments in Unreal Engine; they gave us a clear sense of composition and design intent. Storyboards were also a key part of the process and were constantly being updated as the project evolved. Having access to the latest versions allowed us to tailor the virtual environments to match camera angles, story beats and staging.”
—Talia Finlayson, Creative Technologist, Disguise

The design and composition of virtual environments tended to remain consistent throughout principal photography. “The only major design change I can recall was the removal of a second story from a building in Midport Village to allow the camera crane to get a clear shot of the chicken perched above Steve’s lava chicken shack,” Finlayson remarks.

“I would agree that Midport Village likely went through the most iterations,” Bell responds. “The archway, in particular, became a visual anchor across different levels. We often placed it off in the distance to help orient both ourselves and the audience and show how far the characters had traveled. I remember rebuilding the stairs leading up to the rampart five or six times, using different configurations based on the physically constructed stairs. This was because there were storyboarded sequences of the film’s characters, Henry, Steve and Garrett, being chased by piglins, and the action needed to match what could be achieved practically on set.”

Virtually conceptualizing the layout of Midport Village.

Complex virtual environments were constructed for the final battle and the various forest scenes throughout the movie. “What made these particularly challenging was the way physical set pieces were repurposed and repositioned to serve multiple scenes and locations within the story,” Finlayson reveals. “The same built elements had to appear in different parts of the world, so we had to carefully adjust the virtual environments to accommodate those different positions.”

Bell is in agreement with her colleague. “The forest scenes were some of the more complex environments to manage. It could get tricky, particularly when the filming schedule shifted. There was one day on set where the order of shots changed unexpectedly, and because the physical sets looked so similar, I initially loaded a different perspective than planned. Fortunately, thanks to our workflow, Lindsay George [VP Tech] and I were able to quickly open the recorded sequence in Unreal Engine and swap out the correct virtual environment for the live composite without any disruption to the shoot.”

An example of the virtual and final version of the Woodland Mansion.

“Midport Village likely went through the most iterations. The archway, in particular, became a visual anchor across different levels. We often placed it off in the distance to help orient both ourselves and the audience and show how far the characters had traveled.”
—Laura Bell, Creative Technologist, Disguise
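Bell’s earlier remark about leaning unusually hard on Blender’s Remesh modifier maps directly onto the film’s cube-based look: in Blocks mode, the modifier rebuilds a mesh out of axis-aligned cells. A minimal sketch of that voxelizing pass, with an illustrative octree depth rather than a production setting:

import bpy

# Apply a Blocks-mode Remesh modifier to every selected mesh,
# pulling imported geometry toward a Minecraft-style cube look.
for obj in bpy.context.selected_objects:
    if obj.type != 'MESH':
        continue
    bpy.context.view_layer.objects.active = obj
    mod = obj.modifiers.new(name="Voxelize", type='REMESH')
    mod.mode = 'BLOCKS'     # cube-shaped output cells
    mod.octree_depth = 6    # higher depth = smaller blocks (illustrative value)
    bpy.ops.object.modifier_apply(modifier=mod.name)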
Extensive detail was given to the center of the sets where the main action unfolds. “For these areas, we received prop layouts from the prop department to ensure accurate placement and alignment with the physical builds,” Finlayson explains. “These central environments were used heavily for storyboarding, blocking and department reviews, so precision was essential. As we moved further out from the practical set, the environments became more about blocking and spatial context rather than fine detail. We worked closely with Production Designer Grant Major to get approval on these extended environments, making sure they aligned with the overall visual direction. We also used creatures and crowd stand-ins provided by the visual effects team. These gave a great sense of scale and placement during early planning stages and allowed other departments to better understand how these elements would be integrated into the scenes.”

Cast members Sebastian Hansen, Danielle Brooks and Emma Myers stand in front of the Earth Portal Plateau environment.

Doing a virtual scale study of the Mountainside.

Practical requirements like camera moves, stunt choreography and crane setups had an impact on the creation of virtual environments. “Sometimes we would adjust layouts slightly to open up areas for tracking shots or rework spaces to accommodate key action beats, all while keeping the environment feeling cohesive and true to the Minecraft world,” Bell states. “Simulcam bridged the physical and virtual worlds on set, overlaying Unreal Engine environments onto live-action scenes in real-time, giving the director, DP and other department heads a fully-realized preview of shots and enabling precise, informed decisions during production. It also recorded critical production data like camera movement paths, which was handed over to the post-production team to give them the exact tracks they needed, streamlining the visual effects pipeline.”
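Bell notes that Simulcam recorded camera movement paths for the post-production handoff. As a toy illustration of what such a handoff could look like, the snippet below serializes per-frame camera transforms to JSON; the format, field names and file path are invented for the example and are not Disguise’s actual output.

import json

def write_camera_track(samples, path, fps=24):
    """Write per-frame camera transforms for a VFX vendor handoff.

    samples: list of (frame, (x, y, z), (pitch, yaw, roll)) tuples.
    """
    track = {
        "fps": fps,
        "frames": [
            {"frame": f, "position": list(pos), "rotation": list(rot)}
            for f, pos, rot in samples
        ],
    }
    with open(path, "w") as fh:
        json.dump(track, fh, indent=2)

# Example: a one-second dolly move sampled once per frame.
demo = [(f, (0.0, -50.0 + f, 180.0), (0.0, 90.0, 0.0)) for f in range(1, 25)]
write_camera_track(demo, "/tmp/shot_042_camera.json")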
Piglins cause mayhem during the Wingsuit Chase.

Virtual versions of the exterior and interior of the Safe House located in the Enchanted Woods.

“One of the biggest challenges for me was managing constant iteration while keeping our environments clean, organized and easy to update,” Finlayson notes. “Because the virtual sets were reviewed regularly by the director and other heads of departments, feedback was often implemented live in the room. This meant the environments had to be flexible. But overall, this was an amazing project to work on, and I am so grateful for the incredible VAD team I was a part of – Heide Nichols [VAD Supervisor], Pat Younis, Jake Tuck [Unreal Artist] and Laura. Everyone on this team worked so collaboratively, seamlessly and in such a supportive way that I never felt like I was out of my depth.”

There was another challenge that has more to do with familiarity. “Having a VAD on a film is still a relatively new process in production,” Bell states. “There were moments where other departments were still learning what we did and how to best work with us. That said, the response was overwhelmingly positive. I remember being on set at the Simulcam station and seeing how excited people were to look at the virtual environments as they walked by, often stopping for a chat and a virtual tour. Instead of seeing just a huge blue curtain, they were stoked to see something Minecraft and could get a better sense of what they were actually shooting.”
• Ah, California! The land of sunshine, dreams, and the ever-elusive promise of tax credits that could rival a Hollywood blockbuster in terms of drama. Rumor has it that the state is considering raising its audiovisual production tax credit to a whopping 35%. Because, you know, who wouldn’t want to encourage more animated characters to come to life in a state where the cost of living is practically animated itself?

Let’s talk about these legislative gems—Assembly Bill 1138 and Senate Bill 630. Apparently, they’re here to save the day, expanding the scope of existing tax incentives like some overzealous superhero. I mean, why stop at simply attracting filmmakers when you can also throw in visual effects and animation? It’s like giving a kid a whole candy store instead of a single lollipop. Who can say no to that?

    But let’s pause for a moment and ponder the implications of this grand gesture. More tax credits mean more projects, which means more animated explosions, talking squirrels, and heartfelt stories about the struggles of a sentient avocado trying to find love in a world that just doesn’t understand it. Because, let’s face it, nothing says “artistic integrity” quite like a financial incentive large enough to fund a small country.

    And what do we have to thank for this potential windfall? Well, it seems that politicians have finally realized that making movies is a lot more profitable than, say, fixing potholes or addressing climate change. Who knew? Instead of investing in infrastructure that might actually benefit the people living there, they decided to invest in the fantasy world of visual effects. Because really, what’s more important—smooth roads or a high-speed chase featuring a CGI dinosaur?

    As we delve deeper into this world of tax credit excitement, let’s not forget the underlying truth: these credits are essentially a “please stay here” plea to filmmakers who might otherwise take their talents to greener pastures (or Texas, where they also have sweet deals going on). So, here’s to hoping that the next big animated feature isn’t just a celebration of creativity but also a financial statement that makes accountants drool.

    So get ready, folks! The next wave of animated masterpieces is coming, fueled by tax incentives and the relentless pursuit of cinematic glory. Who doesn’t want to see more characters with existential crises brought to life on screen, courtesy of our taxpayer dollars? Bravo, California! You’ve truly outdone yourself. Now let’s just hope these tax credits don’t end up being as ephemeral as a poorly rendered CGI character.

    #CaliforniaTaxCredits #Animation #VFX #Hollywood #TaxIncentives
Soon 35% tax credits in California? Impact ahead for animation and VFX
California may increase its tax credits to encourage audiovisual production, a change that would also affect visual effects and animation. Two legislative proposals (Assembly Bill 1138 & Senate Bill 630…)
  • Asus ROG Xbox Ally, ROG Xbox Ally X to Start Pre-Orders in August, Launch in October – Rumour

    A new report indicates that the ROG Xbox Ally will be priced at around €599, while the more powerful ROG Xbox Ally X will cost €899.

Posted By Joelle Daniels | On June 16, 2025

While Microsoft and Asus have unveiled the ROG Xbox Ally and ROG Xbox Ally X handheld gaming systems, the companies have yet to confirm the prices or release dates for the two systems. While the announcement mentioned that they will be launched later this year, a new report, courtesy of leaker Extas1s, indicates that pre-orders for both devices will kick off in August, with the launch then happening in October.

As noted by Extas1s, the lower-powered ROG Xbox Ally is expected to be priced around €599. The leaker claims to have corroborated the pricing details for the handheld with two different Europe-based retailers. The more powerful ROG Xbox Ally X, on the other hand, is expected to be priced at €899. This would put its pricing in line with Asus’s own ROG Ally X.

Previously, Asus senior manager of marketing content for gaming, Whitson Gordon, had revealed that pricing and power use were the two biggest reasons why both the ROG Xbox Ally and the ROG Xbox Ally X didn’t feature OLED displays. Rather, both systems will come equipped with 7-inch 1080p 120 Hz LCD displays with variable refresh rate capabilities.

“We did some R&D and prototyping with OLED, but it’s still not where we want it to be when you factor VRR into the mix and we aren’t willing to give up VRR,” said Gordon. “I’ll draw that line in the sand right now. I am of the opinion that if a display doesn’t have variable refresh rate, it’s not a gaming display in the year 2025 as far as I’m concerned, right? That’s a must-have feature, and OLED with VRR right now draws significantly more power than the LCD that we’re currently using on the Ally and it costs more.”

Explaining further that the decision ultimately also came down to keeping the pricing for both systems at reasonable levels, since buyers often tend to get handheld gaming systems as their secondary machines, Gordon noted that both handhelds would have much higher price tags if OLED displays were used.

“That’s all I’ll say about price,” said Gordon. “You have to align your expectations with the market and what we’re doing here. Adding 32GB, OLED, Z2 Extreme, and all of those extra bells and whistles would cost a lot more than the price bracket you guys are used to on the Ally, and the vast majority of users are not willing to pay that kind of price.”

Shortly after its announcement, Microsoft and Asus had released a video where the two companies spoke about the various features of the ROG Xbox Ally and ROG Xbox Ally X. In the video, we also get to see an early hardware prototype of the handheld gaming system built inside a cardboard box.

The ROG Xbox Ally runs on an AMD Ryzen Z2A chip, and has 16 GB of LPDDR5X-6400 RAM and 512 GB of storage. The ROG Xbox Ally X, on the other hand, runs on an AMD Ryzen Z2 Extreme chip, and has 24 GB of LPDDR5X-8000 RAM and 1 TB of storage. Both systems run on Windows.

  • The Word is Out: Danish Ministry Drops Microsoft, Goes Open Source

Key Takeaways

Denmark’s Ministry of Digitalization is leaving the Microsoft ecosystem for Linux and other open-source software, with half of its staff switching to Linux and LibreOffice by summer and the rest by fall.
The move follows similar decisions by Copenhagen and Aarhus, and is driven by costs, politics, and digital sovereignty.
Other EU countries are making similar moves to reduce their dependence on US tech companies.

    Denmark’s Ministry of Digitalization has recently announced that it will leave the Microsoft ecosystem in favor of Linux and other open-source software.
    Minister Caroline Stage Olsen revealed this in an interview with Politiken, the country’s leading newspaper. According to Olsen, the Ministry plans to switch half of its employees to Linux and LibreOffice by summer, and the rest by fall.
    The announcement comes after Denmark’s largest cities – Copenhagen and Aarhus – made similar moves earlier this month.
    Why the Danish Ministry of Digitalization Switched to Open-Source Software
    The three main reasons Denmark is moving away from Microsoft are costs, politics, and security.
In the case of Aarhus, the city slashed its annual costs from 800K kroner to just 225K kroner by replacing Microsoft with a German service provider.
Cost is a pain point for Copenhagen as well, which saw its Microsoft spending balloon from 313M kroner in 2018 to 538M kroner in 2023.
The switch is also part of a broader move to increase Denmark’s digital sovereignty. In her LinkedIn post, Olsen explained that the strategy is not about isolation or digital nationalism, and that Denmark should not turn its back completely on global tech companies like Microsoft.

Instead, it’s about avoiding becoming so dependent on these companies that Denmark can no longer act freely.
Then there’s politics. Since his reelection earlier this year, US President Donald Trump has repeatedly threatened to take over Greenland, an autonomous territory of Denmark.
In May, Danish Foreign Minister Lars Løkke Rasmussen summoned the US ambassador over news that US spy agencies had been told to focus on the territory.
If the relationship between the two countries continues to erode, Trump could order Microsoft and other US tech companies to cut Denmark off from their services. After all, Microsoft and Facebook’s parent company Meta have close ties to the US president, each having contributed $1 million to his inauguration in January.
    Denmark Isn’t Alone: Other EU Countries Are Making Similar Moves
Denmark is only one of a growing number of European Union countries taking measures to become more digitally independent.
    Germany’s Federal Digital Minister Karsten Wildberger emphasized the need to be more independent of global tech companies during the re:publica internet conference in May. He added that IT companies in the EU have the opportunity to create tech that is based on the region’s values.

    Meanwhile, Bert Hubert, a technical advisor to the Dutch Electoral Council, wrote in February that ‘it is no longer safe to move our governments and societies to US clouds.’ He said that America is no longer a ‘reliable partner,’ making it risky to have the data of European governments and businesses at the mercy of US-based cloud providers.
Earlier this month, the chief prosecutor of the International Criminal Court, Karim Khan, was cut off from his Microsoft-based email account, sparking uproar across the region.
    Speculation quickly arose that the incident was linked to sanctions previously imposed on the ICC by the Trump administration, an assertion Microsoft has denied.
    Weaning the EU Away from US Tech is Possible, But Challenges Lie Ahead
Change like this doesn’t happen overnight. Just finding, let alone developing, reliable alternatives to tools that have been part of daily workflows for decades is a massive undertaking.
    It will also take time for users to adapt to these new tools, especially when transitioning to an entirely new ecosystem. In Aarhus, for example, municipal staff initially viewed the shift to open source as a step down from the familiarity and functionality of Microsoft products.
Even so, these are likely only temporary hurdles. Momentum is building, with growing calls for digital independence from leaders like Ministers Olsen and Wildberger.
Initiatives such as the Digital Europe Programme, which seeks to reduce reliance on foreign systems and solutions, further accelerate this push. As a result, the EU’s transition could arrive sooner rather than later.

    #word #out #danish #ministry #drops
    The Word is Out: Danish Ministry Drops Microsoft, Goes Open Source
    Denmark’s Ministry of Digitalization has recently announced that it will leave the Microsoft ecosystem in favor of Linux and other open-source software. Minister Caroline Stage Olsen revealed this in an interview with Politiken, the country’s leading newspaper. According to Olsen, the Ministry plans to switch half of its employees to Linux and LibreOffice by summer, and the rest by fall. The announcement comes after Denmark’s largest cities – Copenhagen and Aarhus – made similar moves earlier this month.
    Why the Danish Ministry of Digitalization Switched to Open-Source Software
    The three main reasons Denmark is moving away from Microsoft are costs, politics, and security. In the case of Aarhus, the city slashed its annual costs from 800K kroner to just 225K by replacing Microsoft with a German service provider. Cost is also a pain point for Copenhagen, which saw its spending on Microsoft balloon from 313M kroner in 2018 to 538M kroner in 2023.
    The switch is also part of a broader move to increase digital sovereignty. In her LinkedIn post, Olsen further explained that the strategy is not about isolation or digital nationalism, adding that Denmark should not turn its back completely on global tech companies like Microsoft. Instead, it’s about avoiding becoming so dependent on these companies that it can no longer act freely.
    Then there’s politics. Since his reelection earlier this year, US President Donald Trump has repeatedly threatened to take over Greenland, an autonomous territory of Denmark. In May, Danish Foreign Minister Lars Løkke Rasmussen summoned the US ambassador over reports that US spy agencies had been told to focus on the territory. If the relationship between the two countries continues to erode, Trump could order Microsoft and other US tech companies to cut Denmark off from their services. After all, Microsoft and Facebook’s parent company Meta have close ties to the US president, each having contributed $1M to his inauguration in January.
    Denmark Isn’t Alone: Other EU Countries Are Making Similar Moves
    Denmark is only one of a growing number of European Union (EU) countries taking measures to become more digitally independent. Germany’s Federal Digital Minister Karsten Wildberger emphasized the need to be more independent of global tech companies during the re:publica internet conference in May, adding that IT companies in the EU have the opportunity to create tech based on the region’s values. Meanwhile, Bert Hubert, a technical advisor to the Dutch Electoral Council, wrote in February that ‘it is no longer safe to move our governments and societies to US clouds.’ He said that America is no longer a ‘reliable partner,’ making it risky to leave the data of European governments and businesses at the mercy of US-based cloud providers. Earlier this month, the chief prosecutor of the International Criminal Court (ICC), Karim Khan, was cut off from his Microsoft-based email account, sparking uproar across the region. Speculation quickly arose that the incident was linked to sanctions previously imposed on the ICC by the Trump administration, an assertion Microsoft has denied.
    Weaning the EU Away from US Tech Is Possible, But Challenges Lie Ahead
    Change like this doesn’t happen overnight. Just finding, let alone developing, reliable alternatives to tools that have been part of daily workflows for decades is a massive undertaking. It will also take time for users to adapt to the new tools, especially when transitioning to an entirely new ecosystem. In Aarhus, for example, municipal staff initially viewed the shift to open source as a step down from the familiarity and functionality of Microsoft products.
    Still, these hurdles are temporary. Momentum is building, with growing calls for digital independence from leaders like Ministers Olsen and Wildberger, and initiatives such as the Digital Europe Programme, which seeks to reduce reliance on foreign systems and solutions, are accelerating the push. As a result, the EU’s transition could arrive sooner rather than later.
  • Inside the Palazzo Durini Caproni di Taliedo, Where the Past and Present Clash Harmoniously

    The 17th-century frescoes and antique mirrors should immediately tip visitors off: This showroom has something it needs to say. Palazzo Durini Caproni di Taliedo is a historic building in Milan, designed and built in the mid-1600s by Baroque architect Francesco Maria Richini. Among many other monumental works and churches, he also designed Milan’s Palazzo di Brera, which currently houses the Pinacoteca di Brera museum. The Palazzo Durini Caproni di Taliedo was commissioned by the heir to the Durinis, a wealthy merchant family.
    Today the palazzo is furniture showroom as palimpsest. Since 2021, Edra has exhibited collaborations with supremely contemporary designers, including the Campana brothers, Jacopo Foggini, and Francesco Binfaré, amid the restored Baroque grandeur.
    Palazzo Durini in the 1920s, when the famed Italian aircraft designer and aeronautical engineer Giovanni Battista Caproni used it as an office. Courtesy Edra.
    Walking through the rooms, one might imagine the visitors who could have lounged on an Edra “On the Rocks” sofa at one time or another in the history of this place: Giovanni Battista Caproni, the Italian count and aeronautical engineer who lived and worked in the building for more than 40 years? Soccer sensation Ronaldo, who caused a near riot when he visited the palazzo during its Inter Football Club era, when the sports association’s offices were located here? Or could it be iconic designer Gio Ponti, who is said to have drawn that gilded Art Deco bathroom with green terrazzo floors in the back? One palazzo, so many lives.
    Top Image: Palazzo Durini now, in its Edra showroom era. The frescoes may be 17th-century, but the furniture is the 2021 A’mare collection by Jacopo Foggini.
    This story originally appeared in the Summer 2025 issue of Elle Decor.
  • F5: Leta Sobierajski Talks Giant Pandas, Sculptural Clothing + More

    When Leta Sobierajski enrolled in college, she already knew what she was meant to do, and she didn’t settle for anything less. “When I went to school for graphic design, I really didn’t have a backup plan – it was this, or nothing,” she says. “My work is a constantly evolving practice, and from the beginning, I have always convinced myself that if I put in the time and experimentation, I would grow and evolve.”
    After graduation, Sobierajski took on a range of projects, which included animation, print, and branding elements. She collaborated with corporate clients, but realized that she wouldn’t feel comfortable following anyone else’s rules in a 9-to-5 environment.
    Leta Sobierajski (standing) and Wade Jeffree (on ladder) \\\ Photo: Matt Dutile
    Sobierajski eventually decided to team up with fellow artist and kindred spirit Wade Jeffree. In 2016 they launched their Brooklyn-based studio, Wade and Leta. The duo, who share a taste for quirky aesthetics, produces sculpture, installations, or anything else they can dream up. Never static in thinking or method, they are constantly searching for another medium to try that will complement their shared vision of the moment.
    The pair is currently interested in permanency, and they want to utilize more metal, a strong material that will stand the test of time. Small architectural pieces are also on tap, and on a grander scale, they’d like to focus on a park or communal area that everyone can enjoy.
    With so many ideas swirling around, Sobierajski will record a concept in at least three different ways so that she’s sure to unearth it at a later date. “In some ways, I like to think I’m impeccably organized, as I have countless spreadsheets tracking our work, our lives, and our well-being,” she explains. “The reality is that I am great at over-complicating situations with my intensified list-making and note-taking. The only thing to do is to trust the process.”
    Today, Leta Sobierajski joins us for Friday Five!
    Photo: Melitta Baumeister and Michał Plata
    1. Melitta Baumeister and Michał Plata
    The work of Melitta Baumeister and Michał Plata has been a constant inspiration to me for their innovative, artful, and architectural silhouettes. By a practice of draping and arduous pattern-making, the garments that they develop season after season feel like they could be designed for existence in another universe. I’m a person who likes to dress up for anything when I’m not in the studio, and every time I opt to wear one of their looks, I feel like I can take on the world. The best part about their pieces is that they’re extremely functional, so whether I need to hop on a bicycle or show up at an opening, I’m still able to make a statement – these garments even have the ability to strike up conversations on their own.
    Photo: Wade and Leta
    2. Pandas!
    I was recently in Chengdu to launch a new project, and we took half the day to visit the Chengdu Research Base of Giant Pandas – I am now a panda convert. Yes, they’re docile and cute, but their lifestyles are utterly chill and deeply enviable for us adults with responsibilities. Giant pandas primarily eat bamboo and can consume 20-40 kilograms per day. When they’re not doing that, they’re sleeping. When we visited, many could be seen reclining on their backs, feasting on some of the finest bamboo they could select within arm’s reach. While not necessarily playful in appearance, they do seem quite cheeky in their agendas and will do as little as they can to make the most of their meals. It felt like I was watching a mirrored image of myself on a Sunday afternoon while trying to make the most of my last hours of the weekend.
    Photo: Courtesy of Aoiro
    3. Aoiro
    I’m not really a candle person (I forget to light it, and then I forget it’s lit, and then I panic when it’s been lit for too long), but I love the luxurious subtlety of a fragrant space. It’s an intangible feeling that really can only be experienced in the present. Some of the best people to create these fragrances, in my opinion, are Shizuko and Manuel, the masterminds behind Aoiro, a Japanese and Austrian duo who have developed a keen sense for embodying the fragrances of some of the most intriguing and captivating olfactory atmospheres – earthy forest floors with crackling pine needles, blue cypress tickling the moon in an indigo sky, and rainfall on a spirited Japanese island. Despite living in a big city, Aoiro’s olfactory design is capable of transporting me to the deepest forests of misty Yakushima island.
    Photo: Wade and Leta
    4. Takuro Kuwata
    A few months ago, I saw the work of Japanese ceramicist Takuro Kuwata at an exhibition at Salon94 and have been having trouble getting it out of my head. Kuwata’s work exemplifies someone who has worked with a medium so much as to completely use the medium as a medium – if that makes sense. His ability to manipulate clay and glaze to create gravity-defying effects within the kiln is exceptionally mysterious to me and feels like it could only be accomplished with years and years of experimentation with the material. I’m equally impressed seeing how he’s grown his work in scale, juxtaposing it with familiar iconography like the fuzzy peach, but sculpting it from materials like bronze.
    Photo: Wade and Leta
    5. The Site of Reversible Destiny, a park built by artists Arakawa and Gins, in Yoro, Japan
    The park is a testament to their career as writers and architects, and to their idea of reversible destiny, which in its most extreme form eliminates death. For all who are willing to listen, Arakawa and Gins’ Reversible Destiny mentality aims to make our lives a little more youthful by encouraging us to reevaluate our relationship with architecture and our surroundings. The intention of “reversible destiny” is not to prolong or postpone death, nor to grow older alongside it, but to refuse to acknowledge it and to surpass it entirely. Wade (my partner) and I have spent the last ten years traveling to as many of their remaining sites as possible to further understand this notion of creating spaces to extend our lives, and to question how conventional living spaces can become detrimental to our longevity.
     
    Works by Wade and Leta:
    Photo: Wade and Leta and Matt Alexander
    Now You See Me is a large-scale installation in the heart of Shoreditch, London, that explores the relationship between positive and negative space through bold color, geometry, and light. Simple, familiar shapes are embedded within monolithic forms, creating a layered visual experience that shifts throughout the day. As sunlight passes through the structures, shadows and silhouettes stretch and connect, forming dynamic compositions on the surrounding concrete.
    Photo: Wade and Leta and John Wylie
    Paint Your Own Path is a series of five towering sculptures, ranging from 10 to 15 feet tall, that invites viewers to explore balance, tension, and perspective through bold color and form. Inspired by the delicate, often precarious act of stacking objects, the sculptures appear as if they might topple – yet each one holds steady, challenging perceptions of stability. Created in partnership with the Corolla Cross, the installation transforms its environment into a pop-colored landscape.
    Photo: Millenia Walk and Outer Edit, Eurthe Studio
    Monument to Movement is a 14-meter-tall kinetic sculpture that celebrates the spirit of the holiday season through rhythm, motion, and color. Rising skyward in layered compositions, the work symbolizes collective joy, renewal, and the shared energy of celebrations that span cultures and traditions. Powered by motors and constructed from metal beams and cardboard forms, the sculpture continuously shifts, inviting viewers to reflect on the passage of time and the cycles that connect us all.
    Photo: Wade and Leta and Erika Hara, Piotr Maslanka, and Jeremy Renault
    Falling Into Place is a vibrant rooftop installation at Ginza Six that explores themes of alignment, adaptability, and perspective. Six colorful structures – each with a void like a missing puzzle piece – serve as spaces for reflection, inviting visitors to consider their place within a greater whole. Rather than focusing on absence, the design transforms emptiness into opportunity, encouraging people to embrace spontaneity and the unfolding nature of life. Playful yet contemplative, the work emphasizes that only through connection and participation can the full picture come into view.
    Photo: Wade and Leta and Erika Hara, Piotr Maslanka, and Jeremy Renault
    Photo: Wade and Leta
    Stop, Listen, Look is a 7-meter-tall interactive artwork atop IFS Chengdu that captures the vibrant rhythm of the city through movement, sound, and form. Blending motorized and wind-powered elements with seesaws and sound modulation, it invites people of all ages to engage, play, and reflect. Inspired by Chengdu’s balance of tradition and modernity, the piece incorporates circular motifs from local symbolism alongside bold, geometric forms to create a dialogue between past and present. With light, motion, and community at its core, the work invites visitors to connect with the city – and each other – through shared interaction.

    The Cloud is a permanent sculptural kiosk in Burlington, Vermont’s historic City Hall Park, created in collaboration with Brooklyn-based Studio RENZ+OEI. Designed to reinterpret the ephemeral nature of clouds through architecture, it blends art, air, and imagination into a light, fluid structure that defies traditional rigidity. Originally born from a creative exchange between longtime friends and collaborators, the design challenges expectations of permanence by embodying movement and openness. Now home to a local food vendor, The Cloud brings a playful, uplifting presence to the park, inviting reflection and interaction rain or shine.