• European Robot Makers Adopt NVIDIA Isaac, Omniverse and Halos to Develop Safe, Physical AI-Driven Robot Fleets

    In the face of growing labor shortages and the need for sustainability, European manufacturers are racing to reinvent their processes to become software-defined and AI-driven.
    To achieve this, robot developers and industrial digitalization solution providers are working with NVIDIA to build safe, AI-driven robots and industrial technologies to drive modern, sustainable manufacturing.
    At NVIDIA GTC Paris at VivaTech, Europe’s leading robotics companies including Agile Robots, Extend Robotics, Humanoid, idealworks, Neura Robotics, SICK, Universal Robots, Vorwerk and Wandelbots are showcasing their latest AI-driven robots and automation breakthroughs, all accelerated by NVIDIA technologies. In addition, NVIDIA is releasing new models and tools to support the entire robotics ecosystem.
    NVIDIA Releases Tools for Accelerating Robot Development and Safety
    NVIDIA Isaac GR00T N1.5, an open foundation model for humanoid robot reasoning and skills, is now available for download on Hugging Face. This update enhances the model’s adaptability and ability to follow instructions, significantly improving its performance in material handling and manufacturing tasks. The NVIDIA Isaac Sim 5.0 and Isaac Lab 2.2 open-source robotics simulation and learning frameworks, optimized for NVIDIA RTX PRO 6000 workstations, are available on GitHub for developer preview.
    In addition, NVIDIA announced that NVIDIA Halos — a full-stack, comprehensive safety system that unifies hardware architecture, AI models, software, tools and services — now expands to robotics, promoting safety across the entire development lifecycle of AI-driven robots.
    The NVIDIA Halos AI Systems Inspection Lab has earned accreditation from the ANSI National Accreditation Board (ANAB) to perform inspections across functional safety for robotics, in addition to automotive vehicles.
    “NVIDIA’s latest evaluation with ANAB verifies the demonstration of competence and compliance with internationally recognized standards, helping ensure that developers of autonomous machines — from automotive to robotics — can meet the highest benchmarks for functional safety,” said R. Douglas Leonard Jr., executive director of ANAB.
    ArcBest, Advantech, Bluewhite, Boston Dynamics, FORT, Inxpect, KION, NexCobot (a NEXCOM company) and Synapticon are among the first robotics companies to join the Halos Inspection Lab, ensuring their products meet NVIDIA safety and cybersecurity requirements.
    To support robotics leaders in strengthening safety across the entire development lifecycle of AI-driven robots, Halos will now provide:

    Safety extension packages for the NVIDIA IGX platform, enabling manufacturers to easily program safety functions into their robots, supported by TÜV Rheinland’s inspection of NVIDIA IGX.
    A robotic safety platform, which includes IGX and NVIDIA Holoscan Sensor Bridge for a unified approach to designing sensor-to-compute architecture with built-in AI safety.
    An outside-in safety AI inspector — an AI-powered agent for monitoring robot operations, helping improve worker safety.
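    NVIDIA has not published implementation details of the outside-in inspector, but the basic idea of an external agent monitoring robot operations for worker safety can be sketched as a simple proximity monitor. All names and thresholds below are illustrative, not part of the Halos product:

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class Pose:
    x: float
    y: float

# Hypothetical threshold: flag any robot closer than this to a worker.
SAFETY_RADIUS_M = 1.5

def violations(robots: dict[str, Pose], workers: list[Pose],
               radius: float = SAFETY_RADIUS_M) -> list[str]:
    """Return ids of robots that are too close to any tracked worker."""
    flagged = []
    for robot_id, pose in robots.items():
        for w in workers:
            if hypot(pose.x - w.x, pose.y - w.y) < radius:
                flagged.append(robot_id)
                break
    return flagged

robots = {"amr-1": Pose(0.0, 0.0), "amr-2": Pose(10.0, 10.0)}
workers = [Pose(1.0, 0.5)]
print(violations(robots, workers))  # prints ['amr-1']
```

    A production inspector would consume perception streams rather than ground-truth poses, but the outside-in pattern is the same: the monitor runs on separate compute and watches the robots from the outside.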

    Europe’s Robotics Ecosystem Builds on NVIDIA’s Three Computers
    Europe’s leading robotics developers and solution providers are integrating the NVIDIA Isaac robotics platform to train, simulate and deploy robots across different embodiments.
    Agile Robots is post-training the GR00T N1 model in Isaac Lab to train its dual-arm manipulator robots, which run on NVIDIA Jetson hardware, to execute a variety of tasks in industrial environments.
    Meanwhile, idealworks has adopted the Mega NVIDIA Omniverse Blueprint for robotic fleet simulation to extend the blueprint’s capabilities to humanoids. Building on the VDA 5050 framework, idealworks contributes to the development of guidance that supports tasks uniquely enabled by humanoid robots, such as picking, moving and placing objects.
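    VDA 5050 is an open standard for communication between fleet masters and AGVs over MQTT; an order is a JSON document describing a route of nodes connected by edges, each of which can carry actions (the kind of guidance idealworks is helping extend to humanoid tasks such as picking). The abridged message below follows the published schema's field names, but the values are illustrative and the field set is not exhaustive:

```python
import json

# Abridged VDA 5050 "order" message: a route of nodes joined by edges.
# Per the spec, node sequenceIds are even and edge sequenceIds odd.
order = {
    "headerId": 1,
    "timestamp": "2025-06-11T10:00:00.00Z",
    "version": "2.0.0",
    "manufacturer": "ExampleVendor",   # illustrative value
    "serialNumber": "AGV-042",         # illustrative value
    "orderId": "order-001",
    "orderUpdateId": 0,
    "nodes": [
        {"nodeId": "n1", "sequenceId": 0, "released": True, "actions": []},
        {"nodeId": "n2", "sequenceId": 2, "released": True,
         "actions": [{"actionType": "pick", "actionId": "a1",
                      "blockingType": "HARD"}]},
    ],
    "edges": [
        {"edgeId": "e1", "sequenceId": 1, "released": True,
         "startNodeId": "n1", "endNodeId": "n2", "actions": []},
    ],
}

payload = json.dumps(order)
# In production this payload would be published to an MQTT topic such as
# <interfaceName>/<version>/<manufacturer>/<serialNumber>/order.
print(len(json.loads(payload)["nodes"]))  # prints 2
```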
    Neura Robotics is integrating NVIDIA Isaac to further enhance its robot development workflows. The company is using GR00T-Mimic to post-train the Isaac GR00T N1 robot foundation model for its service robot MiPA. Neura is also collaborating with SAP and NVIDIA to integrate SAP’s Joule agents with its robots, using the Mega NVIDIA Omniverse Blueprint to simulate and refine robot behavior in complex, realistic operational scenarios before deployment.
    Vorwerk is using NVIDIA technologies to power its AI-driven collaborative robots. The company is post-training GR00T N1 models in Isaac Lab with its custom synthetic data pipeline, which is built on Isaac GR00T-Mimic and powered by the NVIDIA Omniverse platform. The enhanced models are then deployed on NVIDIA Jetson AGX, Jetson Orin or Jetson Thor modules for advanced, real-time home robotics.
    Humanoid is using NVIDIA’s full robotics stack, including Isaac Sim and Isaac Lab, to cut its prototyping time down by six weeks. The company is training its vision-language-action models on NVIDIA DGX B200 systems to boost the cognitive abilities of its robots, allowing them to operate autonomously in complex environments using Jetson Thor onboard computing.
    Universal Robots is introducing UR15, its fastest collaborative robot yet, to the European market. Using UR’s AI Accelerator — developed on NVIDIA Isaac’s CUDA-accelerated libraries and AI models, as well as NVIDIA Jetson AGX Orin — manufacturers can build AI applications to embed intelligence into the company’s new cobots.
    Wandelbots is showcasing its NOVA Operating System, now integrated with Omniverse, to simulate, validate and optimize robotic behaviors virtually before deploying them to physical robots. Wandelbots also announced a collaboration with EY and EDAG to offer manufacturers a scalable automation platform on Omniverse that speeds up the transition from proof of concept to full-scale deployment.
    Extend Robotics is using the Isaac GR00T platform to enable customers to control and train robots for industrial tasks like visual inspection and handling radioactive materials. The company’s Advanced Mechanics Assistance System lets users collect demonstration data and generate diverse synthetic datasets with NVIDIA GR00T-Mimic and GR00T-Gen to train the GR00T N1 foundation model.
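    GR00T-Mimic's actual pipeline is considerably more sophisticated, but the core idea of multiplying a handful of human demonstrations into a large synthetic dataset can be illustrated with simple trajectory perturbation. This is purely a toy sketch, not the NVIDIA tool's API:

```python
import random

def augment(demos, copies_per_demo=100, noise=0.01, seed=0):
    """Toy trajectory augmentation: jitter each waypoint of each human
    demonstration to produce many synthetic variants."""
    rng = random.Random(seed)
    synthetic = []
    for traj in demos:
        for _ in range(copies_per_demo):
            synthetic.append([
                (x + rng.gauss(0, noise), y + rng.gauss(0, noise))
                for x, y in traj
            ])
    return synthetic

# Three "human demonstrations" become 300 synthetic trajectories.
demos = [[(0.0, 0.0), (0.5, 0.2), (1.0, 0.4)] for _ in range(3)]
data = augment(demos)
print(len(data))  # prints 300
```

    Real pipelines additionally vary object placement, lighting and camera viewpoints in simulation, which is where Omniverse-based rendering comes in.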
    SICK is enhancing its autonomous perception solutions by integrating new certified sensor models, including 2D and 3D lidars, safety scanners and cameras, into NVIDIA Isaac Sim. This enables engineers to virtually design, test and validate machines using SICK’s sensing models within Omniverse, supporting processes spanning product development to large-scale robotic fleet management.
    Toyota Material Handling Europe is working with SoftServe to simulate its autonomous mobile robots working alongside human workers, using the Mega NVIDIA Omniverse Blueprint. Toyota Material Handling Europe is testing and simulating a multitude of traffic scenarios — allowing the company to refine its AI algorithms before real-world deployment.
    NVIDIA’s partner ecosystem is enabling European industries to tap into intelligent, AI-powered robotics. By harnessing advanced simulation, digital twins and generative AI, manufacturers are rapidly developing and deploying safe, adaptable robot fleets that address labor shortages, boost sustainability and drive operational efficiency.
    Watch the NVIDIA GTC Paris keynote from NVIDIA founder and CEO Jensen Huang at VivaTech, and explore GTC Paris sessions.
    See notice regarding software product information.
    BLOGS.NVIDIA.COM
  • Hexagon Taps NVIDIA Robotics and AI Software to Build and Deploy AEON, a New Humanoid

    As a global labor shortage leaves 50 million positions unfilled across industries like manufacturing and logistics, Hexagon — a global leader in measurement technologies — is developing humanoid robots that can lend a helping hand.
    Industrial sectors depend on skilled workers to perform a variety of error-prone tasks, including operating high-precision scanners for reality capture — the process of capturing digital data to replicate the real world in simulation.
    At the Hexagon LIVE Global conference, Hexagon’s robotics division today unveiled AEON — a new humanoid robot built in collaboration with NVIDIA that’s engineered to perform a wide range of industrial applications, from manipulation and asset inspection to reality capture and operator support. Hexagon plans to deploy AEON across automotive, transportation, aerospace, manufacturing, warehousing and logistics.
    Future use cases for AEON include:

    Reality capture, which involves automatic planning and then scanning of assets, industrial spaces and environments to generate 3D models. The captured data is then used for advanced visualization and collaboration in the Hexagon Digital Reality (HxDR) platform powering Hexagon Reality Cloud Studio (RCS).
    Manipulation tasks, such as sorting and moving parts in various industrial and manufacturing settings.
    Part inspection, which includes checking parts for defects or ensuring adherence to specifications.
    Industrial operations, including highly dexterous technical tasks like machinery operations, teleoperation and scanning parts using high-end scanners.

    “The age of general-purpose robotics has arrived, due to technological advances in simulation and physical AI,” said Deepu Talla, vice president of robotics and edge AI at NVIDIA. “Hexagon’s new AEON humanoid embodies the integration of NVIDIA’s three-computer robotics platform and is making a significant leap forward in addressing industry-critical challenges.”

    Using NVIDIA’s Three Computers to Develop AEON 
    To build AEON, Hexagon used NVIDIA’s three computers for developing and deploying physical AI systems. They include AI supercomputers to train and fine-tune powerful foundation models; the NVIDIA Omniverse platform, running on NVIDIA OVX servers, for testing and optimizing these models in simulation environments using real and physically based synthetic data; and NVIDIA IGX Thor robotic computers to run the models.
    Hexagon is exploring using NVIDIA accelerated computing to post-train the NVIDIA Isaac GR00T N1.5 open foundation model to improve robot reasoning and policies, and tapping Isaac GR00T-Mimic to generate vast amounts of synthetic motion data from a few human demonstrations.
    AEON learns many of its skills through simulations powered by the NVIDIA Isaac platform. Hexagon uses NVIDIA Isaac Sim, a reference robotic simulation application built on Omniverse, to simulate complex robot actions like navigation, locomotion and manipulation. These skills are then refined using reinforcement learning in NVIDIA Isaac Lab, an open-source framework for robot learning.
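    Isaac Lab's own APIs are not reproduced here, but the pattern it implements (roll out a policy in simulation, score it, and update toward higher reward) can be sketched with a toy one-parameter search loop. The "simulator" and reward below are stand-ins, not anything from the Isaac stack:

```python
import random

def simulate(gain: float) -> float:
    """Stand-in simulator: reward peaks when the policy's single
    parameter (a control gain) is close to 0.7."""
    return -(gain - 0.7) ** 2

def train(steps=200, step_size=0.05, seed=0):
    """Toy random-search policy improvement: perturb the parameter,
    keep the change only if the simulated reward improves. Isaac Lab
    runs this pattern at vastly larger scale with RL algorithms and
    thousands of parallel simulated environments."""
    rng = random.Random(seed)
    gain, best = 0.0, simulate(0.0)
    for _ in range(steps):
        candidate = gain + rng.uniform(-step_size, step_size)
        reward = simulate(candidate)
        if reward > best:  # accept only improving perturbations
            gain, best = candidate, reward
    return gain, best

gain, reward = train()
print(gain, reward)
```

    The point of the simulation-first approach is that this inner loop is cheap and safe to run millions of times before any policy touches physical hardware.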


    This simulation-first approach enabled Hexagon to fast-track its robotic development, allowing AEON to master core locomotion skills in just 2-3 weeks — rather than 5-6 months — before real-world deployment.
    In addition, AEON taps into NVIDIA Jetson Orin onboard computers to autonomously move, navigate and perform its tasks in real time, enhancing its speed and accuracy while operating in complex and dynamic environments. Hexagon is also planning to upgrade AEON with NVIDIA IGX Thor to enable functional safety for collaborative operation.
    “Our goal with AEON was to design an intelligent, autonomous humanoid that addresses the real-world challenges industrial leaders have shared with us over the past months,” said Arnaud Robert, president of Hexagon’s robotics division. “By leveraging NVIDIA’s full-stack robotics and simulation platforms, we were able to deliver a best-in-class humanoid that combines advanced mechatronics, multimodal sensor fusion and real-time AI.”
    Data Comes to Life Through Reality Capture and Omniverse Integration 
    AEON will be piloted in factories and warehouses to scan everything from small precision parts and automotive components to large assembly lines and storage areas.

    Captured data comes to life in Reality Cloud Studio (RCS), a platform that allows users to collaborate, visualize and share reality-capture data by tapping into HxDR and NVIDIA Omniverse running in the cloud. This removes the constraint of local infrastructure.
    “Digital twins offer clear advantages, but adoption has been challenging in several industries,” said Lucas Heinzle, vice president of research and development at Hexagon’s robotics division. “AEON’s sophisticated sensor suite enables the integration of reality data capture with NVIDIA Omniverse, streamlining workflows for our customers and moving us closer to making digital twins a mainstream tool for collaboration and innovation.”
    AEON’s Next Steps
    By adopting the OpenUSD framework and developing on Omniverse, Hexagon can generate high-fidelity digital twins from scanned data — establishing a data flywheel to continuously train AEON.
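    OpenUSD scenes are composable plain-text (or binary) layers; the `pxr` Python API is the usual authoring route, but even without it the layer format is easy to illustrate. A minimal hand-built `.usda` layer for a scanned-asset digital twin might look like this (the prim names are illustrative, not Hexagon's schema):

```python
# pxr (the official USD Python API) is the normal authoring route; this
# hand-built string just shows the plain-text .usda layer format itself.
usda = """#usda 1.0
(
    defaultPrim = "Factory"
)

def Xform "Factory"
{
    def Mesh "ScannedConveyor" (
        doc = "Mesh data from a reality-capture scan would be authored here."
    )
    {
    }
}
"""

with open("factory.usda", "w") as f:
    f.write(usda)

print(usda.splitlines()[0])  # prints #usda 1.0
```

    Because layers compose, scan-derived geometry, simulation results and hand-authored annotations can live in separate files that USD resolves into one stage, which is what makes the continuous-training "data flywheel" practical.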
    This latest work with Hexagon is helping shape the future of physical AI — delivering scalable, efficient solutions to address the challenges faced by industries that depend on capturing real-world data.
    Watch the Hexagon LIVE keynote, explore presentations and read more about AEON.
    All imagery courtesy of Hexagon.
    #hexagon #taps #nvidia #robotics #software
    Hexagon Taps NVIDIA Robotics and AI Software to Build and Deploy AEON, a New Humanoid
    As a global labor shortage leaves 50 million positions unfilled across industries like manufacturing and logistics, Hexagon — a global leader in measurement technologies — is developing humanoid robots that can lend a helping hand. Industrial sectors depend on skilled workers to perform a variety of error-prone tasks, including operating high-precision scanners for reality capture — the process of capturing digital data to replicate the real world in simulation.
    At the Hexagon LIVE Global conference, Hexagon’s robotics division today unveiled AEON — a new humanoid robot built in collaboration with NVIDIA that’s engineered to perform a wide range of industrial applications, from manipulation and asset inspection to reality capture and operator support. Hexagon plans to deploy AEON across automotive, transportation, aerospace, manufacturing, warehousing and logistics.
    Future use cases for AEON include:
    Reality capture, which involves automatic planning and then scanning of assets, industrial spaces and environments to generate 3D models. The captured data is then used for advanced visualization and collaboration in the Hexagon Digital Reality (HxDR) platform powering Hexagon Reality Cloud Studio (RCS).
    Manipulation tasks, such as sorting and moving parts in various industrial and manufacturing settings.
    Part inspection, which includes checking parts for defects or ensuring adherence to specifications.
    Industrial operations, including highly dexterous technical tasks like machinery operations, teleoperation and scanning parts using high-end scanners.
    “The age of general-purpose robotics has arrived, due to technological advances in simulation and physical AI,” said Deepu Talla, vice president of robotics and edge AI at NVIDIA. “Hexagon’s new AEON humanoid embodies the integration of NVIDIA’s three-computer robotics platform and is making a significant leap forward in addressing industry-critical challenges.”
    Using NVIDIA’s Three Computers to Develop AEON
    To build AEON, Hexagon used NVIDIA’s three computers for developing and deploying physical AI systems: AI supercomputers to train and fine-tune powerful foundation models; the NVIDIA Omniverse platform, running on NVIDIA OVX servers, for testing and optimizing these models in simulation environments using real and physically based synthetic data; and NVIDIA IGX Thor robotic computers to run the models.
    Hexagon is exploring using NVIDIA accelerated computing to post-train the NVIDIA Isaac GR00T N1.5 open foundation model to improve robot reasoning and policies, and tapping Isaac GR00T-Mimic to generate vast amounts of synthetic motion data from a few human demonstrations.
    AEON learns many of its skills through simulations powered by the NVIDIA Isaac platform. Hexagon uses NVIDIA Isaac Sim, a reference robotic simulation application built on Omniverse, to simulate complex robot actions like navigation, locomotion and manipulation. These skills are then refined using reinforcement learning in NVIDIA Isaac Lab, an open-source framework for robot learning.
    This simulation-first approach enabled Hexagon to fast-track its robot development, allowing AEON to master core locomotion skills in just two to three weeks — rather than five to six months — before real-world deployment.
    In addition, AEON taps NVIDIA Jetson Orin onboard computers to autonomously move, navigate and perform its tasks in real time, enhancing its speed and accuracy while operating in complex and dynamic environments. Hexagon also plans to upgrade AEON with NVIDIA IGX Thor to enable functional safety for collaborative operation.
    “Our goal with AEON was to design an intelligent, autonomous humanoid that addresses the real-world challenges industrial leaders have shared with us over the past months,” said Arnaud Robert, president of Hexagon’s robotics division. “By leveraging NVIDIA’s full-stack robotics and simulation platforms, we were able to deliver a best-in-class humanoid that combines advanced mechatronics, multimodal sensor fusion and real-time AI.”
    Data Comes to Life Through Reality Capture and Omniverse Integration
    AEON will be piloted in factories and warehouses to scan everything from small precision parts and automotive components to large assembly lines and storage areas. Captured data comes to life in RCS, a platform that allows users to collaborate, visualize and share reality-capture data by tapping into HxDR and NVIDIA Omniverse running in the cloud. This removes the constraint of local infrastructure.
    “Digital twins offer clear advantages, but adoption has been challenging in several industries,” said Lucas Heinzle, vice president of research and development at Hexagon’s robotics division. “AEON’s sophisticated sensor suite enables the integration of reality data capture with NVIDIA Omniverse, streamlining workflows for our customers and moving us closer to making digital twins a mainstream tool for collaboration and innovation.”
    AEON’s Next Steps
    By adopting the OpenUSD framework and developing on Omniverse, Hexagon can generate high-fidelity digital twins from scanned data — establishing a data flywheel to continuously train AEON. This latest work with Hexagon is helping shape the future of physical AI — delivering scalable, efficient solutions to address the challenges faced by industries that depend on capturing real-world data.
    Watch the Hexagon LIVE keynote, explore presentations and read more about AEON. All imagery courtesy of Hexagon.
    Hexagon Taps NVIDIA Robotics and AI Software to Build and Deploy AEON, a New Humanoid
  • NVIDIA and Partners Highlight Next-Generation Robotics, Automation and AI Technologies at Automatica

    From the heart of Germany’s automotive sector to manufacturing hubs across France and Italy, Europe is embracing industrial AI and advanced AI-powered robotics to address labor shortages, boost productivity and fuel sustainable economic growth.
    Robotics companies are developing humanoid robots and collaborative systems that integrate AI into real-world manufacturing applications. Supported by a $200 billion investment initiative and coordinated efforts from the European Commission, Europe is positioning itself at the forefront of the next wave of industrial automation, powered by AI.
    This momentum is on full display at Automatica — Europe’s premier conference on advancements in robotics, machine vision and intelligent manufacturing — taking place this week in Munich, Germany.
    NVIDIA and its ecosystem of partners and customers are showcasing next-generation robots, automation and AI technologies designed to accelerate the continent’s leadership in smart manufacturing and logistics.
    NVIDIA Technologies Boost Robotics Development 
    Central to advancing robotics development is Europe’s first industrial AI cloud, announced at NVIDIA GTC Paris at VivaTech earlier this month. The Germany-based AI factory, featuring 10,000 NVIDIA GPUs, provides European manufacturers with secure, sovereign and centralized AI infrastructure for industrial workloads. It will support applications ranging from design and engineering to factory digital twins and robotics.
    To help accelerate humanoid development, NVIDIA released NVIDIA Isaac GR00T N1.5 — an open foundation model for humanoid robot reasoning and skills. This update enhances the model’s adaptability and ability to follow instructions, significantly improving its performance in material handling and manufacturing tasks.
    To help post-train GR00T N1.5, NVIDIA has also released the Isaac GR00T-Dreams blueprint — a reference workflow for generating vast amounts of synthetic trajectory data from a small number of human demonstrations — enabling robots to generalize across behaviors and adapt to new environments with minimal human demonstration data.
    In addition, early developer previews of NVIDIA Isaac Sim 5.0 and Isaac Lab 2.2 — open-source robot simulation and learning frameworks optimized for NVIDIA RTX PRO 6000 workstations — are now available on GitHub.
    Image courtesy of Wandelbots.
    Robotics Leaders Tap NVIDIA Simulation Technology to Develop and Deploy Humanoids and More 
    Robotics developers and solutions providers across the globe are integrating NVIDIA’s three computers to train, simulate and deploy robots.
    NEURA Robotics, a German robotics company and pioneer in cognitive robots, unveiled the third generation of its humanoid, 4NE1, designed to assist humans in domestic and professional environments through advanced cognitive capabilities and humanlike interaction. 4NE1 is powered by GR00T N1 and was trained in Isaac Sim and Isaac Lab before real-world deployment.
    NEURA Robotics is also presenting Neuraverse, a digital twin and interconnected ecosystem for robot training, skills and applications, fully compatible with NVIDIA Omniverse technologies.
    Delta Electronics, a global leader in power management and smart green solutions, is debuting two next-generation collaborative robots: D-Bot Mar and D-Bot 2 in 1 — both trained using Omniverse and Isaac Sim technologies and libraries. These cobots are engineered to transform intralogistics and optimize production flows.
    Wandelbots, the creator of the Wandelbots NOVA software platform for industrial robotics, is partnering with SoftServe, a global IT consulting and digital services provider, to scale simulation-first automation using NVIDIA Isaac Sim, enabling virtual validation and real-world deployment with maximum impact.
    Cyngn, a pioneer in autonomous mobile robotics, is integrating its DriveMod technology into Isaac Sim to enable large-scale, high-fidelity virtual testing of advanced autonomous operation. Purpose-built for industrial applications, DriveMod is already deployed on vehicles such as the Motrec MT-160 Tugger and BYD Forklift, delivering sophisticated automation to material handling operations.
    Doosan Robotics, a company specializing in AI robotic solutions, will showcase its “sim to real” solution, using NVIDIA Isaac Sim and cuRobo, demonstrating how to seamlessly transfer tasks from simulation to real robots across a wide range of applications — from manufacturing to service industries.
    Franka Robotics has integrated Isaac GR00T N1.5 into a dual-arm Franka Research 3 (FR3) robot for robotic control. The integration of GR00T N1.5 allows the system to interpret visual input, understand task context and autonomously perform complex manipulation — without the need for task-specific programming or hardcoded logic.
    Image courtesy of Franka Robotics.
    Hexagon, the global leader in measurement technologies, launched its new humanoid, dubbed AEON. With its unique locomotion system and multimodal sensor fusion, and powered by NVIDIA’s three-computer solution, AEON is engineered to perform a wide range of industrial applications, from manipulation and asset inspection to reality capture and operator support.
    Intrinsic, a software and AI robotics company, is integrating Intrinsic Flowstate with Omniverse and OpenUSD for advanced visualization and digital twins that can be used in many industrial use cases. The company is also using NVIDIA foundation models to enhance robot capabilities like grasp planning through AI and simulation technologies.
    SCHUNK, a global leader in gripping systems and automation technology, is showcasing its innovative grasping kit powered by the NVIDIA Jetson AGX Orin module. The kit intelligently detects objects and calculates optimal grasping points. Schunk is also demonstrating seamless simulation-to-reality transfer using IGS Virtuous software — built on Omniverse technologies — to control a real robot through simulation in a pick-and-place scenario.
    Universal Robots is showcasing UR15, its fastest cobot yet. Powered by the UR AI Accelerator — developed with NVIDIA and running on Jetson AGX Orin using CUDA-accelerated Isaac libraries — UR15 helps set a new standard for industrial automation.

    Vention, a full-stack software and hardware automation company, launched its Machine Motion AI, built on CUDA-accelerated Isaac libraries and powered by Jetson. Vention is also expanding its lineup of robotic offerings by adding the FR3 robot from Franka Robotics to its ecosystem, enhancing its solutions for academic and research applications.
    Image courtesy of Vention.
    Learn more about the latest robotics advancements by joining NVIDIA at Automatica, running through Friday, June 27. 
  • The stunning reversal of humanity’s oldest bias

    Perhaps the oldest, most pernicious form of human bias is that of men toward women. It often started at the moment of birth. In ancient Athens, at a public ceremony called the amphidromia, fathers would inspect a newborn and decide whether it would be part of the family, or be cast away. One often socially acceptable reason for abandoning the baby: It was a girl.
    Female infanticide has been distressingly common in many societies — and its practice is not just ancient history. In 1990, the Nobel Prize-winning economist Amartya Sen looked at birth ratios in Asia, North Africa, and China and calculated that more than 100 million women were essentially “missing” — meaning that, based on the normal ratio of boys to girls at birth and the longevity of both genders, a huge number of girls should have been born, but weren’t.
    Sen’s estimate came before the truly widespread adoption of ultrasound tests that could determine the sex of a fetus in utero — which actually made the problem worse, leading to a wave of sex-selective abortions. These were especially common in countries like India and China; the latter’s one-child policy and old biases made families desperate for their one child to be a boy. The Economist has estimated that since 1980 alone, there have been approximately 50 million fewer girls born worldwide than would naturally be expected, which almost certainly means that nearly all of those girls were aborted for no other reason than their sex. The preference for boys was a bias that killed in mass numbers.
    But in one of the most important social shifts of our time, that bias is changing. In a great cover story earlier this month, The Economist reported that the number of annual excess male births has fallen from a peak of 1.7 million in 2000 to around 200,000, which puts it back within the biologically standard birth ratio of 105 boys for every 100 girls.
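    Estimates like Sen’s rest on simple ratio arithmetic: given the number of boys actually born and the natural ratio of about 105 boys per 100 girls, you can compute how many girls should have been born and compare that with how many were. A minimal sketch of that calculation, using illustrative cohort numbers rather than real demographic data:

```python
# Sketch of the "missing girls" arithmetic described above.
# Assumes the biologically standard ratio of ~105 boys per 100 girls;
# the cohort sizes below are illustrative, not real demographic data.

NATURAL_RATIO = 105 / 100  # boys born per girl, absent intervention

def missing_girls(boys_born: float, girls_born: float) -> float:
    """Girls expected from the observed number of boys, minus girls observed."""
    expected_girls = boys_born / NATURAL_RATIO
    return expected_girls - girls_born

# Example: a skewed ratio of 116 boys per 100 girls (as in South Korea, 1990)
# applied to a hypothetical cohort of 1,000,000 girls actually born:
girls = 1_000_000
boys = girls * 1.16
print(round(missing_girls(boys, girls)))  # about 104,762 girls "missing"
```

    Applied to decades of births across whole countries, the same subtraction is what produces figures in the tens of millions.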
    Countries that once had highly skewed sex ratios — like South Korea, which saw almost 116 boys born for every 100 girls in 1990 — now have normal or near-normal ratios. Altogether, The Economist estimated that the decline in sex preference at birth over the past 25 years has saved the equivalent of 7 million girls. That’s comparable to the number of lives saved by anti-smoking efforts in the US. So how, exactly, have we overcome a prejudice that seemed so embedded in human society?
    Success in school and the workplace
    For one, we have relaxed discrimination against girls and women in other ways — in school and in the workplace. With fewer limits, girls are outperforming boys in the classroom. In the most recent international PISA tests, considered the gold standard for evaluating student performance around the world, 15-year-old girls beat their male counterparts in reading in 79 of 81 participating countries or economies, while the historic male advantage in math scores has fallen to single digits. Girls are also dominating in higher education, with 113 female students at that level for every 100 male students. While women continue to earn less than men, the gender pay gap has been shrinking, and in a number of urban areas in the US, young women have actually been outearning young men.
    Government policies have helped accelerate that shift, in part because governments have come to recognize the serious social problems that eventually result from decades of anti-girl discrimination. In countries like South Korea and China, which have long had some of the most skewed gender ratios at birth, governments have cracked down on technologies that enable sex-selective abortion. In India, where female infanticide and neglect have been particularly horrific, slogans like “Save the Daughter, Educate the Daughter” have helped change opinions.
A changing preference

The shift is being seen not just in birth sex ratios, but in opinion polls — and in the actions of would-be parents. Between 1983 and 2003, The Economist reported, the proportion of South Korean women who said it was “necessary” to have a son fell from 48 percent to 6 percent, while nearly half of women now say they want daughters. In Japan, the shift has gone even further — as far back as 2002, 75 percent of couples who wanted only one child said they hoped for a daughter. In the US, which allows sex selection for couples doing in-vitro fertilization, there is growing evidence that would-be parents prefer girls, as do potential adoptive parents. While in the past, parents who had a girl first were more likely to keep trying to have children in an effort to have a boy, the opposite is now true — couples who have a girl first are less likely to keep trying.

A more equal future

There’s still more progress to be made. In northwest India, for instance, birth ratios that overly skew toward boys are still the norm. In regions of sub-Saharan Africa, birth sex ratios may be relatively normal, but post-birth discrimination in the form of poorer nutrition and worse medical care still lingers. And of course, women around the world are still subject to unacceptable levels of violence and discrimination from men.

And some of the reasons for this shift may not be as high-minded as we’d like to think. Boys around the world are struggling in the modern era. They increasingly underperform in education, are more likely to be involved in violent crime, and in general, are failing to launch into adulthood. In the US, 20 percent of men between 25 and 34 still live with their parents, compared to 15 percent of similarly aged women. It also seems to be the case that at least some of the increasing preference for girls is rooted in sexist stereotypes. 
Parents around the world may now prefer girls partly because they see them as more likely to take care of them in their old age — meaning a different kind of bias against women, that they are more natural caretakers, may paradoxically be driving the decline in prejudice against girls at birth.

But make no mistake — the decline of boy preference is a clear mark of social progress, one measured in millions of girls’ lives saved. And maybe one Father’s Day, not too long from now, we’ll reach the point where daughters and sons are simply children: equally loved and equally welcomed.

A version of this story originally appeared in the Good News newsletter.
    The stunning reversal of humanity’s oldest bias
  • A short history of the roadblock

    Barricades, as we know them today, are thought to date back to the European wars of religion. According to most historians, the first barricade went up in Paris in 1588; the word derives from the French barriques, or barrels, which were spontaneously piled together. Barricades have been assembled from the most diverse materials, from cobblestones, tyres, newspapers, dead horses and bags of ice, to omnibuses and e‑scooters. Their tactical logic is close to that of guerrilla warfare: the authorities have to take the barricades in order to claim victory; all that those manning them have to do to prevail is to hold them. 
    The 19th century was the golden age for blocking narrow, labyrinthine streets. Paris had seen barricades go up nine times in the period before the Second Empire; during the July 1830 Revolution alone, 4,000 barricades had been erected. These barricades would not only stop, but also trap troops; people would then throw stones from windows or pour boiling water onto the streets. Georges‑Eugène Haussmann, Napoleon III’s prefect of Paris, famously created wide boulevards to make blocking by barricade more difficult and moving the military easier, and replaced cobblestones with macadam – a surface of crushed stone. As Flaubert observed in his Dictionary of Accepted Ideas: ‘Macadam: has cancelled revolutions. No more means to make barricades. Nevertheless rather inconvenient.’  
    Lead image: Barricades, as we know them today, are thought to have originated in early modern France. A colour engraving attributed to Achille‑Louis Martinet depicts the defence of a barricade during the 1830 July Revolution. Credit: Paris Musées / Musée Carnavalet – Histoire de Paris. Above: the socialist political thinker and activist Louis Auguste Blanqui – who was imprisoned by every regime that ruled France between 1815 and 1880 – drew instructions for how to build an effective barricade

    Under Napoleon III, Baron Haussmann widened Paris’s streets in his 1853–70 renovation of the city, making barricading more difficult
    Credit: Old Books Images / Alamy
    ‘On one hand, he wanted to favour the circulation of ideas,’ the reactionary intellectual Louis Veuillot observed apropos the ambiguous liberalism of the latter period of Napoleon III’s Second Empire. ‘On the other, to ensure the circulation of regiments.’ But ‘anti‑insurgency hardware’, as Justinien Tribillon has called it, also served to chase the working class out of the city centre: Haussmann’s projects amounted to a gigantic form of real-estate speculation, and the 1871 Paris Commune that followed constituted not just a short‑lived anarchist experiment featuring enormous barricades; it also signalled the return of the workers to the centre and, arguably, revenge for their dispossession.   
    By the mid‑19th century, observers questioned whether barricades still had practical meaning. Gottfried Semper’s barricade, constructed for the 1849 Dresden uprising, had proved unconquerable, but Friedrich Engels, one‑time ‘inspector of barricades’ in the Elberfeld insurrection of the same year, already suggested that the barricades’ primary meaning was now moral rather than military – a point to be echoed by Leon Trotsky in the subsequent century. Barricades symbolised bravery and the will to hold out among insurrectionists, and, not least, the determination to destroy one’s possessions – and one’s neighbourhood – rather than put up with further oppression.  
    Not only self‑declared revolutionaries viewed things this way: the reformist Social Democrat leader Eduard Bernstein observed that ‘the barricade fight as a political weapon of the people has been completely eliminated due to changes in weapon technology and cities’ structures’. Bernstein was also picking up on the fact that, in the era of industrialisation, contention happened at least as much on the factory floor as on the streets. The strike, not the food riot or the defence of workers’ quartiers, became the paradigmatic form of conflict. As Joshua Clover has pointed out in his 2016 book Riot. Strike. Riot: The New Era of Uprisings, it was now the price of labour, rather than the price of goods, that caused people to confront the powerful. Blocking production grew more important than blocking the street.
    ‘The only weapons we have are our bodies, and we need to tuck them in places so wheels don’t turn’
    Today, it is again blocking – not just people streaming along the streets in large marches – that is prominently associated with protests. Disrupting circulation is not only an important gesture in the face of climate emergency; blocking transport is a powerful form of protest in an economic system focused on logistics and just‑in‑time distribution. Members of Insulate Britain and Germany’s Last Generation super‑glue themselves to streets to stop car traffic to draw attention to the climate emergency; they have also attached themselves to airport runways. They form a human barricade of sorts, immobilising traffic by making themselves immovable.  
    Today’s protesters have made themselves consciously vulnerable. They in fact follow the advice of the US civil rights leader Bayard Rustin, who explained: ‘The only weapons we have are our bodies, and we need to tuck them in places so wheels don’t turn.’ Making oneself vulnerable might increase the chances of a majority of citizens seeing the importance of the cause which those engaged in civil disobedience are pursuing. Demonstrations – even large, unpredictable ones – are no longer sufficient. They draw too little attention and do not compel a reaction. Naomi Klein proposed the term ‘blockadia’ for ‘a roving transnational conflict zone’ in which people block extraction – be it open‑pit mines, fracking sites or tar sands pipelines – with their bodies. More often than not, these blockades are organised by local people opposing the fossil fuel industry, not environmental activists per se. Blockadia came to denote resistance to the Keystone XL pipeline as well as Canada’s First Nations‑led movement Idle No More.
    In cities, blocking can be accomplished with highly mobile structures. Like the barricade of the 19th century, they can be quickly assembled, yet are difficult to move; unlike old‑style barricades, they can also be quickly disassembled, removed and hidden. Think of super tripods, intricate ‘protest beacons’ based on tensegrity principles, as well as inflatable cobblestones, pioneered by the artist‑activists of Tools for Action.  
    As recently as 1991, newly independent Latvia defended itself against Soviet tanks with the popular construction of barricades, in a series of confrontations that became known as the Barikādes
    Credit: Associated Press / Alamy
    Inversely, roadblocks can be used by police authorities to stop demonstrations and gatherings from taking place – protesters are seen removing such infrastructure in Dhaka during a general strike in 1999
    Credit: REUTERS / Rafiqur Rahman / Bridgeman
    These inflatable objects are highly flexible, but can also be protective against police batons. They pose an awkward challenge to the authorities, who often end up looking ridiculous when dealing with them, and, as one of the inventors pointed out, they are guaranteed to create a media spectacle. This was also true of the 19th‑century barricade: people posed for pictures in front of them. As Wolfgang Scheppe, a curator of Architecture of the Barricade, explains, these images helped the police to find Communards and mete out punishments after the end of the anarchist experiment.
    Much simpler structures can also be highly effective. In 2019, protesters in Hong Kong filled streets with little archways made from just three ordinary bricks: two standing upright, one resting on top. When touched, the top brick would fall and buttress the other two, effectively blocking traffic. In line with their imperative of ‘be water’, protesters would retreat when the police appeared, but the ‘mini‑Stonehenges’ would remain and slow down the authorities.
    Today, elaborate architectures of protest, such as Extinction Rebellion’s ‘tensegrity towers’, are used to blockade roads and distribution networks – in this instance, Rupert Murdoch’s News UK printworks in Broxbourne, for the media group’s failure to report the climate emergency accurately
    Credit: Extinction Rebellion
    In June 2025, protests erupted in Los Angeles against the Trump administration’s deportation policies. Demonstrators barricaded downtown streets using various objects, including the pink public furniture designed by design firm Rios for Gloria Molina Grand Park. LAPD are seen advancing through tear gas
    Credit: Gina Ferazzi / Los Angeles Times via Getty Images
    Roads which radicals might want to target are not just ones in major metropoles and fancy post‑industrial downtowns. Rather, they might block the arteries leading to ‘fulfilment centres’ and harbours with container shipping. The model is not only Occupy Wall Street, which had initially called for the erection of ‘peaceful barricades’, but also the Occupy that led to the Oakland port shutdown in 2011. In short, such roadblocks disrupt what Phil Neel has called a ‘hinterland’ that is often invisible, yet crucial for contemporary capitalism. More recently, Extinction Rebellion targeted Amazon distribution centres in three European countries in November 2021; in the UK, they aimed to disrupt half of all deliveries on a Black Friday.  
    Will such blockades just anger consumers who, after all, are not present but are impatiently waiting for packages at home? One of the hopes associated with the traditional barricade was always that they might create spaces where protesters, police and previously indifferent citizens get talking; French theorists even expected them to become ‘a machine to produce the people’. That could be why military technology has evolved so that the authorities do not have to get close to the barricade: tear gas was first deployed against those on barricades before it was used in the First World War; so‑called riot control vehicles can ever more easily crush barricades. The challenge, then, for anyone who wishes to block is also how to get in other people’s faces – in order to have a chance to convince them of their cause.       

    2025-06-11
    Kristina Rapacki

    WWW.ARCHITECTURAL-REVIEW.COM
    A short history of the roadblock
    Barricades, as we know them today, are thought to date back to the European wars of religion. According to most historians, the first barricade went up in Paris in 1588; the word derives from the French barriques, or barrels, spontaneously put together. They have been assembled from the most diverse materials, from cobblestones, tyres, newspapers, dead horses and bags of ice (during Kyiv’s Euromaidan in 2013–14), to omnibuses and e‑scooters. Their tactical logic is close to that of guerrilla warfare: the authorities have to take the barricades in order to claim victory; all that those manning them have to do to prevail is to hold them.
    The 19th century was the golden age for blocking narrow, labyrinthine streets. Paris had seen barricades go up nine times in the period before the Second Empire; during the July 1830 Revolution alone, 4,000 barricades had been erected (roughly one for every 200 Parisians). These barricades would not only stop, but also trap troops; people would then throw stones from windows or pour boiling water onto the streets. Georges‑Eugène Haussmann, Napoleon III’s prefect of Paris, famously created wide boulevards to make blocking by barricade more difficult and moving the military easier, and replaced cobblestones with macadam – a surface of crushed stone. As Flaubert observed in his Dictionary of Accepted Ideas: ‘Macadam: has cancelled revolutions. No more means to make barricades. Nevertheless rather inconvenient.’
    Lead image: Barricades, as we know them today, are thought to have originated in early modern France. A colour engraving attributed to Achille‑Louis Martinet depicts the defence of a barricade during the 1830 July Revolution. Credit: Paris Musées / Musée Carnavalet – Histoire de Paris
Above: the socialist political thinker and activist Louis Auguste Blanqui – who was imprisoned by every regime that ruled France between 1815 and 1880 – drew instructions for how to build an effective barricade
Under Napoleon III, Baron Haussmann widened Paris’s streets in his 1853–70 renovation of the city, making barricading more difficult Credit: Old Books Images / Alamy
‘On one hand, [the authorities] wanted to favour the circulation of ideas,’ reactionary intellectual Louis Veuillot observed apropos the ambiguous liberalism of the latter period of Napoleon III’s Second Empire. ‘On the other, to ensure the circulation of regiments.’ But ‘anti‑insurgency hardware’, as Justinien Tribillon has called it, also served to chase the working class out of the city centre: Haussmann’s projects amounted to a gigantic form of real‑estate speculation, and the 1871 Paris Commune that followed constituted not just a short‑lived anarchist experiment featuring enormous barricades; it also signalled the return of the workers to the centre and, arguably, revenge for their dispossession.
By the mid‑19th century, observers questioned whether barricades still had practical meaning. Gottfried Semper’s barricade, constructed for the 1849 Dresden uprising, had proved unconquerable, but Friedrich Engels, one‑time ‘inspector of barricades’ in the Elberfeld insurrection of the same year, already suggested that the barricades’ primary meaning was now moral rather than military – a point to be echoed by Leon Trotsky in the subsequent century. Barricades symbolised bravery and the will to hold out among insurrectionists, and, not least, determination rather to destroy one’s possessions – and one’s neighbourhood – than put up with further oppression.
Not only self‑declared revolutionaries viewed things this way: the reformist Social Democrat leader Eduard Bernstein observed that ‘the barricade fight as a political weapon of the people has been completely eliminated due to changes in weapon technology and cities’ structures’. Bernstein was also picking up on the fact that, in the era of industrialisation, contention happened at least as much on the factory floor as on the streets. The strike, not the food riot or the defence of workers’ quartiers, became the paradigmatic form of conflict. As Joshua Clover points out in his 2016 book Riot. Strike. Riot: The New Era of Uprisings, it was now the price of labour, rather than the price of goods, that caused people to confront the powerful. Blocking production grew more important than blocking the street.
Today, it is again blocking – not just people streaming along the streets in large marches – that is prominently associated with protests. Disrupting circulation is not only an important gesture in the face of climate emergency; blocking transport is a powerful form of protest in an economic system focused on logistics and just‑in‑time distribution. Members of Insulate Britain and Germany’s Last Generation super‑glue themselves to streets to stop car traffic to draw attention to the climate emergency; they have also attached themselves to airport runways. They form a human barricade of sorts, immobilising traffic by making themselves immovable.
Today’s protesters have made themselves consciously vulnerable. They in fact follow the advice of US civil rights activist Bayard Rustin, who explained: ‘The only weapons we have are our bodies, and we need to tuck them in places so wheels don’t turn.’ Making oneself vulnerable might increase the chances of a majority of citizens seeing the importance of the cause which those engaged in civil disobedience are pursuing.
Demonstrations – even large, unpredictable ones – are no longer sufficient. They draw too little attention and do not compel a reaction. Naomi Klein proposed the term ‘blockadia’ for ‘a roving transnational conflict zone’ in which people block extraction – be it open‑pit mines, fracking sites or tar sands pipelines – with their bodies. More often than not, these blockades are organised by local people opposing the fossil fuel industry, not environmental activists per se. Blockadia came to denote resistance to the Keystone XL pipeline as well as Canada’s First Nations‑led movement Idle No More.
In cities, blocking can be accomplished with highly mobile structures. Like the barricade of the 19th century, they can be quickly assembled, yet are difficult to move; unlike old‑style barricades, they can also be quickly disassembled, removed and hidden (by those who have the engineering and architectural know‑how). Think of super tripods, intricate ‘protest beacons’ based on tensegrity principles, as well as inflatable cobblestones, pioneered by the artist‑activists of Tools for Action (and as analysed in Nick Newman’s recent volume Protest Architecture).
As recently as 1991, newly independent Latvia defended itself against Soviet tanks with the popular construction of barricades, in a series of confrontations that became known as the Barikādes Credit: Associated Press / Alamy
Conversely, roadblocks can be used by police authorities to stop demonstrations and gatherings from taking place – protesters are seen removing such infrastructure in Dhaka during a general strike in 1999 Credit: REUTERS / Rafiqur Rahman / Bridgeman
These inflatable objects are highly flexible, but can also be protective against police batons. They pose an awkward challenge to the authorities, who often end up looking ridiculous when dealing with them, and, as one of the inventors pointed out, they are guaranteed to create a media spectacle.
This was also true of the 19th‑century barricade: people posed for pictures in front of them. As Wolfgang Scheppe, a curator of Architecture of the Barricade (currently on display at the Arsenale Institute for Politics of Representation in Venice), explains, these images helped the police to find Communards and mete out punishments after the end of the anarchist experiment.
Much simpler structures can also be highly effective. In 2019, protesters in Hong Kong filled streets with little archways made from just three ordinary bricks: two standing upright, one resting on top. When touched, the falling top one would buttress the other two, and effectively block traffic. In line with their imperative of ‘be water’, protesters would retreat when the police appeared, but the ‘mini‑Stonehenges’ would remain and slow down the authorities.
Today, elaborate architectures of protest, such as Extinction Rebellion’s ‘tensegrity towers’, are used to blockade roads and distribution networks – in this instance, Rupert Murdoch’s News UK printworks in Broxbourne, for the media group’s failure to report the climate emergency accurately Credit: Extinction Rebellion
In June 2025, protests erupted in Los Angeles against the Trump administration’s deportation policies. Demonstrators barricaded downtown streets using various objects, including the pink public furniture designed by design firm Rios for Gloria Molina Grand Park. LAPD are seen advancing through tear gas Credit: Gina Ferazzi / Los Angeles Times via Getty Images
Roads which radicals might want to target are not just ones in major metropoles and fancy post‑industrial downtowns. Rather, they might block the arteries leading to ‘fulfilment centres’ and harbours with container shipping. The model is not only Occupy Wall Street, which had initially called for the erection of ‘peaceful barricades’, but also the Occupy that led to the Oakland port shutdown in 2011.
In short, such roadblocks disrupt what Phil Neel has called a ‘hinterland’ that is often invisible, yet crucial for contemporary capitalism. More recently, Extinction Rebellion targeted Amazon distribution centres in three European countries in November 2021; in the UK, they aimed to disrupt half of all deliveries on a Black Friday.
Will such blockades just anger consumers who, after all, are not present but are impatiently waiting for packages at home? One of the hopes associated with the traditional barricade was always that they might create spaces where protesters, police and previously indifferent citizens get talking; French theorists even expected them to become ‘a machine to produce the people’. That could be why military technology has evolved so that the authorities do not have to get close to the barricade: tear gas was first deployed against those on barricades before it was used in the First World War; so‑called riot control vehicles can ever more easily crush barricades. The challenge, then, for anyone who wishes to block is also how to get in other people’s faces – in order to have a chance to convince them of their cause.
2025-06-11 Kristina Rapacki
  • UMass and MIT Test Cold Spray 3D Printing to Repair Aging Massachusetts Bridge

    Researchers from the US-based University of Massachusetts Amherst (UMass), in collaboration with the Massachusetts Institute of Technology (MIT) Department of Mechanical Engineering, have applied cold spray to repair the deteriorating “Brown Bridge” in Great Barrington, built in 1949. The project marks the first known use of this method on bridge infrastructure and aims to evaluate its effectiveness as a faster, more cost-effective, and less disruptive alternative to conventional repair techniques.
    “Now that we’ve completed this proof-of-concept repair, we see a clear path to a solution that is much faster, less costly, easier, and less invasive,” said Simos Gerasimidis, associate professor of civil and environmental engineering at the University of Massachusetts Amherst. “To our knowledge, this is a first. Of course, there is some R&D that needs to be developed, but this is a huge milestone to that,” he added.
    The pilot project is also a collaboration with the Massachusetts Department of Transportation, the Massachusetts Technology Collaborative, the U.S. Department of Transportation, and the Federal Highway Administration. It was supported by the Massachusetts Manufacturing Innovation Initiative, which provided essential equipment for the demonstration.
    Members of the UMass Amherst and MIT Department of Mechanical Engineering research team, led by Simos Gerasimidis. Photo via UMass Amherst.
    Tackling America’s Bridge Crisis with Cold Spray Technology
    Nearly half of the bridges across the United States are in “fair” condition, while 6.8% are classified as “poor,” according to the 2025 Report Card for America’s Infrastructure. In Massachusetts, about 9% of the state’s 5,295 bridges are considered structurally deficient. The costs of restoring this infrastructure are projected to exceed $190 billion—well beyond current funding levels.
    The cold spray method consists of propelling metal powder particles at high velocity onto the beam’s surface. Successive applications build up additional layers, helping restore its thickness and structural integrity. This method has successfully been used to repair large structures such as submarines, airplanes, and ships, but this marks the first instance of its application to a bridge.
    One of cold spray’s key advantages is its ability to be deployed with minimal traffic disruption. “Every time you do repairs on a bridge you have to block traffic, you have to make traffic controls for substantial amounts of time,” explained Gerasimidis. “This will allow us to [apply the technique] on this actual bridge while cars are going [across].”
    To enhance precision, the research team integrated 3D LiDAR scanning technology into the process. Unlike visual inspections, which can be subjective and time-consuming, LiDAR creates high-resolution digital models that pinpoint areas of corrosion. This allows teams to develop targeted repair plans and deposit materials only where needed—reducing waste and potentially extending a bridge’s lifespan.
    Next steps: Testing Cold-Sprayed Repairs
    The bridge is scheduled for demolition in the coming years. When that happens, researchers will retrieve the repaired sections for further analysis. They plan to assess the durability, corrosion resistance, and mechanical performance of the cold-sprayed steel in real-world conditions, comparing it to results from laboratory tests.
    “This is a tremendous collaboration where cutting-edge technology is brought to address a critical need for infrastructure in the commonwealth and across the United States,” said John Hart, Class of 1922 Professor in the Department of Mechanical Engineering at MIT. “I think we’re just at the beginning of a digital transformation of bridge inspection, repair and maintenance, among many other important use cases.”
    3D Printing for Infrastructure Repairs
    Beyond cold spray techniques, other innovative 3D printing methods are emerging to address construction repair challenges. For example, researchers at University College London (UCL) have developed an asphalt 3D printer specifically designed to repair road cracks and potholes. “The material properties of 3D printed asphalt are tunable, and combined with the flexibility and efficiency of the printing platform, this technique offers a compelling new design approach to the maintenance of infrastructure,” the UCL team explained.
    Similarly, in 2018, Cintec, a Wales-based international structural engineering firm, contributed to restoring the historic Government building known as the Red House in the Republic of Trinidad and Tobago. This project, managed by Cintec’s North American branch, marked the first use of additive manufacturing within sacrificial structures. It also featured the installation of what are claimed to be the longest reinforcement anchors ever inserted into a structure—measuring an impressive 36.52 meters.
    Join our Additive Manufacturing Advantage (AMAA) event on July 10th, where AM leaders from Aerospace, Space, and Defense come together to share mission-critical insights. Online and free to attend. Secure your spot now.
    Who won the 2024 3D Printing Industry Awards?
    Subscribe to the 3D Printing Industry newsletter to keep up with the latest 3D printing news.
    You can also follow us on LinkedIn, and subscribe to the 3D Printing Industry YouTube channel to access more exclusive content.
    Featured image shows members of the UMass Amherst and MIT Department of Mechanical Engineering research team, led by Simos Gerasimidis. Photo via UMass Amherst.
    3DPRINTINGINDUSTRY.COM
  • Looking Back at Two Classics: ILM Deploys the Fleet in ‘Star Trek: First Contact’ and ‘Rogue One: A Star Wars Story’

    Guided by visual effects supervisor John Knoll, ILM embraced continually evolving methodologies to craft breathtaking visual effects for the iconic space battles in First Contact and Rogue One.
    By Jay Stobie
    Visual effects supervisor John Knoll confers with modelmakers Kim Smith and John Goodson over the miniature of the U.S.S. Enterprise-E during production of Star Trek: First Contact.
    Bolstered by visual effects from Industrial Light & Magic, Star Trek: First Contact and Rogue One: A Star Wars Story propelled their respective franchises to new heights. While Star Trek Generations welcomed Captain Jean-Luc Picard’s crew to the big screen, First Contact stood as the first Star Trek feature that did not focus on its original captain, the legendary James T. Kirk. Similarly, though Rogue One immediately preceded the events of Star Wars: A New Hope, it was set apart from the episodic Star Wars films and launched an era of storytelling outside of the main Skywalker saga that has gone on to include Solo: A Star Wars Story, The Mandalorian, Andor, Ahsoka, The Acolyte, and more.
    The two films also shared a key ILM contributor, John Knoll, who served as visual effects supervisor on both projects, as well as an executive producer on Rogue One. Now ILM’s executive creative director and senior visual effects supervisor, Knoll – who also conceived the initial framework for Rogue One’s story – guided ILM as it brought its talents to bear on these sci-fi and fantasy epics. The work involved crafting two spectacular starship-packed space clashes – First Contact’s Battle of Sector 001 and Rogue One’s Battle of Scarif. Although these iconic installments were released roughly two decades apart, they represent a captivating case study of how ILM’s approach to visual effects has evolved over time. With this in mind, let’s examine the films’ unforgettable space battles through the lens of fascinating in-universe parallels and the ILM-produced fleets that face off near Earth and Scarif.
    A final frame from the Battle of Scarif in Rogue One: A Star Wars Story.
    A Context for Conflict
    In First Contact, the United Federation of Planets – a 200-year-old interstellar government consisting of more than 150 member worlds – braces itself for an invasion by the Borg – an overwhelmingly powerful collective composed of cybernetic beings who devastate entire planets by assimilating their biological populations and technological innovations. The Borg only send a single vessel, a massive cube containing thousands of hive-minded drones and their queen, pushing the Federation’s Starfleet defenders to Earth’s doorstep. Conversely, in Rogue One, the Rebel Alliance – a fledgling coalition of freedom fighters – seeks to undermine and overthrow the stalwart Galactic Empire – a totalitarian regime preparing to tighten its grip on the galaxy by revealing a horrifying superweapon. A rebel team infiltrates a top-secret vault on Scarif in a bid to steal plans to that battle station, the dreaded Death Star, with hopes of exploiting a vulnerability in its design.
    On the surface, the situations could not seem to be more disparate, particularly in terms of the Federation’s well-established prestige and the Rebel Alliance’s haphazardly organized factions. Yet, upon closer inspection, the spaceborne conflicts at Earth and Scarif are linked by a vital commonality. The threat posed by the Borg is well-known to the Federation, but the sudden intrusion upon its space takes its defenses by surprise. Starfleet assembles any vessel within range – including antiquated Oberth-class science ships – to intercept the Borg cube in the Typhon Sector, only to be forced back to Earth on the edge of defeat. The unsanctioned mission to Scarif undertaken by Jyn Erso and Cassian Andor, and the sudden need to take down the planet’s shield gate, propel the Rebel Alliance fleet into rushing to their rescue with everything from the flagship Profundity to GR-75 medium transports. Whether Federation or Rebel Alliance, these fleets gather in last-ditch efforts to oppose enemies who would embrace their eradication – the Battles of Sector 001 and Scarif are fights for survival.
    From Physical to Digital
    By the time Jonathan Frakes was selected to direct First Contact, Star Trek’s reliance on constructing traditional physical models for its features was gradually giving way to innovative computer graphics models, resulting in the film’s use of both techniques. “If one of the ships was to be seen full-screen and at length,” associate visual effects supervisor George Murphy told Cinefex’s Kevin H. Martin, “we knew it would be done as a stage model. Ships that would be doing a lot of elaborate maneuvers in space battle scenes would be created digitally.” In fact, physical and CG versions of the U.S.S. Enterprise-E appear in the film, with the latter being harnessed in shots involving the vessel’s entry into a temporal vortex at the conclusion of the Battle of Sector 001.
    Despite the technological leaps ILM pioneered in the decades between First Contact and Rogue One, the studio still considered filming physical miniatures for certain ship-related shots in the latter film. The feature’s fleets were ultimately created digitally to allow for changes throughout post-production. “If it’s a photographed miniature element, it’s not possible to go back and make adjustments. So it’s the additional flexibility that comes with the computer graphics models that’s very attractive to many people,” John Knoll relayed to writer Jon Witmer at American Cinematographer’s TheASC.com.
    However, Knoll aimed to develop computer graphics that retained the same high-quality details as their physical counterparts, leading ILM to employ a modern approach to a time-honored modelmaking tactic. “I also wanted to emulate the kit-bashing aesthetic that had been part of Star Wars from the very beginning, where a lot of mechanical detail had been added onto the ships by using little pieces from plastic model kits,” explained Knoll in his chat with TheASC.com. For Rogue One, ILM replicated the process by obtaining such kits, scanning their parts, building a computer graphics library, and applying the CG parts to digitally modeled ships. “I’m very happy to say it was super-successful,” concluded Knoll. “I think a lot of our digital models look like they are motion-control models.”
    John Knoll confers with Kim Smith and John Goodson with the miniature of the U.S.S. Enterprise-E during production of Star Trek: First Contact.
    Legendary Lineages
    In First Contact, Captain Picard commanded a brand-new vessel, the Sovereign-class U.S.S. Enterprise-E, continuing the celebrated starship’s legacy in terms of its famous name and design aesthetic. Designed by John Eaves and developed into blueprints by Rick Sternbach, the Enterprise-E was built into a 10-foot physical model by ILM model project supervisor John Goodson and his shop’s talented team. ILM infused the ship with extraordinary detail, including viewports equipped with backlit set images from the craft’s predecessor, the U.S.S. Enterprise-D. For the vessel’s larger windows, namely those associated with the observation lounge and arboretum, ILM took a painstakingly practical approach to match the interiors shown with the real-world set pieces. “We filled that area of the model with tiny, micro-scale furniture,” Goodson informed Cinefex, “including tables and chairs.”
    Rogue One’s rebel team initially traversed the galaxy in a U-wing transport/gunship, which, much like the Enterprise-E, was a unique vessel that nonetheless channeled a certain degree of inspiration from a classic design. Lucasfilm’s Doug Chiang, a co-production designer for Rogue One, referred to the U-wing as the film’s “Huey helicopter version of an X-wing” in the Designing Rogue One bonus featurette on Disney+ before revealing that, “Towards the end of the design cycle, we actually decided that maybe we should put in more X-wing features. And so we took the X-wing engines and literally mounted them onto the configuration that we had going.” Modeled by ILM digital artist Colie Wertz, the U-wing’s final computer graphics design subtly incorporated these X-wing influences to give the transport a distinctive feel without making the craft seem out of place within the rebel fleet.
    While ILM’s work on the Enterprise-E’s viewports offered a compelling view toward the ship’s interior, a breakthrough LED setup for Rogue One permitted ILM to obtain realistic lighting on actors as they looked out from their ships and into the space around them. “All of our major spaceship cockpit scenes were done that way, with the gimbal in this giant horseshoe of LED panels we got from VER, and we prepared graphics that went on the screens,” John Knoll shared with American Cinematographer’s Benjamin B and Jon D. Witmer. Furthermore, in Disney+’s Rogue One: Digital Storytelling bonus featurette, visual effects producer Janet Lewin noted, “For the actors, I think, in the space battle cockpits, for them to be able to see what was happening in the battle brought a higher level of accuracy to their performance.”
    The U.S.S. Enterprise-E in Star Trek: First Contact.
    Familiar Foes
    To transport First Contact’s Borg invaders, John Goodson’s team at ILM resurrected the Borg cube design previously seen in Star Trek: The Next Generation and Star Trek: Deep Space Nine, creating a nearly three-foot physical model to replace the one from the series. Art consultant and ILM veteran Bill George proposed that the cube’s seemingly straightforward layout be augmented with a complex network of photo-etched brass, a suggestion which produced a jagged surface and offered a visual that was both intricate and menacing. ILM also developed a two-foot motion-control model for a Borg sphere, a brand-new auxiliary vessel that emerged from the cube. “We vacuformed about 15 different patterns that conformed to this spherical curve and covered those with a lot of molded and cast pieces. Then we added tons of acid-etched brass over it, just like we had on the cube,” Goodson outlined to Cinefex’s Kevin H. Martin.
    As for Rogue One’s villainous fleet, reproducing the original trilogy’s Death Star and Imperial Star Destroyers centered upon translating physical models into digital assets. Although ILM no longer possessed A New Hope’s three-foot Death Star shooting model, John Knoll recreated the station’s surface paneling by gathering archival images, and as he spelled out to writer Joe Fordham in Cinefex, “I pieced all the images together. I unwrapped them into texture space and projected them onto a sphere with a trench. By doing that with enough pictures, I got pretty complete coverage of the original model, and that became a template upon which to redraw very high-resolution texture maps. Every panel, every vertical striped line, I matched from a photograph. It was as accurate as it was possible to be as a reproduction of the original model.”
    Knoll’s investigative eye continued to pay dividends when analyzing the three-foot and eight-foot Star Destroyer motion-control models, which had been built for A New Hope and Star Wars: The Empire Strikes Back, respectively. “Our general mantra was, ‘Match your memory of it more than the reality,’ because sometimes you go look at the actual prop in the archive building or you look back at the actual shot from the movie, and you go, ‘Oh, I remember it being a little better than that,’” Knoll conveyed to TheASC.com. This philosophy motivated ILM to combine elements from those two physical models into a single digital design. “Generally, we copied the three-footer for details like the superstructure on the top of the bridge, but then we copied the internal lighting plan from the eight-footer,” Knoll explained. “And then the upper surface of the three-footer was relatively undetailed because there were no shots that saw it closely, so we took a lot of the high-detail upper surface from the eight-footer. So it’s this amalgam of the two models, but the goal was to try to make it look like you remember it from A New Hope.”
    A final frame from Rogue One: A Star Wars Story.
    Forming Up the Fleets
    In addition to the U.S.S. Enterprise-E, the Battle of Sector 001 debuted numerous vessels representing four new Starfleet ship classes – the Akira, Steamrunner, Saber, and Norway – all designed by ILM visual effects art director Alex Jaeger. “Since we figured a lot of the background action in the space battle would be done with computer graphics ships that needed to be built from scratch anyway, I realized that there was no reason not to do some new designs,” John Knoll told American Cinematographer writer Ron Magid. Used in previous Star Trek projects, older physical models for the Oberth and Nebula classes were mixed into the fleet for good measure, though the vast majority of the armada originated as computer graphics.
    Over at Scarif, ILM portrayed the Rebel Alliance forces with computer graphics models of fresh designs, live-action versions of Star Wars Rebels’ VCX-100 light freighter Ghost and Hammerhead corvettes, and Star Wars staples. These ships face off against two Imperial Star Destroyers and squadrons of TIE fighters, and – upon their late arrival to the battle – Darth Vader’s Star Destroyer and the Death Star. The Tantive IV, a CR90 corvette more popularly referred to as a blockade runner, made its own special cameo at the tail end of the fight. As Princess Leia Organa’s personal ship, the Tantive IV received the Death Star plans and fled the scene, destined to be captured by Vader’s Star Destroyer at the beginning of A New Hope. And, while we’re on the subject of intricate starship maneuvers and space-based choreography…
    Although the First Contact team could plan visual effects shots with animated storyboards, ILM supplied Gareth Edwards with a next-level virtual viewfinder that allowed the director to select his shots by immersing himself among Rogue One’s ships in real time. “What we wanted to do is give Gareth the opportunity to shoot his space battles and other all-digital scenes the same way he shoots his live-action. Then he could go in with this sort of virtual viewfinder and view the space battle going on, and figure out what the best angle was to shoot those ships from,” senior animation supervisor Hal Hickel described in the Rogue One: Digital Storytelling featurette. Hickel divulged that the sequence involving the dish array docking with the Death Star was an example of the “spontaneous discovery of great angles,” as the scene was never storyboarded or previsualized.
    Visual effects supervisor John Knoll with director Gareth Edwards during production of Rogue One: A Star Wars Story.
    Tough Little Ships
    The Federation and Rebel Alliance each deployed “tough little ships” in their respective conflicts, namely the U.S.S. Defiant from Deep Space Nine and the Tantive IV from A New Hope. VisionArt had already built a CG Defiant for the Deep Space Nine series, but ILM upgraded the model with images gathered from the ship’s three-foot physical model. A similar tactic was taken to bring the Tantive IV into the digital realm for Rogue One. “This was the Blockade Runner. This was the most accurate 1:1 reproduction we could possibly have made,” model supervisor Russell Paul declared to Cinefex’s Joe Fordham. “We did an extensive photo reference shoot and photogrammetry re-creation of the miniature. From there, we built it out as accurately as possible.” Speaking of sturdy ships, if you look very closely, you can spot a model of the Millennium Falcon flashing across the background as the U.S.S. Defiant makes an attack run on the Borg cube at the Battle of Sector 001!
    Exploration and Hope
    The in-universe ramifications that materialize from the Battles of Sector 001 and Scarif are monumental. The destruction of the Borg cube compels the Borg Queen to travel back in time in an attempt to vanquish Earth before the Federation can even be formed, but Captain Picard and the Enterprise-E foil the plot and end up helping their 21st century ancestors make “first contact” with another species, the logic-revering Vulcans. The post-Scarif benefits take longer to play out for the Rebel Alliance, but the theft of the Death Star plans eventually leads to the superweapon’s destruction. The Galactic Civil War is far from over, but Scarif is a significant step in the Alliance’s effort to overthrow the Empire.
    The visual effects ILM provided for First Contact and Rogue One contributed significantly to the critical and commercial acclaim both pictures enjoyed, a victory reflecting the relentless dedication, tireless work ethic, and innovative spirit embodied by visual effects supervisor John Knoll and ILM’s entire staff. While being interviewed for The Making of Star Trek: First Contact, actor Patrick Stewart praised ILM’s invaluable influence, emphasizing, “ILM was with us, on this movie, almost every day on set. There is so much that they are involved in.” And, regardless of your personal preferences – phasers or lasers, photon torpedoes or proton torpedoes, warp speed or hyperspace – perhaps Industrial Light & Magic’s ability to infuse excitement into both franchises demonstrates that Star Trek and Star Wars encompass themes that are not competitive, but compatible. After all, what goes together better than exploration and hope?

    Jay Stobie is a writer, author, and consultant who has contributed articles to ILM.com, Skysound.com, Star Wars Insider, StarWars.com, Star Trek Explorer, Star Trek Magazine, and StarTrek.com. Jay loves sci-fi, fantasy, and film, and you can learn more about him by visiting JayStobie.com or finding him on Twitter, Instagram, and other social media platforms at @StobiesGalaxy.
    Looking Back at Two Classics: ILM Deploys the Fleet in ‘Star Trek: First Contact’ and ‘Rogue One: A Star Wars Story’
    Guided by visual effects supervisor John Knoll, ILM embraced continually evolving methodologies to craft breathtaking visual effects for the iconic space battles in First Contact and Rogue One. By Jay Stobie Visual effects supervisor John Knollconfers with modelmakers Kim Smith and John Goodson with the miniature of the U.S.S. Enterprise-E during production of Star Trek: First Contact. Bolstered by visual effects from Industrial Light & Magic, Star Trek: First Contactand Rogue One: A Star Wars Storypropelled their respective franchises to new heights. While Star Trek Generationswelcomed Captain Jean-Luc Picard’screw to the big screen, First Contact stood as the first Star Trek feature that did not focus on its original captain, the legendary James T. Kirk. Similarly, though Rogue One immediately preceded the events of Star Wars: A New Hope, it was set apart from the episodic Star Wars films and launched an era of storytelling outside of the main Skywalker saga that has gone on to include Solo: A Star Wars Story, The Mandalorian, Andor, Ahsoka, The Acolyte, and more. The two films also shared a key ILM contributor, John Knoll, who served as visual effects supervisor on both projects, as well as an executive producer on Rogue One. Currently, ILM’s executive creative director and senior visual effects supervisor, Knoll – who also conceived the initial framework for Rogue One’s story – guided ILM as it brought its talents to bear on these sci-fi and fantasy epics. The work involved crafting two spectacular starship-packed space clashes – First Contact’s Battle of Sector 001 and Rogue One’s Battle of Scarif. Although these iconic installments were released roughly two decades apart, they represent a captivating case study of how ILM’s approach to visual effects has evolved over time. 
With this in mind, let’s examine the films’ unforgettable space battles through the lens of fascinating in-universe parallels and the ILM-produced fleets that face off near Earth and Scarif. A final frame from the Battle of Scarif in Rogue One: A Star Wars Story. A Context for Conflict In First Contact, the United Federation of Planets – a 200-year-old interstellar government consisting of more than 150 member worlds – braces itself for an invasion by the Borg – an overwhelmingly powerful collective composed of cybernetic beings who devastate entire planets by assimilating their biological populations and technological innovations. The Borg only send a single vessel, a massive cube containing thousands of hive-minded drones and their queen, pushing the Federation’s Starfleet defenders to Earth’s doorstep. Conversely, in Rogue One, the Rebel Alliance – a fledgling coalition of freedom fighters – seeks to undermine and overthrow the stalwart Galactic Empire – a totalitarian regime preparing to tighten its grip on the galaxy by revealing a horrifying superweapon. A rebel team infiltrates a top-secret vault on Scarif in a bid to steal plans to that battle station, the dreaded Death Star, with hopes of exploiting a vulnerability in its design. On the surface, the situations could not seem to be more disparate, particularly in terms of the Federation’s well-established prestige and the Rebel Alliance’s haphazardly organized factions. Yet, upon closer inspection, the spaceborne conflicts at Earth and Scarif are linked by a vital commonality. The threat posed by the Borg is well-known to the Federation, but the sudden intrusion upon their space takes its defenses by surprise. Starfleet assembles any vessel within range – including antiquated Oberth-class science ships – to intercept the Borg cube in the Typhon Sector, only to be forced back to Earth on the edge of defeat. 
The unsanctioned mission to Scarif with Jyn Ersoand Cassian Andorand the sudden need to take down the planet’s shield gate propels the Rebel Alliance fleet into rushing to their rescue with everything from their flagship Profundity to GR-75 medium transports. Whether Federation or Rebel Alliance, these fleets gather in last-ditch efforts to oppose enemies who would embrace their eradication – the Battles of Sector 001 and Scarif are fights for survival. From Physical to Digital By the time Jonathan Frakes was selected to direct First Contact, Star Trek’s reliance on constructing traditional physical modelsfor its features was gradually giving way to innovative computer graphicsmodels, resulting in the film’s use of both techniques. “If one of the ships was to be seen full-screen and at length,” associate visual effects supervisor George Murphy told Cinefex’s Kevin H. Martin, “we knew it would be done as a stage model. Ships that would be doing a lot of elaborate maneuvers in space battle scenes would be created digitally.” In fact, physical and CG versions of the U.S.S. Enterprise-E appear in the film, with the latter being harnessed in shots involving the vessel’s entry into a temporal vortex at the conclusion of the Battle of Sector 001. Despite the technological leaps that ILM pioneered in the decades between First Contact and Rogue One, they considered filming physical miniatures for certain ship-related shots in the latter film. ILM considered filming physical miniatures for certain ship-related shots in Rogue One. The feature’s fleets were ultimately created digitally to allow for changes throughout post-production. “If it’s a photographed miniature element, it’s not possible to go back and make adjustments. So it’s the additional flexibility that comes with the computer graphics models that’s very attractive to many people,” John Knoll relayed to writer Jon Witmer at American Cinematographer’s TheASC.com. 
However, Knoll aimed to develop computer graphics that retained the same high-quality details as their physical counterparts, leading ILM to employ a modern approach to a time-honored modelmaking tactic. “I also wanted to emulate the kit-bashing aesthetic that had been part of Star Wars from the very beginning, where a lot of mechanical detail had been added onto the ships by using little pieces from plastic model kits,” explained Knoll in his chat with TheASC.com. For Rogue One, ILM replicated the process by obtaining such kits, scanning their parts, building a computer graphics library, and applying the CG parts to digitally modeled ships. “I’m very happy to say it was super-successful,” concluded Knoll. “I think a lot of our digital models look like they are motion-control models.” John Knollconfers with Kim Smith and John Goodson with the miniature of the U.S.S. Enterprise-E during production of Star Trek: First Contact. Legendary Lineages In First Contact, Captain Picard commanded a brand-new vessel, the Sovereign-class U.S.S. Enterprise-E, continuing the celebrated starship’s legacy in terms of its famous name and design aesthetic. Designed by John Eaves and developed into blueprints by Rick Sternbach, the Enterprise-E was built into a 10-foot physical model by ILM model project supervisor John Goodson and his shop’s talented team. ILM infused the ship with extraordinary detail, including viewports equipped with backlit set images from the craft’s predecessor, the U.S.S. Enterprise-D. For the vessel’s larger windows, namely those associated with the observation lounge and arboretum, ILM took a painstakingly practical approach to match the interiors shown with the real-world set pieces. 
“We filled that area of the model with tiny, micro-scale furniture,” Goodson informed Cinefex, “including tables and chairs.” Rogue One’s rebel team initially traversed the galaxy in a U-wing transport/gunship, which, much like the Enterprise-E, was a unique vessel that nonetheless channeled a certain degree of inspiration from a classic design. Lucasfilm’s Doug Chiang, a co-production designer for Rogue One, referred to the U-wing as the film’s “Huey helicopter version of an X-wing” in the Designing Rogue One bonus featurette on Disney+ before revealing that, “Towards the end of the design cycle, we actually decided that maybe we should put in more X-wing features. And so we took the X-wing engines and literally mounted them onto the configuration that we had going.” Modeled by ILM digital artist Colie Wertz, the U-wing’s final computer graphics design subtly incorporated these X-wing influences to give the transport a distinctive feel without making the craft seem out of place within the rebel fleet. While ILM’s work on the Enterprise-E’s viewports offered a compelling view toward the ship’s interior, a breakthrough LED setup for Rogue One permitted ILM to obtain realistic lighting on actors as they looked out from their ships and into the space around them. “All of our major spaceship cockpit scenes were done that way, with the gimbal in this giant horseshoe of LED panels we got fromVER, and we prepared graphics that went on the screens,” John Knoll shared with American Cinematographer’s Benjamin B and Jon D. Witmer. Furthermore, in Disney+’s Rogue One: Digital Storytelling bonus featurette, visual effects producer Janet Lewin noted, “For the actors, I think, in the space battle cockpits, for them to be able to see what was happening in the battle brought a higher level of accuracy to their performance.” The U.S.S. Enterprise-E in Star Trek: First Contact. 
Familiar Foes To transport First Contact’s Borg invaders, John Goodson’s team at ILM resurrected the Borg cube design previously seen in Star Trek: The Next Generationand Star Trek: Deep Space Nine, creating a nearly three-foot physical model to replace the one from the series. Art consultant and ILM veteran Bill George proposed that the cube’s seemingly straightforward layout be augmented with a complex network of photo-etched brass, a suggestion which produced a jagged surface and offered a visual that was both intricate and menacing. ILM also developed a two-foot motion-control model for a Borg sphere, a brand-new auxiliary vessel that emerged from the cube. “We vacuformed about 15 different patterns that conformed to this spherical curve and covered those with a lot of molded and cast pieces. Then we added tons of acid-etched brass over it, just like we had on the cube,” Goodson outlined to Cinefex’s Kevin H. Martin. As for Rogue One’s villainous fleet, reproducing the original trilogy’s Death Star and Imperial Star Destroyers centered upon translating physical models into digital assets. Although ILM no longer possessed A New Hope’s three-foot Death Star shooting model, John Knoll recreated the station’s surface paneling by gathering archival images, and as he spelled out to writer Joe Fordham in Cinefex, “I pieced all the images together. I unwrapped them into texture space and projected them onto a sphere with a trench. By doing that with enough pictures, I got pretty complete coverage of the original model, and that became a template upon which to redraw very high-resolution texture maps. Every panel, every vertical striped line, I matched from a photograph. 
It was as accurate as it was possible to be as a reproduction of the original model.” Knoll’s investigative eye continued to pay dividends when analyzing the three-foot and eight-foot Star Destroyer motion-control models, which had been built for A New Hope and Star Wars: The Empire Strikes Back, respectively. “Our general mantra was, ‘Match your memory of it more than the reality,’ because sometimes you go look at the actual prop in the archive building or you look back at the actual shot from the movie, and you go, ‘Oh, I remember it being a little better than that,’” Knoll conveyed to TheASC.com. This philosophy motivated ILM to combine elements from those two physical models into a single digital design. “Generally, we copied the three-footer for details like the superstructure on the top of the bridge, but then we copied the internal lighting plan from the eight-footer,” Knoll explained. “And then the upper surface of the three-footer was relatively undetailed because there were no shots that saw it closely, so we took a lot of the high-detail upper surface from the eight-footer. So it’s this amalgam of the two models, but the goal was to try to make it look like you remember it from A New Hope.” A final frame from Rogue One: A Star Wars Story. Forming Up the Fleets In addition to the U.S.S. Enterprise-E, the Battle of Sector 001 debuted numerous vessels representing four new Starfleet ship classes – the Akira, Steamrunner, Saber, and Norway – all designed by ILM visual effects art director Alex Jaeger. “Since we figured a lot of the background action in the space battle would be done with computer graphics ships that needed to be built from scratch anyway, I realized that there was no reason not to do some new designs,” John Knoll told American Cinematographer writer Ron Magid. 
    Looking Back at Two Classics: ILM Deploys the Fleet in ‘Star Trek: First Contact’ and ‘Rogue One: A Star Wars Story’
    Guided by visual effects supervisor John Knoll, ILM embraced continually evolving methodologies to craft breathtaking visual effects for the iconic space battles in First Contact and Rogue One. By Jay Stobie Visual effects supervisor John Knoll (right) confers with modelmakers Kim Smith and John Goodson with the miniature of the U.S.S. Enterprise-E during production of Star Trek: First Contact (Credit: ILM). Bolstered by visual effects from Industrial Light & Magic, Star Trek: First Contact (1996) and Rogue One: A Star Wars Story (2016) propelled their respective franchises to new heights. While Star Trek Generations (1994) welcomed Captain Jean-Luc Picard’s (Patrick Stewart) crew to the big screen, First Contact stood as the first Star Trek feature that did not focus on its original captain, the legendary James T. Kirk (William Shatner). Similarly, though Rogue One immediately preceded the events of Star Wars: A New Hope (1977), it was set apart from the episodic Star Wars films and launched an era of storytelling outside of the main Skywalker saga that has gone on to include Solo: A Star Wars Story (2018), The Mandalorian (2019-23), Andor (2022-25), Ahsoka (2023), The Acolyte (2024), and more. The two films also shared a key ILM contributor, John Knoll, who served as visual effects supervisor on both projects, as well as an executive producer on Rogue One. Now ILM’s executive creative director and senior visual effects supervisor, Knoll – who also conceived the initial framework for Rogue One’s story – guided ILM as it brought its talents to bear on these sci-fi and fantasy epics. The work involved crafting two spectacular starship-packed space clashes – First Contact’s Battle of Sector 001 and Rogue One’s Battle of Scarif. Although these iconic installments were released roughly two decades apart, they represent a captivating case study of how ILM’s approach to visual effects has evolved over time. 
With this in mind, let’s examine the films’ unforgettable space battles through the lens of fascinating in-universe parallels and the ILM-produced fleets that face off near Earth and Scarif. A final frame from the Battle of Scarif in Rogue One: A Star Wars Story (Credit: ILM & Lucasfilm). A Context for Conflict In First Contact, the United Federation of Planets – a 200-year-old interstellar government consisting of more than 150 member worlds – braces itself for an invasion by the Borg – an overwhelmingly powerful collective composed of cybernetic beings who devastate entire planets by assimilating their biological populations and technological innovations. The Borg only send a single vessel, a massive cube containing thousands of hive-minded drones and their queen, pushing the Federation’s Starfleet defenders to Earth’s doorstep. Conversely, in Rogue One, the Rebel Alliance – a fledgling coalition of freedom fighters – seeks to undermine and overthrow the stalwart Galactic Empire – a totalitarian regime preparing to tighten its grip on the galaxy by revealing a horrifying superweapon. A rebel team infiltrates a top-secret vault on Scarif in a bid to steal plans to that battle station, the dreaded Death Star, with hopes of exploiting a vulnerability in its design. On the surface, the situations could not seem to be more disparate, particularly in terms of the Federation’s well-established prestige and the Rebel Alliance’s haphazardly organized factions. Yet, upon closer inspection, the spaceborne conflicts at Earth and Scarif are linked by a vital commonality. The threat posed by the Borg is well-known to the Federation, but the sudden intrusion upon their space takes its defenses by surprise. Starfleet assembles any vessel within range – including antiquated Oberth-class science ships – to intercept the Borg cube in the Typhon Sector, only to be forced back to Earth on the edge of defeat. 
The unsanctioned mission to Scarif with Jyn Erso (Felicity Jones) and Cassian Andor (Diego Luna) and the sudden need to take down the planet’s shield gate propels the Rebel Alliance fleet into rushing to their rescue with everything from their flagship Profundity to GR-75 medium transports. Whether Federation or Rebel Alliance, these fleets gather in last-ditch efforts to oppose enemies who would embrace their eradication – the Battles of Sector 001 and Scarif are fights for survival. From Physical to Digital By the time Jonathan Frakes was selected to direct First Contact, Star Trek’s reliance on constructing traditional physical models (many of which were built by ILM) for its features was gradually giving way to innovative computer graphics (CG) models, resulting in the film’s use of both techniques. “If one of the ships was to be seen full-screen and at length,” associate visual effects supervisor George Murphy told Cinefex’s Kevin H. Martin, “we knew it would be done as a stage model. Ships that would be doing a lot of elaborate maneuvers in space battle scenes would be created digitally.” In fact, physical and CG versions of the U.S.S. Enterprise-E appear in the film, with the latter being harnessed in shots involving the vessel’s entry into a temporal vortex at the conclusion of the Battle of Sector 001. Despite the technological leaps that ILM pioneered in the decades between First Contact and Rogue One, the team still considered filming physical miniatures for certain ship-related shots in Rogue One. The feature’s fleets were ultimately created digitally to allow for changes throughout post-production. “If it’s a photographed miniature element, it’s not possible to go back and make adjustments. 
So it’s the additional flexibility that comes with the computer graphics models that’s very attractive to many people,” John Knoll relayed to writer Jon Witmer at American Cinematographer’s TheASC.com. However, Knoll aimed to develop computer graphics that retained the same high-quality details as their physical counterparts, leading ILM to employ a modern approach to a time-honored modelmaking tactic. “I also wanted to emulate the kit-bashing aesthetic that had been part of Star Wars from the very beginning, where a lot of mechanical detail had been added onto the ships by using little pieces from plastic model kits,” explained Knoll in his chat with TheASC.com. For Rogue One, ILM replicated the process by obtaining such kits, scanning their parts, building a computer graphics library, and applying the CG parts to digitally modeled ships. “I’m very happy to say it was super-successful,” concluded Knoll. “I think a lot of our digital models look like they are motion-control models.” John Knoll (second from left) confers with Kim Smith and John Goodson with the miniature of the U.S.S. Enterprise-E during production of Star Trek: First Contact (Credit: ILM). Legendary Lineages In First Contact, Captain Picard commanded a brand-new vessel, the Sovereign-class U.S.S. Enterprise-E, continuing the celebrated starship’s legacy in terms of its famous name and design aesthetic. Designed by John Eaves and developed into blueprints by Rick Sternbach, the Enterprise-E was built into a 10-foot physical model by ILM model project supervisor John Goodson and his shop’s talented team. ILM infused the ship with extraordinary detail, including viewports equipped with backlit set images from the craft’s predecessor, the U.S.S. Enterprise-D. For the vessel’s larger windows, namely those associated with the observation lounge and arboretum, ILM took a painstakingly practical approach to match the interiors shown with the real-world set pieces. 
“We filled that area of the model with tiny, micro-scale furniture,” Goodson informed Cinefex, “including tables and chairs.” Rogue One’s rebel team initially traversed the galaxy in a U-wing transport/gunship, which, much like the Enterprise-E, was a unique vessel that nonetheless channeled a certain degree of inspiration from a classic design. Lucasfilm’s Doug Chiang, a co-production designer for Rogue One, referred to the U-wing as the film’s “Huey helicopter version of an X-wing” in the Designing Rogue One bonus featurette on Disney+ before revealing that, “Towards the end of the design cycle, we actually decided that maybe we should put in more X-wing features. And so we took the X-wing engines and literally mounted them onto the configuration that we had going.” Modeled by ILM digital artist Colie Wertz, the U-wing’s final computer graphics design subtly incorporated these X-wing influences to give the transport a distinctive feel without making the craft seem out of place within the rebel fleet. While ILM’s work on the Enterprise-E’s viewports offered a compelling view toward the ship’s interior, a breakthrough LED setup for Rogue One permitted ILM to obtain realistic lighting on actors as they looked out from their ships and into the space around them. “All of our major spaceship cockpit scenes were done that way, with the gimbal in this giant horseshoe of LED panels we got from [equipment vendor] VER, and we prepared graphics that went on the screens,” John Knoll shared with American Cinematographer’s Benjamin B and Jon D. Witmer. Furthermore, in Disney+’s Rogue One: Digital Storytelling bonus featurette, visual effects producer Janet Lewin noted, “For the actors, I think, in the space battle cockpits, for them to be able to see what was happening in the battle brought a higher level of accuracy to their performance.” The U.S.S. Enterprise-E in Star Trek: First Contact (Credit: Paramount). 
Familiar Foes To transport First Contact’s Borg invaders, John Goodson’s team at ILM resurrected the Borg cube design previously seen in Star Trek: The Next Generation (1987) and Star Trek: Deep Space Nine (1993), creating a nearly three-foot physical model to replace the one from the series. Art consultant and ILM veteran Bill George proposed that the cube’s seemingly straightforward layout be augmented with a complex network of photo-etched brass, a suggestion which produced a jagged surface and offered a visual that was both intricate and menacing. ILM also developed a two-foot motion-control model for a Borg sphere, a brand-new auxiliary vessel that emerged from the cube. “We vacuformed about 15 different patterns that conformed to this spherical curve and covered those with a lot of molded and cast pieces. Then we added tons of acid-etched brass over it, just like we had on the cube,” Goodson outlined to Cinefex’s Kevin H. Martin. As for Rogue One’s villainous fleet, reproducing the original trilogy’s Death Star and Imperial Star Destroyers centered upon translating physical models into digital assets. Although ILM no longer possessed A New Hope’s three-foot Death Star shooting model, John Knoll recreated the station’s surface paneling by gathering archival images, and as he spelled out to writer Joe Fordham in Cinefex, “I pieced all the images together. I unwrapped them into texture space and projected them onto a sphere with a trench. By doing that with enough pictures, I got pretty complete coverage of the original model, and that became a template upon which to redraw very high-resolution texture maps. Every panel, every vertical striped line, I matched from a photograph. 
It was as accurate as it was possible to be as a reproduction of the original model.” Knoll’s investigative eye continued to pay dividends when analyzing the three-foot and eight-foot Star Destroyer motion-control models, which had been built for A New Hope and Star Wars: The Empire Strikes Back (1980), respectively. “Our general mantra was, ‘Match your memory of it more than the reality,’ because sometimes you go look at the actual prop in the archive building or you look back at the actual shot from the movie, and you go, ‘Oh, I remember it being a little better than that,’” Knoll conveyed to TheASC.com. This philosophy motivated ILM to combine elements from those two physical models into a single digital design. “Generally, we copied the three-footer for details like the superstructure on the top of the bridge, but then we copied the internal lighting plan from the eight-footer,” Knoll explained. “And then the upper surface of the three-footer was relatively undetailed because there were no shots that saw it closely, so we took a lot of the high-detail upper surface from the eight-footer. So it’s this amalgam of the two models, but the goal was to try to make it look like you remember it from A New Hope.” A final frame from Rogue One: A Star Wars Story (Credit: ILM & Lucasfilm). Forming Up the Fleets In addition to the U.S.S. Enterprise-E, the Battle of Sector 001 debuted numerous vessels representing four new Starfleet ship classes – the Akira, Steamrunner, Saber, and Norway – all designed by ILM visual effects art director Alex Jaeger. “Since we figured a lot of the background action in the space battle would be done with computer graphics ships that needed to be built from scratch anyway, I realized that there was no reason not to do some new designs,” John Knoll told American Cinematographer writer Ron Magid. 
Used in previous Star Trek projects, older physical models for the Oberth and Nebula classes were mixed into the fleet for good measure, though the vast majority of the armada originated as computer graphics. Over at Scarif, ILM portrayed the Rebel Alliance forces with computer graphics models of fresh designs (the MC75 cruiser Profundity and U-wings), live-action versions of Star Wars Rebels’ VCX-100 light freighter Ghost and Hammerhead corvettes, and Star Wars staples (Nebulon-B frigates, X-wings, Y-wings, and more). These ships face off against two Imperial Star Destroyers and squadrons of TIE fighters, and – upon their late arrival to the battle – Darth Vader’s Star Destroyer and the Death Star. The Tantive IV, a CR90 corvette more popularly referred to as a blockade runner, made its own special cameo at the tail end of the fight. As Princess Leia Organa’s (Carrie Fisher and Ingvild Deila) personal ship, the Tantive IV received the Death Star plans and fled the scene, destined to be captured by Vader’s Star Destroyer at the beginning of A New Hope. And, while we’re on the subject of intricate starship maneuvers and space-based choreography… Although the First Contact team could plan visual effects shots with animated storyboards, ILM supplied Gareth Edwards with a next-level virtual viewfinder that allowed the director to select his shots by immersing himself among Rogue One’s ships in real time. “What we wanted to do is give Gareth the opportunity to shoot his space battles and other all-digital scenes the same way he shoots his live-action. Then he could go in with this sort of virtual viewfinder and view the space battle going on, and figure out what the best angle was to shoot those ships from,” senior animation supervisor Hal Hickel described in the Rogue One: Digital Storytelling featurette. 
Hickel divulged that the sequence involving the dish array docking with the Death Star was an example of the “spontaneous discovery of great angles,” as the scene was never storyboarded or previsualized. Visual effects supervisor John Knoll with director Gareth Edwards during production of Rogue One: A Star Wars Story (Credit: ILM & Lucasfilm). Tough Little Ships The Federation and Rebel Alliance each deployed “tough little ships” (an endearing description Commander William T. Riker [Jonathan Frakes] bestowed upon the U.S.S. Defiant in First Contact) in their respective conflicts, namely the U.S.S. Defiant from Deep Space Nine and the Tantive IV from A New Hope. VisionArt had already built a CG Defiant for the Deep Space Nine series, but ILM upgraded the model with images gathered from the ship’s three-foot physical model. A similar tactic was taken to bring the Tantive IV into the digital realm for Rogue One. “This was the Blockade Runner. This was the most accurate 1:1 reproduction we could possibly have made,” model supervisor Russell Paul declared to Cinefex’s Joe Fordham. “We did an extensive photo reference shoot and photogrammetry re-creation of the miniature. From there, we built it out as accurately as possible.” Speaking of sturdy ships, if you look very closely, you can spot a model of the Millennium Falcon flashing across the background as the U.S.S. Defiant makes an attack run on the Borg cube at the Battle of Sector 001! Exploration and Hope The in-universe ramifications that materialize from the Battles of Sector 001 and Scarif are monumental. The destruction of the Borg cube compels the Borg Queen to travel back in time in an attempt to vanquish Earth before the Federation can even be formed, but Captain Picard and the Enterprise-E foil the plot and end up helping their 21st century ancestors make “first contact” with another species, the logic-revering Vulcans. 
The post-Scarif benefits take longer to play out for the Rebel Alliance, but the theft of the Death Star plans eventually leads to the superweapon’s destruction. The Galactic Civil War is far from over, but Scarif is a significant step in the Alliance’s effort to overthrow the Empire. The visual effects ILM provided for First Contact and Rogue One contributed significantly to the critical and commercial acclaim both pictures enjoyed, a victory reflecting the relentless dedication, tireless work ethic, and innovative spirit embodied by visual effects supervisor John Knoll and ILM’s entire staff. While being interviewed for The Making of Star Trek: First Contact, actor Patrick Stewart praised ILM’s invaluable influence, emphasizing, “ILM was with us, on this movie, almost every day on set. There is so much that they are involved in.” And, regardless of your personal preferences – phasers or lasers, photon torpedoes or proton torpedoes, warp speed or hyperspace – perhaps Industrial Light & Magic’s ability to infuse excitement into both franchises demonstrates that Star Trek and Star Wars encompass themes that are not competitive, but compatible. After all, what goes together better than exploration and hope? – Jay Stobie (he/him) is a writer, author, and consultant who has contributed articles to ILM.com, Skysound.com, Star Wars Insider, StarWars.com, Star Trek Explorer, Star Trek Magazine, and StarTrek.com. Jay loves sci-fi, fantasy, and film, and you can learn more about him by visiting JayStobie.com or finding him on Twitter, Instagram, and other social media platforms at @StobiesGalaxy.
  • Decoding The SVG path Element: Line Commands

    In a previous article, we looked at some practical examples of how to code SVG by hand. In that guide, we covered the basics of the SVG elements rect, circle, ellipse, line, polyline, and polygon.
    This time around, we are going to tackle a more advanced topic, the absolute powerhouse of SVG elements: path. Don’t get me wrong; I still stand by my point that image paths are better drawn in vector programs than coded. But when it comes to technical drawings and data visualizations, the path element unlocks a wide array of possibilities and opens up the world of hand-coded SVGs.
    The path syntax can be really complex. We’re going to tackle it in two separate parts. In this first installment, we’re learning all about straight and angular paths. In the second part, we’ll make lines bend, twist, and turn.
    Required Knowledge And Guide Structure
    Note: If you are unfamiliar with the basics of SVG, such as the subject of viewBox and the basic syntax of the simple elements, I recommend reading my guide before diving into this one. You should also familiarize yourself with <text> if you want to understand each line of code in the examples.
    Before we get started, I want to quickly recap how I code SVG using JavaScript. I don’t like dealing with numbers and math, and reading SVG Code with numbers filled into every attribute makes me lose all understanding of it. By giving coordinates names and having all my math easy to parse and write out, I have a much better time with this type of code, and I think you will, too.
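    A minimal sketch of that approach (the names start, end, and d are my own placeholders, not taken from the original demos): give the coordinates names first, then assemble the attribute string from those names.

```javascript
// Hypothetical sketch: name the coordinates, then build the
// path data string from those names instead of raw numbers.
const start = { x: 10, y: 10 };
const end = { x: 100, y: 100 };

// Template literal assembles the d attribute for <path d="…">.
const d = `M${start.x} ${start.y} L${end.x} ${end.y}`;

console.log(d); // "M10 10 L100 100"
```

    In the browser, the resulting string would be applied with setAttribute("d", …) on a path element created in the SVG namespace.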
    The goal of this article is more about understanding path syntax than it is about doing placement or how to leverage loops and other more basic things. So, I will not run you through the entire setup of each example. I’ll instead share snippets of the code, but they may be slightly adjusted from the CodePen or simplified to make this article easier to read. However, if there are specific questions about code that are not part of the text in the CodePen demos, the comment section is open.
    To keep this all framework-agnostic, the code is written in vanilla JavaScript.
    Setting Up For Success
    As the path element relies on our understanding of some of the coordinates we plug into the commands, I think it is a lot easier if we have a bit of visual orientation. So, all of the examples will be coded on top of a visual representation of a traditional viewBox setup with the origin in the top-left corner. The first example line starts at (10, 10), then moves diagonally down to (100, 100). The command is: M10 10 L100 100.
    The blue line is horizontal. It starts at (10, 55) and should end at (100, 55). We could use the L command, but we’d have to write 55 again. So, instead, we write M10 55 H100, and then SVG knows to look back at the y value of M for the y value of H.
    It’s the same thing for the green line, but when we use the V command, SVG knows to refer back to the x value of M for the x value of V.
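    To make that carry-over concrete, here is a small helper of my own (not from the article) that expands H and V commands into full L commands, showing exactly which value SVG looks back at. It only handles space-separated absolute commands, which is enough for these examples.

```javascript
// Expand H and V commands into L commands to reveal the value
// that SVG carries over from the previous point.
function expandPath(d) {
  const tokens = d.trim().split(/\s+/);
  let x = 0, y = 0;
  const out = [];
  for (let i = 0; i < tokens.length; i++) {
    const cmd = tokens[i];
    if (cmd === "M" || cmd === "L") {
      x = Number(tokens[++i]);
      y = Number(tokens[++i]);
      out.push(`${cmd}${x} ${y}`);
    } else if (cmd === "H") {
      x = Number(tokens[++i]); // new x; y carries over from the last point
      out.push(`L${x} ${y}`);
    } else if (cmd === "V") {
      y = Number(tokens[++i]); // new y; x carries over from the last point
      out.push(`L${x} ${y}`);
    }
  }
  return out.join(" ");
}

console.log(expandPath("M 10 55 H 100")); // "M10 55 L100 55"
```

    The expanded form is what we would have had to write with L, repeating the 55 by hand.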
    If we compare the resulting horizontal path with the same implementation in a <line> element, we may

    Notice how much more efficient path can be, and
    Remove quite a bit of meaning for anyone who doesn’t speak path.

    Because, as we look at these strings, one of them is called “line”. And while the rest doesn’t mean anything out of context, the line definitely conjures a specific image in our heads.
    <path d="M 10 55 H 100" />
    <line x1="10" y1="55" x2="100" y2="55" />

    Making Polygons And Polylines With Z
    In the previous section, we learned how path can behave like <line>, which is pretty cool. But it can do more. It can also act like polyline and polygon.
    Remember how those two basically work the same, but polygon connects the first and last point, while polyline does not? The path element can do the same thing. There is a separate command to close the path with a line, which is the Z command.

    const polyline2Points = `M${start.x} ${start.y} L${p1.x} ${p1.y} L${p2.x} ${p2.y}`;
    const polygon2Points = `M${start.x} ${start.y} L${p1.x} ${p1.y} L${p2.x} ${p2.y} Z`;

    So, let’s see this in action and create a repeating triangle shape. Every odd time, it’s open, and every even time, it’s closed. Pretty neat!
    See the Pen Alternating Triangles by Myriam.
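    As a sketch of how such a loop might assemble the alternating shapes (the point values, spacing, and count here are my own placeholders, not the exact ones from the Pen):

```javascript
// Hypothetical sketch of the alternating-triangle idea: every even
// index gets a Z command (closed), every odd index stays open.
function trianglePath(offsetX, closed) {
  const d = `M${offsetX} 0 L${offsetX + 86.6} 50 L${offsetX} 100`;
  return closed ? `${d} Z` : d;
}

const triangles = [];
for (let i = 0; i < 4; i++) {
  // Shift each triangle 100 units to the right of the previous one.
  triangles.push(trianglePath(i * 100, i % 2 === 0));
}

console.log(triangles[0]); // "M0 0 L86.6 50 L0 100 Z"
```

    Each string could then be set as the d attribute of its own path element.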
    When it comes to comparing path versus polygon and polyline, the other tags tell us about their names, but I would argue that fewer people know what a polygon is versus what a line is. The argument to use these two tags over path for legibility is weak, in my opinion, and I guess you’d probably agree that this looks like equal levels of meaningless string given to an SVG element.
    <path d="M0 0 L86.6 50 L0 100 Z" />
    <polygon points="0,0 86.6,50 0,100" />

    <path d="M0 0 L86.6 50 L0 100" />
    <polyline points="0,0 86.6,50 0,100" />

    Relative Commands: m, l, h, v
    All of the line commands exist in absolute and relative versions. The difference is that the relative commands are lowercase, e.g., m, l, h, and v. The relative commands are always relative to the last point, so instead of declaring an x value, you’re declaring a dx value, saying this is how many units you’re moving.
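    As a quick sketch (my own helper, purely for illustration): turning an absolute target point into a relative l command is just a subtraction of the previous point.

```javascript
// Build a relative l command from two absolute points by turning
// the target into a (dx, dy) offset from the previous point.
function relativeLineTo(from, to) {
  return `l ${to.x - from.x} ${to.y - from.y}`;
}

// Absolute form would be "L 30 50"; relative from (10, 30) it becomes:
console.log(relativeLineTo({ x: 10, y: 30 }, { x: 30, y: 50 })); // "l 20 20"
```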
    Before we look at the example visually, I want you to look at the following three-line commands. Try not to look at the CodePen beforehand.
    const lines = [/* three path definition strings, omitted here; see the CodePen */];

    As I mentioned, I hate looking at numbers without meaning, but there is one number whose meaning is pretty constant in most contexts: 0. Seeing a 0 in combination with a command I now know is relative instantly tells me that nothing is happening on that axis. Seeing l 0 20 by itself tells me that this line only moves along one axis instead of two.
    And looking at that entire blue path command, the repeated 20 value gives me a sense that the shape might have some regularity to it. The first path does a bit of that by repeating 10 and 30. But the third? As someone who can’t do math in my head, that third string gives me nothing.
    Now, you might be surprised, but they all draw the same shape, just in different places.
    See the Pen SVG Compound Paths by Myriam.
    So, how valuable is it that we can recognize the regularity in the blue path? Not very, in my opinion. In some cases, going with the relative value is easier than an absolute one. In other cases, the absolute is king. Neither is better nor worse.
    And, in all cases, that previous example would be much more efficient if it were set up with a variable for the gap, a variable for the shape size, and a function to generate the path definition that’s called from within a loop so it can take in the index to properly calculate the start point.
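    A sketch of that refactor under my own assumptions (the variable names gap and size, the square shape, and the counts are placeholders, not the actual CodePen code):

```javascript
// Hypothetical version of the setup described above: a gap variable,
// a size variable, and a generator that derives each start point
// from the loop index.
const gap = 10;
const size = 20;

function shapePath(index) {
  const startX = index * (size + gap);
  // A simple square, drawn with relative commands from the start point.
  return `M${startX} 0 h ${size} v ${size} h ${-size} Z`;
}

const shapes = [];
for (let i = 0; i < 3; i++) {
  shapes.push(shapePath(i));
}

console.log(shapes[1]); // "M30 0 h 20 v 20 h -20 Z"
```

    Changing gap or size now updates every shape at once, which is the efficiency the paragraph above is after.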

    Jumping Points: How To Make Compound Paths
    Another very useful thing is something you don’t see visually in the previous CodePen, but it relates to the grid and its code.
    I snuck in a grid drawing update.
    With the method used in earlier examples, using line to draw the grid, the above CodePen would’ve rendered the grid with 14 separate elements. If you go and inspect the final code of that last CodePen, you’ll notice that there is just a single path element within the .grid group.
    It looks like this, which is not fun to look at but holds the secret to how it’s possible:

    <path d="M0 0 H110 M0 10 H110 M0 20 H110 M0 30 H110 M0 0 V45 M10 0 V45 M20 0 V45 M30 0 V45 M40 0 V45 M50 0 V45 M60 0 V45 M70 0 V45 M80 0 V45 M90 0 V45" stroke="currentColor" stroke-width="0.2" fill="none"></path>

    If we take a close look, we may notice that there are multiple M commands. This is the magic of compound paths.
    Since the M/m commands don’t actually draw and just place the cursor, a path can have jumps.

    So, whenever we have multiple paths that share common styling and don’t need to have separate interactions, we can just chain them together to make our code shorter.
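    A simplified sketch of how such a single grid path could be generated (this helper is my own and assumes a uniform grid, unlike the slightly irregular dimensions in the attribute above):

```javascript
// Generate a grid as one compound path: chained M + H commands for
// the rows, then M + V commands for the columns. Because M only
// moves the cursor, all lines live in a single d attribute.
function gridPath(width, height, cell) {
  const parts = [];
  for (let y = 0; y <= height; y += cell) {
    parts.push(`M0 ${y} H${width}`); // one horizontal line per row
  }
  for (let x = 0; x <= width; x += cell) {
    parts.push(`M${x} 0 V${height}`); // one vertical line per column
  }
  return parts.join(" ");
}

console.log(gridPath(20, 10, 10));
// "M0 0 H20 M0 10 H20 M0 0 V10 M10 0 V10 M20 0 V10"
```

    One path element, one stroke style, and the whole grid comes along for free.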
    Coming Up Next
    Armed with this knowledge, we’re now able to replace line, polyline, and polygon with path commands and combine them in compound paths. But there is so much more to uncover because path doesn’t just offer foreign-language versions of lines but also gives us the option to code circles and ellipses that have open space and can sometimes also bend, twist, and turn. We’ll refer to those as curves and arcs, and discuss them more explicitly in the next article.
    Further Reading On SmashingMag

    “Mastering SVG Arcs,” Akshay Gupta
    “Accessible SVGs: Perfect Patterns For Screen Reader Users,” Carie Fisher
    “Easy SVG Customization And Animation: A Practical Guide,” Adrian Bece
    “Magical SVG Techniques,” Cosima Mielke
Coming Up Next Armed with this knowledge, we’re now able to replace line, polyline, and polygon with path commands and combine them in compound paths. But there is so much more to uncover because path doesn’t just offer foreign-language versions of lines but also gives us the option to code circles and ellipses that have open space and can sometimes also bend, twist, and turn. We’ll refer to those as curves and arcs, and discuss them more explicitly in the next article. Further Reading On SmashingMag “Mastering SVG Arcs,” Akshay Gupta “Accessible SVGs: Perfect Patterns For Screen Reader Users,” Carie Fisher “Easy SVG Customization And Animation: A Practical Guide,” Adrian Bece “Magical SVG Techniques,” Cosima Mielke #decoding #svg #ampltcodeampgtpathampltcodeampgt #element #line
    SMASHINGMAGAZINE.COM
    Decoding The SVG path Element: Line Commands
    In a previous article, we looked at some practical examples of how to code SVG by hand. In that guide, we covered the basics of the SVG elements rect, circle, ellipse, line, polyline, and polygon (and also g). This time around, we are going to tackle a more advanced topic, the absolute powerhouse of SVG elements: path.
    Don’t get me wrong; I still stand by my point that image paths are better drawn in vector programs than coded (unless you’re the type of creative who makes non-logical visual art in code — then go forth and create awe-inspiring wonders; you’re probably not the audience of this article). But when it comes to technical drawings and data visualizations, the path element unlocks a wide array of possibilities and opens up the world of hand-coded SVGs.
    The path syntax can be really complex. We’re going to tackle it in two separate parts. In this first installment, we’re learning all about straight and angular paths. In the second part, we’ll make lines bend, twist, and turn.
    Required Knowledge And Guide Structure
    Note: If you are unfamiliar with the basics of SVG, such as the subject of viewBox and the basic syntax of the simple elements (rect, line, g, and so on), I recommend reading my guide before diving into this one. You should also familiarize yourself with <text> if you want to understand each line of code in the examples.
    Before we get started, I want to quickly recap how I code SVG using JavaScript. I don’t like dealing with numbers and math, and reading SVG code with numbers filled into every attribute makes me lose all understanding of it. By giving coordinates names and having all my math easy to parse and write out, I have a much better time with this type of code, and I think you will, too.
    The goal of this article is more about understanding path syntax than it is about doing placement or how to leverage loops and other more basic things. So, I will not run you through the entire setup of each example.
    I’ll instead share snippets of the code, but they may be slightly adjusted from the CodePen or simplified to make this article easier to read. However, if there are specific questions about code that are not part of the text in the CodePen demos, the comment section is open.
    To keep this all framework-agnostic, the code is written in vanilla JavaScript (though, really, TypeScript is your friend the more complicated your SVG becomes, and I missed it when writing some of these).
    Setting Up For Success
    As the path element relies on our understanding of some of the coordinates we plug into the commands, I think it is a lot easier if we have a bit of visual orientation. So, all of the examples will be coded on top of a visual representation of a traditional viewBox setup with the origin in the top-left corner (so, values in the shape of 0 0 ${width} ${height}). I added text labels as well to make it easier to point you to specific areas within the grid.
    Please note that I recommend being careful when adding text within the <text> element in SVG if you want your text to be accessible. If the graphic relies on text scaling like the rest of your website, it would be better to have it rendered through HTML. But for our examples here, it should be sufficient.
    So, this is what we’ll be plotting on top of: See the Pen SVG Viewbox Grid Visual [forked] by Myriam.
    Alright, we now have a ViewBox Visualizing Grid. I think we’re ready for our first session with the beast.
    Enter path And The All-Powerful d Attribute
    The <path> element has a d attribute, which speaks its own language. So, within d, you’re talking in terms of “commands”. When I think of non-path versus path elements, I like to think that the reason why we have to write much more complex drawing instructions is this: All non-path elements are just dumber paths. In the background, they have one pre-drawn path shape that they will always render based on a few parameters you pass in. But path has no default shape.
    The shape logic has to be exposed to you, while it can be neatly hidden away for all other elements. Let’s learn about those commands.
    Where It All Begins: M
    The first, which is where each path begins, is the M command, which moves the pen to a point. This command places your starting point, but it does not draw a single thing. A path with just an M command is an auto-delete when cleaning up SVG files. It takes two arguments: the x and y coordinates of your start position.
    const uselessPathCommand = `M${start.x} ${start.y}`;
    Basic Line Commands: L, H, V
    These are fun and easy: L, H, and V all draw a line from the current point to the point specified. L takes two arguments, the x and y positions of the point you want to draw to.
    const pathCommandL = `M${start.x} ${start.y} L${end.x} ${end.y}`;
    H and V, on the other hand, only take one argument because they are only drawing a line in one direction. For H, you specify the x position, and for V, you specify the y position. The other value is implied.
    const pathCommandH = `M${start.x} ${start.y} H${end.x}`;
    const pathCommandV = `M${start.x} ${start.y} V${end.y}`;
    To visualize how this works, I created a function that draws the path, as well as points with labels on them, so we can see what happens. See the Pen Simple Lines with path [forked] by Myriam.
    We have three lines in that image. The L command is used for the red path. It starts with M at (10,10), then moves diagonally down to (100,100). The command is: M10 10 L100 100.
    The blue line is horizontal. It starts at (10,55) and should end at (100,55). We could use the L command, but we’d have to write 55 again. So, instead, we write M10 55 H100, and then SVG knows to look back at the y value of M for the y value of H. It’s the same thing for the green line, but when we use the V command, SVG knows to refer back to the x value of M for the x value of V.
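Following the naming-over-numbers approach described above, the three demo lines can be produced by tiny helpers. This is a minimal sketch; the helper names are mine, not from the article:

```javascript
// Minimal helpers in the article's style of naming coordinates.
// linePathL spells out both end coordinates; linePathH and linePathV
// rely on SVG reading the implied coordinate back from the M command.
function linePathL(from, to) {
  return `M${from.x} ${from.y} L${to.x} ${to.y}`;
}
function linePathH(from, xEnd) {
  return `M${from.x} ${from.y} H${xEnd}`; // y of the endpoint is implied by M
}
function linePathV(from, yEnd) {
  return `M${from.x} ${from.y} V${yEnd}`; // x of the endpoint is implied by M
}

// The red and blue demo lines from the example:
const red = linePathL({ x: 10, y: 10 }, { x: 100, y: 100 }); // "M10 10 L100 100"
const blue = linePathH({ x: 10, y: 55 }, 100);               // "M10 55 H100"
```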
    If we compare the resulting horizontal path with the same implementation in a <line> element, we may:
    Notice how much more efficient path can be, and
    Remove quite a bit of meaning for anyone who doesn’t speak path.
    Because, as we look at these strings, one of them is called “line”. And while the rest doesn’t mean anything out of context, the line definitely conjures a specific image in our heads.
    <path d="M 10 55 H 100" />
    <line x1="10" y1="55" x2="100" y2="55" />
    Making Polygons And Polylines With Z
    In the previous section, we learned how path can behave like <line>, which is pretty cool. But it can do more. It can also act like polyline and polygon.
    Remember how those two basically work the same, but polygon connects the first and last point, while polyline does not? The path element can do the same thing. There is a separate command to close the path with a line, which is the Z command.
    const polyline2Points = `M${start.x} ${start.y} L${p1.x} ${p1.y} L${p2.x} ${p2.y}`;
    const polygon2Points = `M${start.x} ${start.y} L${p1.x} ${p1.y} L${p2.x} ${p2.y} Z`;
    So, let’s see this in action and create a repeating triangle shape. Every odd time, it’s open, and every even time, it’s closed. Pretty neat! See the Pen Alternating Triangles [forked] by Myriam.
    When it comes to comparing path versus polygon and polyline, the other tags tell us about their names, but I would argue that fewer people know what a polygon is versus what a line is (and probably even fewer know what a polyline is. Heck, even the program I’m writing this article in tells me polyline is not a valid word). The argument to use these two tags over path for legibility is weak, in my opinion, and I guess you’d probably agree that this looks like equal levels of meaningless string given to an SVG element.
    <path d="M0 0 L86.6 50 L0 100 Z" />
    <polygon points="0,0 86.6,50 0,100" />
    <path d="M0 0 L86.6 50 L0 100" />
    <polyline points="0,0 86.6,50 0,100" />
    Relative Commands: m, l, h, v
    All of the line commands exist in absolute and relative versions. The difference is that the relative commands are lowercase, e.g., m, l, h, and v. The relative commands are always relative to the last point, so instead of declaring an x value, you’re declaring a dx value, saying this is how many units you’re moving.
    Before we look at the example visually, I want you to look at the following three line commands. Try not to look at the CodePen beforehand.
    const lines = [
      { d: `M10 10 L 10 30 L 30 30`, color: "var(--_red)" },
      { d: `M40 10 l 0 20 l 20 0`, color: "var(--_blue)" },
      { d: `M70 10 l 0 20 L 90 30`, color: "var(--_green)" }
    ];
    As I mentioned, I hate looking at numbers without meaning, but there is one number whose meaning is pretty constant in most contexts: 0. Seeing a 0 in combination with a command I just learned means relative manages to instantly tell me that nothing is happening. Seeing l 0 20 by itself tells me that this line only moves along one axis instead of two. And looking at that entire blue path command, the repeated 20 value gives me a sense that the shape might have some regularity to it. The first path does a bit of that by repeating 10 and 30. But the third? As someone who can’t do math in my head, that third string gives me nothing.
    Now, you might be surprised, but they all draw the same shape, just in different places. See the Pen SVG Compound Paths [forked] by Myriam.
    So, how valuable is it that we can recognize the regularity in the blue path? Not very, in my opinion. In some cases, going with the relative value is easier than an absolute one. In other cases, the absolute is king. Neither is better nor worse.
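One way to convince yourself that the three example paths describe the same shape is to expand the relative l commands into absolute L commands and compare. Here is a rough sketch of such a converter (the function name is mine, and it only handles the M/m and L/l pair commands used in this example, not the full path grammar):

```javascript
// Expand relative M/m and L/l commands into absolute ones.
// Only covers two-argument commands; H, V, curves, etc. are out of scope.
function toAbsolute(d) {
  const tokens = d.match(/[MLml]|-?\d+(?:\.\d+)?/g);
  let x = 0;
  let y = 0;
  const out = [];
  for (let i = 0; i < tokens.length; ) {
    const cmd = tokens[i++];
    const a = Number(tokens[i++]);
    const b = Number(tokens[i++]);
    if (cmd === "M" || cmd === "L") {
      x = a; // absolute: coordinates replace the current point
      y = b;
    } else {
      x += a; // relative: coordinates offset the current point
      y += b;
    }
    out.push(`${cmd.toUpperCase()}${x} ${y}`);
  }
  return out.join(" ");
}

toAbsolute("M40 10 l 0 20 l 20 0"); // → "M40 10 L40 30 L60 30"
```

Running all three strings through it makes the shared shape visible at a glance: each result is an L-shaped pair of segments, 20 units down and 20 units right, just starting at different x positions.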
    And, in all cases, that previous example would be much more efficient if it were set up with a variable for the gap, a variable for the shape size, and a function to generate the path definition that’s called from within a loop so it can take in the index to properly calculate the start point.
    Jumping Points: How To Make Compound Paths
    Another very useful thing is something you don’t see visually in the previous CodePen, but it relates to the grid and its code. I snuck in a grid drawing update. With the method used in earlier examples, using line to draw the grid, the above CodePen would’ve rendered the grid with 14 separate elements. If you go and inspect the final code of that last CodePen, you’ll notice that there is just a single path element within the .grid group.
    It looks like this, which is not fun to look at but holds the secret to how it’s possible:
    <path d="M0 0 H110 M0 10 H110 M0 20 H110 M0 30 H110 M0 0 V45 M10 0 V45 M20 0 V45 M30 0 V45 M40 0 V45 M50 0 V45 M60 0 V45 M70 0 V45 M80 0 V45 M90 0 V45" stroke="currentColor" stroke-width="0.2" fill="none"></path>
    If we take a close look, we may notice that there are multiple M commands. This is the magic of compound paths. Since the M/m commands don’t actually draw and just place the cursor, a path can have jumps. So, whenever we have multiple paths that share common styling and don’t need to have separate interactions, we can just chain them together to make our code shorter.
    Coming Up Next
    Armed with this knowledge, we’re now able to replace line, polyline, and polygon with path commands and combine them in compound paths. But there is so much more to uncover because path doesn’t just offer foreign-language versions of lines but also gives us the option to code circles and ellipses that have open space and can sometimes also bend, twist, and turn. We’ll refer to those as curves and arcs, and discuss them more explicitly in the next article.
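Circling back to the compound grid path shown in the compound-paths section: a d string like that is exactly the kind of thing a loop should emit rather than a human type out. A sketch of one way to do it (the function and its parameters are my own, not the article's actual implementation, but with these arguments it reproduces that grid string):

```javascript
// Build one compound path for a grid: each loop iteration appends an
// M jump followed by an H or V line, all joined into a single d string.
function gridPathD({ width, height, step, rows, cols }) {
  const parts = [];
  for (let i = 0; i < rows; i++) parts.push(`M0 ${i * step} H${width}`);   // horizontal lines
  for (let j = 0; j < cols; j++) parts.push(`M${j * step} 0 V${height}`);  // vertical lines
  return parts.join(" ");
}

// 4 horizontal + 10 vertical lines: the 14 former <line> elements, now one path.
const d = gridPathD({ width: 110, height: 45, step: 10, rows: 4, cols: 10 });
```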
    Further Reading On SmashingMag
    “Mastering SVG Arcs,” Akshay Gupta
    “Accessible SVGs: Perfect Patterns For Screen Reader Users,” Carie Fisher
    “Easy SVG Customization And Animation: A Practical Guide,” Adrian Bece
    “Magical SVG Techniques,” Cosima Mielke
  • Turn RTX ON With 40% Off Performance Day Passes

    Level up GeForce NOW experiences this summer with 40% off Performance Day Passes. Enjoy 24 hours of premium cloud gaming with RTX ON, delivering low latency and shorter wait times.
    The hot deal comes just in time for the cloud’s highly anticipated launch of Dune: Awakening — a multiplayer survival game on a massive scale set on the unforgiving sands of Arrakis.
    It’s perfect to pair with the nine games available this week, including the Frosthaven demo announced at Steam Next Fest.
    Try Before You Buy
    One day, all in.
    Level up to the cloud, no commitment required. For a limited time, grab a Performance Day Pass at a price that’s less than an ice cream sundae and experience premium GeForce NOW gaming for 24 hours.
    With RTX ON, enjoy shorter wait times and lower latency for supported games, all powered by the cloud. Dive into popular games with upgraded visuals and smoother gameplay than free users get, whether exploring vast open worlds or battling in fast-paced arenas.
    Take the experience even further by applying the value of the Day Pass toward a six-month Performance membership during the limited-time summer sale. It’s the perfect way to try out premium cloud gaming before jumping into a longer-term membership.
    Survive and Thrive
    Join the fight for Arrakis.
    Dune: Awakening, a multiplayer survival game on a massive scale from Funcom, is set on an ever-changing desert planet called Arrakis. Whether braving colossal sandworms, battling for spice or forging alliances, gamers can experience the spectacle of Arrakis with all the benefits of GeForce NOW.
    Manage hydration, temperature and exposure while contending with deadly sandworms, sandstorms and rival factions. Blend skills-based third-person action combat — featuring ranged and melee weapons, gadgets and abilities — with deep crafting, base building and resource management. Explore and engage in large-scale player vs. player and player vs. environment battles while vying for control over territory and the precious spice.
    The spice is flowing — and so is the power of the cloud. Stream it on GeForce NOW without waiting for lengthy downloads or worrying about hardware requirements. Dune: Awakening is available for members to stream from anywhere with the power of NVIDIA RTX for ultra-smooth gameplay and stunning visuals, even on low-powered devices.
    Chill Out
    Time to bundle up.
    Experience the highly anticipated Frosthaven demo in the cloud during Steam Next Fest with GeForce NOW. For a limited time, dive into a preview of the game directly from the cloud — no high-end PC required.
    Frosthaven — a dark fantasy tactical role-playing game from Snapshot Games and X-COM creator Julian Gollop — brings to life the board game of the same name. It features deep, turn-based combat, unique character classes, and single-player and online co-op modes.
    Play the Frosthaven demo on virtually any device with GeForce NOW and experience the magic of gathering around a board game — now in the cloud. Enter the frozen north of Frosthaven, strategize with friends and dive into epic battles without the hassle of setup or cleanup. With GeForce NOW, game night is just a click away, wherever members are playing from.
    Seize New Games
    A new era of “Rainbow Six Siege” has begun.
    Rainbow Six Siege X, the biggest evolution in the game’s history, is now available with free access for new players. It introduces a new 6v6 “Dual Front” game mode, where teams attack and defend simultaneously with respawns and new strategic objectives. R6 Siege X also brings new and improved gameplay features — such as modernized maps with enhanced visuals and lighting, new destructible environmental elements, advanced rappel, smoother movement, an audio overhaul and a communication wheel for precise strategic plays, as well as weapon inspections to showcase gamers’ favorite cosmetics.
    Look for the following games available to stream in the cloud this week:

    Frosthaven Demo (New release on Steam, June 9)
    Dune: Awakening (New release on Steam, June 10)
    MindsEye (New release on Steam, June 10)
    Kingdom Two Crowns (New release on Xbox, available on PC Game Pass, June 11)
    The Alters (New release on Steam and Xbox, available on PC Game Pass, June 13)
    Lost in Random: The Eternal Die (New release on Steam and Xbox, June 13, available on PC Game Pass, June 17)
    Firefighting Simulator – The Squad (Xbox, available on PC Game Pass)
    JDM: Japanese Drift Master (Steam)
    Hellslave (Steam)
    What are you planning to play this weekend? Let us know on X or in the comments below.
    BLOGS.NVIDIA.COM