• Hitman: IO Interactive Has Big Plans For World of Assassination

    While IO Interactive may be heavily focused on its inaugural James Bond game, 2026’s 007 First Light, it’s still providing ambitious new levels and updates for Hitman: World of Assassination and its new science fiction action game MindsEye. To continue to build hype for First Light and IOI’s growing partnership with the James Bond brand, the latest World of Assassination level is a Bond crossover, as Hitman protagonist Agent 47 targets Le Chiffre, the main villain of the 2006 movie Casino Royale. Available through July 6, 2025, the Le Chiffre event in World of Assassination features actor Mads Mikkelsen reprising his fan-favorite Bond villain role, not only providing his likeness but voicing the character as he confronts the contract killer in France.
    Den of Geek attended the first-ever in-person IO Interactive Showcase, a partner event with Summer Game Fest held at The Roosevelt Hotel in Hollywood. Mikkelsen and the developers shared insight on the surprise new World of Assassination level, which was playable in its entirety for attendees on the Nintendo Switch 2 and PlayStation Portal. The developers also included an extended gameplay preview for MindsEye ahead of its June 10 launch, while sharing some details about the techno-thriller.

    Matching his background from Casino Royale, Le Chiffre is a terrorist financier who manipulates the stock market by any means necessary to benefit himself and his clients. After an investment deal goes wrong, Le Chiffre tries to recoup a brutal client’s losses through a high-stakes poker game in France, with Agent 47 hired to assassinate the criminal mastermind on behalf of an unidentified backer. The level opens with 47 infiltrating a high society gala linked to the poker game, with the contract killer entering under his oft-used assumed name of Tobias Rieper, a facade that Le Chiffre immediately sees through.
    At the IO Interactive Showcase panel, Mikkelsen observed that Le Chiffre is a character he has always enjoyed, one that holds a special place in his career. Reprising the villainous role also reunited Mikkelsen with longtime Agent 47 voice actor David Bateson for the first time since their ’90s short film Tom Merritt, though the two actors recorded their respective lines separately. Mikkelsen enjoyed that Le Chiffre’s appearance in World of Assassination gave him a more physical role than Casino Royale did, which largely kept him at a poker table.

    Of course, like most Hitman levels, there are multiple ways for players to accomplish their main objective of killing Le Chiffre and escaping the premises. The game certainly gives players avenues to confront the evil financier over a game of poker before closing in for the kill, but that’s by no means the only way to successfully assassinate him. We won’t give away how we ultimately pulled off the assassination, but rest assured that it took multiple tries, careful plotting, and all the usual trial-and-error that comes with playing one of Hitman’s more difficult and involved levels.
    Moving away from its more grounded action titles, IO Interactive also provided a deeper look at MindsEye, a new sci-fi game developed by Build a Rocket Boy. Set in the fictional Redrock City, the extended gameplay sneak peek at the showcase featured protagonist Adam Diaz fighting shadowy enemies in the futuristic city’s largely abandoned streets. While MindsEye itself was not playable at the showcase, the preview demonstrated Diaz using his abilities and equipment, including an accompanying drone, to navigate the city from a third-person perspective and use an array of weapons to dispatch those trying to hunt him down.
    MindsEye marks the first game published through IOI Partners, an initiative through which IOI publishes titles from smaller, external developers. In hindsight, the lack of a hands-on demo at the showcase is not particularly surprising given the game’s bug-heavy, poorly received launch. Build a Rocket Boy has since pledged to support the game through June to fix its technical issues, but the absence of hands-on access at the IOI Showcase was already a red flag for the game’s performance. With that in mind, most of the buzz at the showcase was unsurprisingly centered on 007 First Light and updates to Hitman: World of Assassination, and IO Interactive did not disappoint in that regard.
    Even with Hitman: World of Assassination now over four years old, the game continues to receive impressive post-release support from IO Interactive, both in bringing the title to the Nintendo Switch 2 and through additional DLC. At the showcase, IOI hinted at further special levels for World of Assassination with high-profile guest targets like Le Chiffre, without identifying who they might be or whether they’re also tied to the James Bond franchise. But with 007 First Light slated for its eagerly anticipated launch next year, it’s a safe bet that IOI has further plans to hype its own role in building out the James Bond legacy for the foreseeable future.
    The Hitman: World of Assassination special Le Chiffre level is available now through July 6, 2025 on all the game’s major platforms, including the Nintendo Switch 2.
    MindsEye is now on sale for PlayStation 5, Xbox Series X|S, and PC.
  • EPFL Researchers Unveil FG2 at CVPR: A New AI Model That Slashes Localization Errors by 28% for Autonomous Vehicles in GPS-Denied Environments

    Navigating the dense urban canyons of cities like San Francisco or New York can be a nightmare for GPS systems. The towering skyscrapers block and reflect satellite signals, leading to location errors of tens of meters. For you and me, that might mean a missed turn. But for an autonomous vehicle or a delivery robot, that level of imprecision is the difference between a successful mission and a costly failure. These machines require pinpoint accuracy to operate safely and efficiently. Addressing this critical challenge, researchers from the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland have introduced a groundbreaking new method for visual localization at CVPR 2025.
    Their new paper, “FG2: Fine-Grained Cross-View Localization by Fine-Grained Feature Matching,” presents a novel AI model that significantly enhances the ability of a ground-level system, like an autonomous car, to determine its exact position and orientation using only a camera and a corresponding aerial (or satellite) image. The new approach demonstrates a remarkable 28% reduction in mean localization error compared to the previous state-of-the-art on a challenging public dataset.
    Key Takeaways:

    Superior Accuracy: The FG2 model reduces the average localization error by a significant 28% on the VIGOR cross-area test set, a challenging benchmark for this task.
    Human-like Intuition: Instead of relying on abstract descriptors, the model mimics human reasoning by matching fine-grained, semantically consistent features—like curbs, crosswalks, and buildings—between a ground-level photo and an aerial map.
    Enhanced Interpretability: The method allows researchers to “see” what the AI is “thinking” by visualizing exactly which features in the ground and aerial images are being matched, a major step forward from previous “black box” models.
    Weakly Supervised Learning: Remarkably, the model learns these complex and consistent feature matches without any direct labels for correspondences. It achieves this using only the final camera pose as a supervisory signal.
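
    To make the weak-supervision idea concrete, here is a minimal PyTorch-style sketch of a pose-only objective: the network is penalized solely on its final translation and heading error, and any feature-matching behavior must emerge from that signal alone. The function name, the beta weighting, and the exact error terms are illustrative assumptions, not the paper’s actual loss.

```python
import torch

def pose_supervision_loss(pred_xy, pred_yaw, gt_xy, gt_yaw, beta=1.0):
    """Hypothetical pose-only objective (illustrative, not the paper's loss).

    No correspondence labels are used: the gradient of this final pose
    error flows back through the differentiable matching and alignment
    steps into the feature extractors.
    """
    # Euclidean error on the planar (x, y) position.
    trans_err = torch.linalg.norm(pred_xy - gt_xy, dim=-1)
    # Wrap the heading difference into (-pi, pi] before taking its magnitude.
    dyaw = torch.atan2(torch.sin(pred_yaw - gt_yaw), torch.cos(pred_yaw - gt_yaw))
    return (trans_err + beta * dyaw.abs()).mean()
```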

    Challenge: Seeing the World from Two Different Angles
    The core problem of cross-view localization is the dramatic difference in perspective between a street-level camera and an overhead satellite view. A building facade seen from the ground looks completely different from its rooftop signature in an aerial image. Existing methods have struggled with this. Some create a general “descriptor” for the entire scene, but this is an abstract approach that doesn’t mirror how humans naturally localize themselves by spotting specific landmarks. Other methods transform the ground image into a Bird’s-Eye-View (BEV) but are often limited to the ground plane, ignoring crucial vertical structures like buildings.

    FG2: Matching Fine-Grained Features
    The EPFL team’s FG2 method introduces a more intuitive and effective process. It aligns two sets of points: one generated from the ground-level image and another sampled from the aerial map.

    Here’s a breakdown of their innovative pipeline:

    Mapping to 3D: The process begins by taking the features from the ground-level image and lifting them into a 3D point cloud centered around the camera. This creates a 3D representation of the immediate environment.
    Smart Pooling to BEV: This is where the magic happens. Instead of simply flattening the 3D data, the model learns to intelligently select the most important features along the vertical (height) dimension for each point. It essentially asks, “For this spot on the map, is the ground-level road marking more important, or is the edge of that building’s roof the better landmark?” This selection process is crucial, as it allows the model to correctly associate features like building facades with their corresponding rooftops in the aerial view.
    Feature Matching and Pose Estimation: Once both the ground and aerial views are represented as 2D point planes with rich feature descriptors, the model computes the similarity between them. It then samples a sparse set of the most confident matches and uses a classic geometric algorithm called Procrustes alignment to calculate the precise 3-DoF (x, y, and yaw) pose.
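
    As a rough illustration of the last two steps, here is a minimal NumPy sketch of the classical machinery the article names: cosine-similarity matching between point descriptors, followed by a weighted 2D Procrustes (Kabsch) solve for the 3-DoF pose. All names, array shapes, and the top-k heuristic are assumptions for illustration; FG2’s actual matching is learned end to end.

```python
import numpy as np

def match_descriptors(ground_desc, aerial_desc, top_k=64):
    """Pair each ground BEV point with its most similar aerial point and
    keep the top_k most confident pairs (cosine similarity). Hypothetical
    helper, not from the FG2 codebase."""
    g = ground_desc / np.linalg.norm(ground_desc, axis=1, keepdims=True)
    a = aerial_desc / np.linalg.norm(aerial_desc, axis=1, keepdims=True)
    sim = g @ a.T                          # (N_ground, N_aerial) similarities
    best = sim.argmax(axis=1)              # best aerial index per ground point
    conf = sim[np.arange(len(g)), best]    # confidence of each pairing
    keep = np.argsort(-conf)[:top_k]       # sparse, most confident subset
    return keep, best[keep], conf[keep]

def procrustes_pose_2d(ground_pts, aerial_pts, weights):
    """Weighted 2D Procrustes/Kabsch: the rigid (x, y, yaw) transform that
    best maps matched ground points onto their aerial counterparts."""
    w = np.clip(weights, 0.0, None)                # weights must be non-negative
    w = w / w.sum()
    mu_g = (w[:, None] * ground_pts).sum(axis=0)   # weighted centroids
    mu_a = (w[:, None] * aerial_pts).sum(axis=0)
    G, A = ground_pts - mu_g, aerial_pts - mu_a
    H = (w[:, None] * G).T @ A                     # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T             # optimal rotation
    t = mu_a - R @ mu_g                            # translation (x, y)
    yaw = np.arctan2(R[1, 0], R[0, 0])
    return t, yaw

# Toy usage with random stand-in data:
rng = np.random.default_rng(0)
ground_pts, ground_desc = rng.normal(size=(200, 2)), rng.normal(size=(200, 32))
aerial_pts, aerial_desc = rng.normal(size=(500, 2)), rng.normal(size=(500, 32))
gi, ai, conf = match_descriptors(ground_desc, aerial_desc)
t, yaw = procrustes_pose_2d(ground_pts[gi], aerial_pts[ai], conf)
```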

    Unprecedented Performance and Interpretability
    The results speak for themselves. On the challenging VIGOR dataset, which includes images from different cities in its cross-area test, FG2 reduced the mean localization error by 28% compared to the previous best method. It also demonstrated superior generalization capabilities on the KITTI dataset, a staple in autonomous driving research.

    Perhaps more importantly, the FG2 model offers a new level of transparency. By visualizing the matched points, the researchers showed that the model learns semantically consistent correspondences without being explicitly told to. For example, the system correctly matches zebra crossings, road markings, and even building facades in the ground view to their corresponding locations on the aerial map. This interpretability is extremely valuable for building trust in safety-critical autonomous systems.
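
    The paper’s figures show these correspondences directly; as a generic sketch of the visualization technique (not the authors’ plotting code), the snippet below draws confidence-colored lines between matched pixel locations in a ground image and an aerial image placed side by side. Image formats and argument names are assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_matches(ground_img, aerial_img, ground_uv, aerial_uv, conf):
    """Show ground and aerial images side by side and connect matched
    (u, v) pixel pairs; line color encodes matching confidence in [0, 1].
    Assumes uint8 RGB arrays of shape (H, W, 3)."""
    h = max(ground_img.shape[0], aerial_img.shape[0])
    w0 = ground_img.shape[1]
    canvas = np.zeros((h, w0 + aerial_img.shape[1], 3), dtype=np.uint8)
    canvas[:ground_img.shape[0], :w0] = ground_img
    canvas[:aerial_img.shape[0], w0:] = aerial_img

    plt.figure(figsize=(12, 6))
    plt.imshow(canvas)
    for (u0, v0), (u1, v1), c in zip(ground_uv, aerial_uv, conf):
        # Aerial coordinates are shifted right by the ground image's width.
        plt.plot([u0, u1 + w0], [v0, v1], color=plt.cm.viridis(float(c)), lw=0.8)
    plt.axis("off")
    plt.show()
```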
    “A Clearer Path” for Autonomous Navigation
    The FG2 method represents a significant leap forward in fine-grained visual localization. By developing a model that intelligently selects and matches features in a way that mirrors human intuition, the EPFL researchers have not only shattered previous accuracy records but also made the decision-making process of the AI more interpretable. This work paves the way for more robust and reliable navigation systems for autonomous vehicles, drones, and robots, bringing us one step closer to a future where machines can confidently navigate our world, even when GPS fails them.

    Check out the Paper. All credit for this research goes to the researchers of this project.
  • Trump’s military parade is a warning

    Donald Trump’s military parade in Washington this weekend — a show of force in the capital that just happens to take place on the president’s birthday — smacks of authoritarian Dear Leader-style politics (even though Trump actually got the idea after attending the 2017 Bastille Day parade in Paris). Yet as disconcerting as the imagery of tanks rolling down Constitution Avenue will be, it’s not even close to Trump’s most insidious assault on the US military’s historic and democratically essential nonpartisan ethos. In fact, it’s not even the most worrying thing he’s done this week.

    On Tuesday, the president gave a speech at Fort Bragg, an Army base home to Special Operations Command. While presidential speeches to soldiers are not uncommon — rows of uniformed troops make a great backdrop for a foreign policy speech — they generally avoid overt partisan attacks and campaign-style rhetoric. The soldiers, for their part, are expected to be studiously neutral, laughing at jokes and such, but remaining fully impassive during any policy conversation.

    That’s not what happened at Fort Bragg. Trump’s speech was a partisan tirade that targeted “radical left” opponents ranging from Joe Biden to Los Angeles Mayor Karen Bass. He celebrated his deployment of Marines to Los Angeles, proposed jailing people for burning the American flag, and called on soldiers to be “aggressive” toward the protesters they encountered.

    The soldiers, for their part, cheered Trump and booed his enemies — as they were seemingly expected to. Reporters at Military.com, a military news service, uncovered internal communications from 82nd Airborne leadership suggesting that the crowd was screened for their political opinions. “If soldiers have political views that are in opposition to the current administration and they don’t want to be in the audience then they need to speak with their leadership and get swapped out,” one note read.

    To call this unusual is an understatement. I spoke with four different experts on civil-military relations, two of whom teach at the Naval War College, about the speech and its implications. To a person, they said it was a step towards politicizing the military with no real precedent in modern American history. “That is, I think, a really big red flag because it means the military’s professional ethic is breaking down internally,” says Risa Brooks, a professor at Marquette University. “Its capacity to maintain that firewall against civilian politicization may be faltering.”

    This may sound alarmist — like an overreading of a one-off incident — but it’s part of a bigger pattern. The totality of Trump administration policies, ranging from the parade in Washington to the LA troop deployment to Secretary of Defense Pete Hegseth’s firing of high-ranking women and officers of color, suggests a concerted effort to erode the military’s professional ethos and turn it into an institution subservient to the Trump administration’s whims. This is a signal policy aim of would-be dictators, who wish to head off the risk of a coup and ensure the armed forces’ political reliability if they are needed to repress dissent in a crisis.

    Steve Saideman, a professor at Carleton University, put together a list of eight different signs that a military is being politicized in this fashion. The Trump administration has exhibited six out of the eight. “The biggest theme is that we are seeing a number of checks on the executive fail at the same time — and that’s what’s making individual events seem more alarming than they might otherwise,” says Jessica Blankshain, a professor at the Naval War College.

    That Trump is trying to politicize the military does not mean he has succeeded. There are several signs, including Trump’s handpicked chair of the Joint Chiefs repudiating the president’s claims of a migrant invasion during congressional testimony, that the US military is resisting Trump’s politicization. But the events in Fort Bragg and Washington suggest that we are in the midst of a quiet crisis in civil-military relations in the United States — one whose implications for American democracy’s future could well be profound.

    The Trump crisis in civil-military relations, explained
    A military is, by sheer fact of its existence, a threat to any civilian government. If you have an institution that controls the overwhelming bulk of weaponry in a society, it always has the physical capacity to seize control of the government at gunpoint. A key question for any government is how to convince the armed forces that they cannot or should not take power for themselves.

    Democracies typically do this through a process called “professionalization.” Soldiers are rigorously taught to think of themselves as a class of public servants, people trained to perform a specific job within defined parameters. Their ultimate loyalty is not to their generals or even individual presidents, but rather to the people and the constitutional order.

    Samuel Huntington, the late Harvard political scientist, is the canonical theorist of a professional military. In his book The Soldier and the State, he described optimal professionalization as a system of “objective control”: one in which the military retains autonomy in how they fight and plan for wars while deferring to politicians on whether and why to fight in the first place. In effect, they stay out of the politicians’ affairs while the politicians stay out of theirs.

    The idea of such a system is to emphasize to the military that they are professionals: Their responsibility isn’t deciding when to use force, but only to conduct operations as effectively as possible once ordered to engage in them. There is thus a strict firewall between military affairs, on the one hand, and policy-political affairs on the other.

    Typically, the chief worry is that the military breaches this bargain: that, for example, a general starts speaking out against elected officials’ policies in ways that undermine civilian control. This is not a hypothetical fear in the United States, with the most famous such example being Gen. Douglas MacArthur’s insubordination during the Korean War. Thankfully, not even MacArthur attempted the worst-case version of military overstep — a coup.

    But in backsliding democracies like the modern United States, where the chief executive is attempting an anti-democratic power grab, the military poses a very different kind of threat to democracy — in fact, something akin to the exact opposite of the typical scenario. In such cases, the issue isn’t the military inserting itself into politics but rather the civilians dragging them into it in ways that upset the democratic political order. The worst-case scenario is that the military acts on presidential directives to use force against domestic dissenters, destroying democracy not by ignoring civilian orders, but by following them.

    There are two ways to arrive at such a worst-case scenario, both of which are in evidence in the early days of Trump 2.0. First is politicization: an intentional attack on the constraints against partisan activity inside the professional ranks. Many of Pete Hegseth’s major moves as secretary of defense fit this bill, including his decisions to fire nonwhite and female generals seen as politically unreliable and his effort to undermine the independence of the military’s lawyers. The breaches in protocol at Fort Bragg are both consequences and causes of politicization: They could only happen in an environment of loosened constraint, and they might encourage more overt political action if they go unpunished.

    The second pathway to breakdown is the weaponization of professionalism against itself. Here, Trump exploits the military’s deference to politicians by ordering it to engage in undemocratic activities. In practice, this looks a lot like the LA deployments, and, more specifically, the lack of any visible military pushback. While the military readily agreeing to deployments is normally a good sign — that civilian control is holding — these aren’t normal times. And this isn’t a normal deployment, but rather one that comes uncomfortably close to the military being ordered to assist in repressing overwhelmingly peaceful demonstrations against executive abuses of power.

    “It’s really been pretty uncommon to use the military for law enforcement,” says David Burbach, another Naval War College professor. “This is really bringing the military into frontline law enforcement when … these are really not huge disturbances.”

    This, then, is the crisis: an incremental and slow-rolling effort by the Trump administration to erode the norms and procedures designed to prevent the military from being used as a tool of domestic repression.

    Is it time to panic?
    Among the experts I spoke with, there was consensus that the military’s professional and nonpartisan ethos was weakening. This isn’t just because of Trump, but his terms — the first to a degree, and now the second acutely — are major stressors. Yet there was no consensus on just how much military nonpartisanship has eroded — that is, how close we are to a moment when the US military might be willing to follow obviously authoritarian orders.

    For all its faults, the US military’s professional ethos is a really important part of its identity and self-conception. While few soldiers may actually read Sam Huntington or similar scholars, the general idea that they serve the people and the republic is a bedrock principle among the ranks. There is a reason why the United States has never, in nearly 250 years of governance, experienced a military coup — or even come particularly close to one.

    In theory, this ethos should also galvanize resistance to Trump’s efforts at politicization. Soldiers are not unthinking automatons: While they are trained to follow commands, they are explicitly obligated to refuse illegal orders, even ones coming from the president. The more aggressive Trump’s efforts to use the military as a tool of repression get, the more likely resistance becomes. Or, at least, theoretically.

    The truth is that we don’t really know how the US military will respond to a situation like this. Like so many of Trump’s second-term policies, these efforts to bend the military to the president’s will are unprecedented — actions with no real parallel in the modern history of the American military. Experts can only make informed guesses, based on their sense of US military culture as well as comparisons to historical and foreign cases.

    For this reason, there are probably only two things we can say with confidence. First, what we’ve seen so far is not yet sufficient evidence to declare that the military is in Trump’s thrall. The signs of decay are too limited to ground any conclusion that the longstanding professional norm is entirely gone. “We have seen a few things that are potentially alarming about erosion of the military’s non-partisan norm. But not in a way that’s definitive at this point,” Blankshain says.

    Second, the stressors on this tradition are going to keep piling up. Trump’s record makes it exceptionally clear that he wants the military to serve him personally — and that he, and Hegseth, will keep working to make it so. This means we really are in the midst of a quiet crisis, and will likely remain so for the foreseeable future.

    “The fact that he’s getting the troops to cheer for booing Democratic leaders at a time when there’s actually a blue city and a blue state … he is ordering the troops to take a side,” Saideman says. “There may not be a coherent plan behind this. But there are a lot of things going on that are all in the same direction.”
    #trumpampamp8217s #military #parade #warning
    Trump’s military parade is a warning
    Donald Trump’s military parade in Washington this weekend — a show of force in the capital that just happens to take place on the president’s birthday — smacks of authoritarian Dear Leader-style politics.Yet as disconcerting as the imagery of tanks rolling down Constitution Avenue will be, it’s not even close to Trump’s most insidious assault on the US military’s historic and democratically essential nonpartisan ethos.In fact, it’s not even the most worrying thing he’s done this week.On Tuesday, the president gave a speech at Fort Bragg, an Army base home to Special Operations Command. While presidential speeches to soldiers are not uncommon — rows of uniformed troops make a great backdrop for a foreign policy speech — they generally avoid overt partisan attacks and campaign-style rhetoric. The soldiers, for their part, are expected to be studiously neutral, laughing at jokes and such, but remaining fully impassive during any policy conversation.That’s not what happened at Fort Bragg. Trump’s speech was a partisan tirade that targeted “radical left” opponents ranging from Joe Biden to Los Angeles Mayor Karen Bass. He celebrated his deployment of Marines to Los Angeles, proposed jailing people for burning the American flag, and called on soldiers to be “aggressive” toward the protesters they encountered.The soldiers, for their part, cheered Trump and booed his enemies — as they were seemingly expected to. Reporters at Military.com, a military news service, uncovered internal communications from 82nd Airborne leadership suggesting that the crowd was screened for their political opinions.“If soldiers have political views that are in opposition to the current administration and they don’t want to be in the audience then they need to speak with their leadership and get swapped out,” one note read.To call this unusual is an understatement. I spoke with four different experts on civil-military relations, two of whom teach at the Naval War College, about the speech and its implications. To a person, they said it was a step towards politicizing the military with no real precedent in modern American history.“That is, I think, a really big red flag because it means the military’s professional ethic is breaking down internally,” says Risa Brooks, a professor at Marquette University. “Its capacity to maintain that firewall against civilian politicization may be faltering.”This may sound alarmist — like an overreading of a one-off incident — but it’s part of a bigger pattern. The totality of Trump administration policies, ranging from the parade in Washington to the LA troop deployment to Secretary of Defense Pete Hegseth’s firing of high-ranking women and officers of color, suggests a concerted effort to erode the military’s professional ethos and turn it into an institution subservient to the Trump administration’s whims. This is a signal policy aim of would-be dictators, who wish to head off the risk of a coup and ensure the armed forces’ political reliability if they are needed to repress dissent in a crisis.Steve Saideman, a professor at Carleton University, put together a list of eight different signs that a military is being politicized in this fashion. 
    The Trump administration has exhibited six out of the eight.
    “The biggest theme is that we are seeing a number of checks on the executive fail at the same time — and that’s what’s making individual events seem more alarming than they might otherwise,” says Jessica Blankshain, a professor at the Naval War College (speaking not for the military but in a personal capacity).
    That Trump is trying to politicize the military does not mean he has succeeded. There are several signs, including Trump’s handpicked chair of the Joint Chiefs repudiating the president’s claims of a migrant invasion during congressional testimony, that the US military is resisting Trump’s politicization.
    But the events at Fort Bragg and in Washington suggest that we are in the midst of a quiet crisis in civil-military relations in the United States — one whose implications for American democracy’s future could well be profound.
    The Trump crisis in civil-military relations, explained
    A military is, by sheer fact of its existence, a threat to any civilian government. If you have an institution that controls the overwhelming bulk of weaponry in a society, it always has the physical capacity to seize control of the government at gunpoint. A key question for any government is how to convince the armed forces that they cannot or should not take power for themselves.
    Democracies typically do this through a process called “professionalization.” Soldiers are rigorously taught to think of themselves as a class of public servants, people trained to perform a specific job within defined parameters. Their ultimate loyalty is not to their generals or even individual presidents, but rather to the people and the constitutional order.
    Samuel Huntington, the late Harvard political scientist, is the canonical theorist of a professional military. In his book The Soldier and the State, he described optimal professionalization as a system of “objective control”: one in which the military retains autonomy in how it fights and plans for wars while deferring to politicians on whether and why to fight in the first place. In effect, the military stays out of the politicians’ affairs while the politicians stay out of theirs.
    The idea of such a system is to emphasize to the military that they are professionals: Their responsibility isn’t deciding when to use force, but only to conduct operations as effectively as possible once ordered to engage in them. There is thus a strict firewall between military affairs, on the one hand, and policy-political affairs on the other.
    Typically, the chief worry is that the military breaches this bargain: that, for example, a general starts speaking out against elected officials’ policies in ways that undermine civilian control. This is not a hypothetical fear in the United States, with the most famous such example being Gen. Douglas MacArthur’s insubordination during the Korean War. Thankfully, not even MacArthur attempted the worst-case version of military overstep — a coup.
    But in backsliding democracies like the modern United States, where the chief executive is attempting an anti-democratic power grab, the military poses a very different kind of threat to democracy — in fact, something akin to the exact opposite of the typical scenario.
    In such cases, the issue isn’t the military inserting itself into politics but rather the civilians dragging it into politics in ways that upset the democratic political order.
    The worst-case scenario is that the military acts on presidential directives to use force against domestic dissenters, destroying democracy not by ignoring civilian orders, but by following them.
    There are two ways to arrive at such a worst-case scenario, both of which are in evidence in the early days of Trump 2.0.
    First is politicization: an intentional attack on the constraints against partisan activity inside the professional ranks.
    Many of Pete Hegseth’s major moves as secretary of defense fit this bill, including his decisions to fire nonwhite and female generals seen as politically unreliable and his effort to undermine the independence of the military’s lawyers. The breaches in protocol at Fort Bragg are both consequences and causes of politicization: They could only happen in an environment of loosened constraint, and they might encourage more overt political action if they go unpunished.
    The second pathway to breakdown is the weaponization of professionalism against itself. Here, Trump exploits the military’s deference to politicians by ordering it to engage in undemocratic (and even questionably legal) activities. In practice, this looks a lot like the LA deployments, and, more specifically, the lack of any visible military pushback. While the military readily agreeing to deployments is normally a good sign — that civilian control is holding — these aren’t normal times. And this isn’t a normal deployment, but rather one that comes uncomfortably close to the military being ordered to assist in repressing overwhelmingly peaceful demonstrations against executive abuses of power.
    “It’s really been pretty uncommon to use the military for law enforcement,” says David Burbach, another Naval War College professor (also speaking personally). “This is really bringing the military into frontline law enforcement when … these are really not huge disturbances.”
    This, then, is the crisis: an incremental and slow-rolling effort by the Trump administration to erode the norms and procedures designed to prevent the military from being used as a tool of domestic repression.
    Is it time to panic?
    Among the experts I spoke with, there was consensus that the military’s professional and nonpartisan ethos was weakening. This isn’t just because of Trump, but his terms — the first to a degree, and now the second acutely — are major stressors.
    Yet there was no consensus on just how much military nonpartisanship has eroded — that is, how close we are to a moment when the US military might be willing to follow obviously authoritarian orders.
    For all its faults, the US military’s professional ethos is a really important part of its identity and self-conception. While few soldiers may actually read Sam Huntington or similar scholars, the general idea that they serve the people and the republic is a bedrock principle among the ranks. There is a reason why the United States has never, in nearly 250 years of governance, experienced a military coup — or even come particularly close to one.
    In theory, this ethos should also galvanize resistance to Trump’s efforts at politicization. Soldiers are not unthinking automatons: While they are trained to follow commands, they are explicitly obligated to refuse illegal orders, even ones coming from the president. The more aggressive Trump’s efforts to use the military as a tool of repression get, the more likely there is to be resistance.
    Or, at least, theoretically.
    The truth is that we don’t really know how the US military will respond to a situation like this. Like so many of Trump’s second-term policies, the administration’s efforts to bend the military to its will are unprecedented — actions with no real parallel in the modern history of the American military. Experts can only make informed guesses, based on their sense of US military culture as well as comparisons to historical and foreign cases.
    For this reason, there are probably only two things we can say with confidence.
    First, what we’ve seen so far is not yet sufficient evidence to declare that the military is in Trump’s thrall. The signs of decay are too limited to ground any conclusion that the longstanding professional norm is entirely gone.
    “We have seen a few things that are potentially alarming about erosion of the military’s non-partisan norm. But not in a way that’s definitive at this point,” Blankshain says.
    Second, the stressors on this tradition are going to keep piling on. Trump’s record makes it exceptionally clear that he wants the military to serve him personally — and that he, and Hegseth, will keep working to make it so. This means we really are in the midst of a quiet crisis, and will likely remain so for the foreseeable future.
    “The fact that he’s getting the troops to cheer for booing Democratic leaders at a time when there’s actually [a deployment to] a blue city and a blue state … he is ordering the troops to take a side,” Saideman says. “There may not be a coherent plan behind this. But there are a lot of things going on that are all in the same direction.”
    WWW.VOX.COM
    Trump’s military parade is a warning
  • Decoding The SVG path Element: Line Commands

    In a previous article, we looked at some practical examples of how to code SVG by hand. In that guide, we covered the basics of the SVG elements rect, circle, ellipse, line, polyline, and polygon.
    This time around, we are going to tackle a more advanced topic, the absolute powerhouse of SVG elements: path. Don’t get me wrong; I still stand by my point that image paths are better drawn in vector programs than coded. But when it comes to technical drawings and data visualizations, the path element unlocks a wide array of possibilities and opens up the world of hand-coded SVGs.
    The path syntax can be really complex. We’re going to tackle it in two separate parts. In this first installment, we’re learning all about straight and angular paths. In the second part, we’ll make lines bend, twist, and turn.
    Required Knowledge And Guide Structure
    Note: If you are unfamiliar with the basics of SVG, such as the subject of viewBox and the basic syntax of the simple elements (rect, line, g, and so on), I recommend reading my guide before diving into this one. You should also familiarize yourself with <text> if you want to understand each line of code in the examples.
    Before we get started, I want to quickly recap how I code SVG using JavaScript. I don’t like dealing with numbers and math, and reading SVG Code with numbers filled into every attribute makes me lose all understanding of it. By giving coordinates names and having all my math easy to parse and write out, I have a much better time with this type of code, and I think you will, too.
    The goal of this article is more about understanding path syntax than it is about doing placement or how to leverage loops and other more basic things. So, I will not run you through the entire setup of each example. I’ll instead share snippets of the code, but they may be slightly adjusted from the CodePen or simplified to make this article easier to read. However, if there are specific questions about code that are not part of the text in the CodePen demos, the comment section is open.
    To keep this all framework-agnostic, the code is written in vanilla JavaScript.
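    As a rough sketch of what that naming approach can look like (the helper below is illustrative, not my actual setup), coordinates become named objects and the math stays readable:

    // Illustrative helper (assumed, not the article's actual setup):
    // create an SVG element from a tag name and an attributes object.
    const SVG_NS = "http://www.w3.org/2000/svg";

    const createSvgElement = (tag, attributes) => {
      const el = document.createElementNS(SVG_NS, tag);
      for (const [name, value] of Object.entries(attributes)) {
        el.setAttribute(name, value);
      }
      return el;
    };

    // Named coordinates instead of bare numbers in every attribute.
    const start = { x: 10, y: 10 };
    const end = { x: 100, y: 100 };

    document.querySelector("svg").append(
      createSvgElement("path", {
        d: `M${start.x} ${start.y} L${end.x} ${end.y}`,
        stroke: "currentColor",
        fill: "none",
      })
    );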
    Setting Up For Success
    As the path element relies on our understanding of some of the coordinates we plug into the commands, I think it is a lot easier if we have a bit of visual orientation. So, all of the examples will be coded on top of a visual representation of a traditional viewBox setup with the origin in the top-left corner (so, values in the shape of 0 0 ${width} ${height}). I added text labels as well to make it easier to point you to specific areas within the grid.
    Please note that I recommend being careful when adding text within the <text> element in SVG if you want your text to be accessible. If the graphic relies on text scaling like the rest of your website, it would be better to have it rendered through HTML. But for our examples here, it should be sufficient.
    So, this is what we’ll be plotting on top of:
    See the Pen SVG Viewbox Grid Visual by Myriam.
    Alright, we now have a ViewBox Visualizing Grid. I think we’re ready for our first session with the beast.
    Enter path And The All-Powerful d Attribute
    The <path> element has a d attribute, which speaks its own language. So, within d, you’re talking in terms of “commands”. When I think of non-path versus path elements, I like to think that the reason why we have to write much more complex drawing instructions is this: All non-path elements are just dumber paths. In the background, they have one pre-drawn path shape that they will always render based on a few parameters you pass in. But path has no default shape. The shape logic has to be exposed to you, while it can be neatly hidden away for all other elements. Let’s learn about those commands.
    Where It All Begins: M
    The first command, which is where each path begins, is the M command, which moves the pen to a point. This command places your starting point, but it does not draw a single thing. A path with just an M command is an auto-delete when cleaning up SVG files. It takes two arguments: the x and y coordinates of your start position.

    const uselessPathCommand = `M${start.x} ${start.y}`;

    Basic Line Commands: L, H, V
    These are fun and easy: L, H, and V all draw a line from the current point to the point specified. L takes two arguments: the x and y positions of the point you want to draw to.

    const pathCommandL = `M${start.x} ${start.y} L${end.x} ${end.y}`;

    H and V, on the other hand, only take one argument because they are only drawing a line in one direction. For H, you specify the x position, and for V, you specify the y position. The other value is implied.

    const pathCommandH = `M${start.x} ${start.y} H${end.x}`;
    const pathCommandV = `M${start.x} ${start.y} V${end.y}`;

    To visualize how this works, I created a function that draws the path, as well as points with labels on them, so we can see what happens.
    See the Pen Simple Lines with path by Myriam.
    We have three lines in that image. The L command is used for the red path. It starts with M at (10,10), then moves diagonally down to (100,100). The command is: M10 10 L100 100.
    The blue line is horizontal. It starts at (10,55) and should end at (100,55). We could use the L command, but we’d have to write 55 again. So, instead, we write M10 55 H100, and then SVG knows to look back at the y value of M for the y value of H.
    It’s the same thing for the green line, but when we use the V command, SVG knows to refer back to the x value of M for the x value of V.
    If we compare the resulting horizontal path with the same implementation in a <line> element, we may:

    1. Notice how much more efficient path can be, and
    2. Remove quite a bit of meaning for anyone who doesn’t speak path.

    Because, as we look at these strings, one of them is called “line”. And while the rest doesn’t mean anything out of context, the line definitely conjures a specific image in our heads.
    <path d="M 10 55 H 100" />
    <line x1="10" y1="55" x2="100" y2="55" />

    Making Polygons And Polylines With Z
    In the previous section, we learned how path can behave like <line>, which is pretty cool. But it can do more. It can also act like polyline and polygon.
    Remember how those two basically work the same, but polygon connects the first and last point, while polyline does not? The path element can do the same thing. There is a separate command to close the path with a line: the Z command.

    const polyline2Points = `M${start.x} ${start.y} L${p1.x} ${p1.y} L${p2.x} ${p2.y}`;
    const polygon2Points = `M${start.x} ${start.y} L${p1.x} ${p1.y} L${p2.x} ${p2.y} Z`;

    So, let’s see this in action and create a repeating triangle shape. Every odd time, it’s open, and every even time, it’s closed. Pretty neat!
    See the Pen Alternating Triangles by Myriam.
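    A minimal sketch of how such an alternating row could be generated (the size, gap, and count below are illustrative values, not the demo’s actual code):

    // Sketch: build a row of triangle paths, closing every second one
    // with Z so open and closed shapes alternate. Values are illustrative.
    const size = 30;
    const gap = 10;

    const triangleDs = Array.from({ length: 6 }, (_, i) => {
      const startX = i * (size + gap);
      const d = `M${startX} 0 L${startX + size * 0.866} ${size / 2} L${startX} ${size}`;
      // 1st, 3rd, 5th... stay open (polyline-like); 2nd, 4th... get Z.
      return i % 2 === 0 ? d : `${d} Z`;
    });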
    When it comes to comparing path versus polygon and polyline, the other tags tell us about their names, but I would argue that fewer people know what a polygon is versus what a line is. The argument to use these two tags over path for legibility is weak, in my opinion, and I guess you’d probably agree that this looks like equal levels of meaningless string given to an SVG element.
    <path d="M0 0 L86.6 50 L0 100 Z" />
    <polygon points="0,0 86.6,50 0,100" />

    <path d="M0 0 L86.6 50 L0 100" />
    <polyline points="0,0 86.6,50 0,100" />

    Relative Commands: m, l, h, v
    All of the line commands exist in absolute and relative versions. The difference is that the relative commands are lowercase, e.g., m, l, h, and v. The relative commands are always relative to the last point, so instead of declaring an x value, you’re declaring a dx value, saying this is how many units you’re moving.
    Before we look at the example visually, I want you to look at the following three-line commands. Try not to look at the CodePen beforehand.
    const lines = [
      { d: `M10 10 L 10 30 L 30 30`, color: "var(--_red)" },
      { d: `M40 10 l 0 20 l 20 0`, color: "var(--_blue)" },
      { d: `M70 10 l 0 20 L 90 30`, color: "var(--_green)" }
    ];

    As I mentioned, I hate looking at numbers without meaning, but there is one number whose meaning is pretty constant in most contexts: 0. Seeing a 0 in combination with a command I just learned means relative manages to instantly tell me that nothing is happening. Seeing l 0 20 by itself tells me that this line only moves along one axis instead of two.
    And looking at that entire blue path command, the repeated 20 value gives me a sense that the shape might have some regularity to it. The first path does a bit of that by repeating 10 and 30. But the third? As someone who can’t do math in my head, that third string gives me nothing.
    Now, you might be surprised, but they all draw the same shape, just in different places.
    See the Pen SVG Compound Paths by Myriam.
    So, how valuable is it that we can recognize the regularity in the blue path? Not very, in my opinion. In some cases, going with the relative value is easier than an absolute one. In other cases, the absolute is king. Neither is better nor worse.
    And, in all cases, that previous example would be much more efficient if it were set up with a variable for the gap, a variable for the shape size, and a function to generate the path definition that’s called from within a loop so it can take in the index to properly calculate the start point.
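    A sketch of that refactor (the names are illustrative) might look like this, with relative commands keeping the shape definition identical for every index:

    // Sketch of the suggested refactor: gap and size live in variables,
    // and one function derives each start point from the index.
    const shapeSize = 20;
    const shapeGap = 30;

    const linePathD = (index) => {
      const startX = 10 + index * shapeGap;
      const startY = 10;
      // Down by shapeSize, then right by shapeSize, in relative commands.
      return `M${startX} ${startY} l 0 ${shapeSize} l ${shapeSize} 0`;
    };

    const lineDs = Array.from({ length: 3 }, (_, i) => linePathD(i));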

    Jumping Points: How To Make Compound Paths
    Another very useful thing is something you don’t see visually in the previous CodePen, but it relates to the grid and its code.
    I snuck in a grid drawing update.
    With the method used in earlier examples, using line to draw the grid, the above CodePen would’ve rendered the grid with 14 separate elements. If you go and inspect the final code of that last CodePen, you’ll notice that there is just a single path element within the .grid group.
    It looks like this, which is not fun to look at but holds the secret to how it’s possible:

    <path d="M0 0 H110 M0 10 H110 M0 20 H110 M0 30 H110 M0 0 V45 M10 0 V45 M20 0 V45 M30 0 V45 M40 0 V45 M50 0 V45 M60 0 V45 M70 0 V45 M80 0 V45 M90 0 V45" stroke="currentColor" stroke-width="0.2" fill="none"></path>

    If we take a close look, we may notice that there are multiple M commands. This is the magic of compound paths.
    Since the M/m commands don’t actually draw and just place the cursor, a path can have jumps.

    So, whenever we have multiple paths that share common styling and don’t need to have separate interactions, we can just chain them together to make our code shorter.
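    As a sketch of how that grid string could be generated (the loop structure is an assumption; the dimensions match the path above):

    // Sketch: chain M...H and M...V segments into one compound path.
    let d = "";
    for (let y = 0; y <= 30; y += 10) {
      d += `M0 ${y} H110 `; // horizontal grid lines
    }
    for (let x = 0; x <= 90; x += 10) {
      d += `M${x} 0 V45 `; // vertical grid lines
    }

    const grid = document.createElementNS("http://www.w3.org/2000/svg", "path");
    grid.setAttribute("d", d.trim());
    grid.setAttribute("stroke", "currentColor");
    grid.setAttribute("stroke-width", "0.2");
    grid.setAttribute("fill", "none");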
    Coming Up Next
    Armed with this knowledge, we’re now able to replace line, polyline, and polygon with path commands and combine them in compound paths. But there is so much more to uncover because path doesn’t just offer foreign-language versions of lines but also gives us the option to code circles and ellipses that have open space and can sometimes also bend, twist, and turn. We’ll refer to those as curves and arcs, and discuss them more explicitly in the next article.
    Further Reading On SmashingMag

    “Mastering SVG Arcs,” Akshay Gupta
    “Accessible SVGs: Perfect Patterns For Screen Reader Users,” Carie Fisher
    “Easy SVG Customization And Animation: A Practical Guide,” Adrian Bece
    “Magical SVG Techniques,” Cosima Mielke
    SMASHINGMAGAZINE.COM
    Decoding The SVG path Element: Line Commands
  • Creating The “Moving Highlight” Navigation Bar With JavaScript And CSS

    I recently came across an old jQuery tutorial demonstrating a “moving highlight” navigation bar and decided the concept was due for a modern upgrade. With this pattern, the border around the active navigation item animates directly from one element to another as the user clicks on menu items. In 2025, we have much better tools to manipulate the DOM via vanilla JavaScript. New features like the View Transition API make progressive enhancement more easily achievable and handle a lot of the animation minutiae.
    In this tutorial, I will demonstrate two methods of creating the “moving highlight” navigation bar using plain JavaScript and CSS. The first example uses the getBoundingClientRect method to explicitly animate the border between navigation bar items when they are clicked. The second example achieves the same functionality using the new View Transition API.
    The Initial Markup
    Let’s assume that we have a single-page application where content changes without the page being reloaded. The starting HTML and CSS are your standard navigation bar with an additional div element containing an id of #highlight. We give the first navigation item a class of .active.
    See the Pen Moving Highlight Navbar Starting Markup by Blake Lundquist.
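    For reference, the starting markup implied by that description looks something like the following (reconstructed from the diff later in this article, so treat the link targets as assumptions):

    <nav>
      <div id="highlight"></div>
      <a href="#" class="active">Home</a>
      <a href="#services">Services</a>
      <a href="#about">About</a>
      <a href="#contact">Contact</a>
    </nav>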
    For this version, we will position the #highlight element around the element with the .active class to create a border. We can utilize absolute positioning and animate the element across the navigation bar to create the desired effect. We’ll hide it off-screen initially by adding left: -200px and include transition styles for all properties so that any changes in the position and size of the element will happen gradually.
    #highlight {
    z-index: 0;
    position: absolute;
    height: 100%;
    width: 100px;
    left: -200px;
    border: 2px solid green;
    box-sizing: border-box;
    transition: all 0.2s ease;
    }

    Add A Boilerplate Event Handler For Click Interactions
    We want the highlight element to animate when a user changes the .active navigation item. Let’s add a click event handler to the nav element, then filter for events caused only by elements matching our desired selector. In this case, we only want to change the .active nav item if the user clicks on a link that does not already have the .active class.
    Initially, we can call console.log to ensure the handler fires only when expected:

    const navbar = document.querySelector("nav");

    navbar.addEventListener("click", (event) => {
      // return if the clicked element doesn't have the correct selector
      if (!event.target.matches("a:not(.active)")) {
        return;
      }

      console.log("click");
    });

    Open your browser console and try clicking different items in the navigation bar. You should only see "click" being logged when you select a new item in the navigation bar.
    Now that we know our event handler is working on the correct elements, let’s add code to move the .active class to the navigation item that was clicked. We can use the object passed into the event handler to find the element that initialized the event and give that element a class of .active after removing it from the previously active item.

    const navbar = document.querySelector("nav");

    navbar.addEventListener("click", (event) => {
      // return if the clicked element doesn't have the correct selector
      if (!event.target.matches("a:not(.active)")) {
        return;
      }

    - console.log("click");
    + document.querySelector(".active").classList.remove("active");
    + event.target.classList.add("active");

    });

    Our #highlight element needs to move across the navigation bar and position itself around the active item. Let’s write a function to calculate a new position and width. Since the #highlight selector has transition styles applied, it will move gradually when its position changes.
    Using getBoundingClientRect, we can get information about the position and size of an element. We calculate the width of the active navigation item and its offset from the left boundary of the parent element. Then, we assign styles to the highlight element so that its size and position match.

    // handler for moving the highlight
    const moveHighlight = () => {
      const activeNavItem = document.querySelector(".active");
      const highlighterElement = document.querySelector("#highlight");

      const width = activeNavItem.offsetWidth;

      const itemPos = activeNavItem.getBoundingClientRect();
      const navbarPos = navbar.getBoundingClientRect();
      const relativePosX = itemPos.left - navbarPos.left;

      const styles = {
        left: `${relativePosX}px`,
        width: `${width}px`,
      };

      Object.assign(highlighterElement.style, styles);
    };

    Let’s call our new function when the click event fires:

    navbar.addEventListener("click", (event) => {
      // return if the clicked element doesn't have the correct selector
      if (!event.target.matches("a:not(.active)")) {
        return;
      }

      document.querySelector(".active").classList.remove("active");
      event.target.classList.add("active");

    + moveHighlight();
    });

    Finally, let’s also call the function immediately so that the border moves behind our initial active item when the page first loads:
    // handler for moving the highlight
    const moveHighlight = () => {
      // ...
    };

    // display the highlight when the page loads
    moveHighlight();

    Now, the border moves across the navigation bar when a new item is selected. Try clicking the different navigation links to animate the navigation bar.
    See the Pen Moving Highlight Navbar by Blake Lundquist.
    That only took a few lines of vanilla JavaScript and could easily be extended to account for other interactions, like mouseover events. In the next section, we will explore refactoring this feature using the View Transition API.
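    As one hedged sketch of that mouseover extension (an assumption, not part of the demo), the highlight could follow the hovered link and snap back to the active item when the pointer leaves the bar:

    // Sketch (assumed extension, not the demo's code): frame whichever
    // element is passed in, reusing the same measurement logic.
    const moveHighlightTo = (item) => {
      const highlighterElement = document.querySelector("#highlight");
      const itemPos = item.getBoundingClientRect();
      const navbarPos = navbar.getBoundingClientRect();

      Object.assign(highlighterElement.style, {
        left: `${itemPos.left - navbarPos.left}px`,
        width: `${item.offsetWidth}px`,
      });
    };

    navbar.addEventListener("mouseover", (event) => {
      if (event.target.matches("a")) {
        moveHighlightTo(event.target);
      }
    });

    navbar.addEventListener("mouseleave", () => {
      moveHighlightTo(document.querySelector(".active"));
    });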
    Using The View Transition API
    The View Transition API provides functionality to create animated transitions between website views. Under the hood, the API creates snapshots of “before” and “after” views and then handles transitioning between them. View transitions are useful for creating animations between documents, providing the native-app-like user experience featured in frameworks like Astro. However, the API also provides handlers meant for SPA-style applications. We will use it to reduce the JavaScript needed in our implementation and more easily create fallback functionality.
    For this approach, we no longer need a separate #highlight element. Instead, we can style the .active navigation item directly using pseudo-selectors and let the View Transition API handle the animation between the before-and-after UI states when a new navigation item is clicked.
    We’ll start by getting rid of the #highlight element and its associated CSS and replacing it with styles for the nav a::after pseudo-selector:
    <nav>
    - <div id="highlight"></div>
    <a href="#" class="active">Home</a>
    <a href="#services">Services</a>
    <a href="#about">About</a>
    <a href="#contact">Contact</a>
    </nav>

    - #highlight {
    - z-index: 0;
    - position: absolute;
    - height: 100%;
    - width: 0;
    - left: 0;
    - box-sizing: border-box;
    - transition: all 0.2s ease;
    - }

    + nav a::after {
    + content: " ";
    + position: absolute;
    + left: 0;
    + top: 0;
    + width: 100%;
    + height: 100%;
    + border: none;
    + box-sizing: border-box;
    + }

    For the .active class, we include the view-transition-name property, thus unlocking the magic of the View Transition API. Once we trigger the view transition and change the location of the .active navigation item in the DOM, “before” and “after” snapshots will be taken, and the browser will animate the border across the bar. We’ll give our view transition the name of highlight, but we could theoretically give it any name.
    nav a.active::after {
    border: 2px solid green;
    view-transition-name: highlight;
    }

    Once we have a selector that contains a view-transition-name property, the only remaining step is to trigger the transition using the startViewTransition method and pass in a callback function.

    const navbar = document.querySelector("nav");

    // Change the active nav item on click
    navbar.addEventListener("click", (event) => {
      if (!event.target.matches("a:not(.active)")) {
        return;
      }

      document.startViewTransition(() => {
        document.querySelector(".active").classList.remove("active");
        event.target.classList.add("active");
      });
    });

    Above is a revised version of the click handler. Instead of doing all the calculations for the size and position of the moving border ourselves, the View Transition API handles all of it for us. We only need to call document.startViewTransition and pass in a callback function to change the item that has the .active class!
    Adjusting The View Transition
    At this point, when clicking on a navigation link, you’ll notice that the transition works, but some strange sizing issues are visible. This sizing inconsistency is caused by aspect ratio changes during the course of the view transition. We won’t go into detail here, but Jake Archibald has a detailed explanation you can read for more information. In short, to ensure the height of the border stays uniform throughout the transition, we need to declare an explicit height for the ::view-transition-old and ::view-transition-new pseudo-selectors representing a static snapshot of the old and new view, respectively.
    ::view-transition-old(highlight) {
      height: 100%;
    }

    ::view-transition-new(highlight) {
      height: 100%;
    }

    Let’s do some final refactoring to tidy up our code by moving the callback to a separate function and adding a fallback for when view transitions aren’t supported:

    const navbar = document.querySelector("nav");

    // change the item that has the .active class applied
    const setActiveElement = (elem) => {
      document.querySelector(".active").classList.remove("active");
      elem.classList.add("active");
    };

    // Start view transition and pass in a callback on click
    navbar.addEventListener("click", (event) => {
      if (!event.target.matches("a:not(.active)")) {
        return;
      }

      // Fallback for browsers that don't support View Transitions:
      if (!document.startViewTransition) {
        setActiveElement(event.target);
        return;
      }

      document.startViewTransition(() => setActiveElement(event.target));
    });

    Here’s our view transition-powered navigation bar! Observe the smooth transition when you click on the different links.
    See the Pen Moving Highlight Navbar with View Transition by Blake Lundquist.
    Conclusion
    Animations and transitions between website UI states used to require many kilobytes of external libraries, along with verbose, confusing, and error-prone code, but vanilla JavaScript and CSS have since incorporated features to achieve native-app-like interactions without breaking the bank. We demonstrated this by implementing the “moving highlight” navigation pattern using two approaches: CSS transitions combined with the getBoundingClientRect method and the View Transition API.
    Resources

    getBoundingClientRect method documentation
    View Transition API documentation
    “View Transitions: Handling Aspect Ratio Changes” by Jake Archibald
    SMASHINGMAGAZINE.COM
    Creating The “Moving Highlight” Navigation Bar With JavaScript And CSS
I recently came across an old jQuery tutorial demonstrating a “moving highlight” navigation bar and decided the concept was due for a modern upgrade. With this pattern, the border around the active navigation item animates directly from one element to another as the user clicks on menu items. In 2025, we have much better tools to manipulate the DOM via vanilla JavaScript. New features like the View Transition API make progressive enhancement more easily achievable and handle a lot of the animation minutiae.

In this tutorial, I will demonstrate two methods of creating the “moving highlight” navigation bar using plain JavaScript and CSS. The first example uses the getBoundingClientRect method to explicitly animate the border between navigation bar items when they are clicked. The second example achieves the same functionality using the new View Transition API.

The Initial Markup

Let’s assume that we have a single-page application where content changes without the page being reloaded. The starting HTML and CSS are your standard navigation bar with an additional div element containing an id of #highlight. We give the first navigation item a class of .active. See the Pen Moving Highlight Navbar Starting Markup [forked] by Blake Lundquist.

For this version, we will position the #highlight element around the element with the .active class to create a border. We can utilize absolute positioning and animate the element across the navigation bar to create the desired effect. We’ll hide it off-screen initially by adding left: -200px and include transition styles for all properties so that any changes in the position and size of the element will happen gradually.

#highlight {
  z-index: 0;
  position: absolute;
  height: 100%;
  width: 100px;
  left: -200px;
  border: 2px solid green;
  box-sizing: border-box;
  transition: all 0.2s ease;
}

Add A Boilerplate Event Handler For Click Interactions

We want the highlight element to animate when a user changes the .active navigation item. Let’s add a click event handler to the nav element, then filter for events caused only by elements matching our desired selector. In this case, we only want to change the .active nav item if the user clicks on a link that does not already have the .active class. Initially, we can call console.log to ensure the handler fires only when expected:

const navbar = document.querySelector('nav');

navbar.addEventListener('click', function (event) {
  // return if the clicked element doesn't have the correct selector
  if (!event.target.matches('nav a:not(.active)')) {
    return;
  }

  console.log('click');
});

Open your browser console and try clicking different items in the navigation bar. You should only see "click" being logged when you select a new item in the navigation bar.
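One caveat worth noting: event.target is the exact element that received the click, so this delegation pattern can miss clicks if the links ever gain child elements (an icon or a span, for example). A common defensive variant, sketched below under that assumption, walks up from the click target with closest() before applying the same .active check:

navbar.addEventListener('click', function (event) {
  // walk up from the click target to the nearest nav link,
  // in case a child element inside the link was clicked
  const link = event.target.closest('nav a');

  // return if the click wasn't on a link, or the link is already active
  if (!link || link.classList.contains('active')) {
    return;
  }

  console.log('click');
});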
Now that we know our event handler is working on the correct elements, let’s add code to move the .active class to the navigation item that was clicked. We can use the object passed into the event handler to find the element that initialized the event and give that element a class of .active after removing it from the previously active item.

const navbar = document.querySelector('nav');

navbar.addEventListener('click', function (event) {
  // return if the clicked element doesn't have the correct selector
  if (!event.target.matches('nav a:not(.active)')) {
    return;
  }

-  console.log('click');
+  document.querySelector('nav a.active').classList.remove('active');
+  event.target.classList.add('active');
});

Our #highlight element needs to move across the navigation bar and position itself around the active item. Let’s write a function to calculate a new position and width. Since the #highlight selector has transition styles applied, it will move gradually when its position changes.

Using getBoundingClientRect, we can get information about the position and size of an element. We calculate the width of the active navigation item and its offset from the left boundary of the parent element. Then, we assign styles to the highlight element so that its size and position match.

// handler for moving the highlight
const moveHighlight = () => {
  const activeNavItem = document.querySelector('a.active');
  const highlighterElement = document.querySelector('#highlight');
  const width = activeNavItem.offsetWidth;

  const itemPos = activeNavItem.getBoundingClientRect();
  const navbarPos = navbar.getBoundingClientRect();
  const relativePosX = itemPos.left - navbarPos.left;

  const styles = {
    left: `${relativePosX}px`,
    width: `${width}px`,
  };

  Object.assign(highlighterElement.style, styles);
}

Let’s call our new function when the click event fires:

navbar.addEventListener('click', function (event) {
  // return if the clicked element doesn't have the correct selector
  if (!event.target.matches('nav a:not(.active)')) {
    return;
  }

  document.querySelector('nav a.active').classList.remove('active');
  event.target.classList.add('active');

+  moveHighlight();
});

Finally, let’s also call the function immediately so that the border moves behind our initial active item when the page first loads:

// handler for moving the highlight
const moveHighlight = () => {
  // ...
}

// display the highlight when the page loads
moveHighlight();

Now, the border moves across the navigation bar when a new item is selected. Try clicking the different navigation links to animate the navigation bar. See the Pen Moving Highlight Navbar [forked] by Blake Lundquist.

That only took a few lines of vanilla JavaScript and could easily be extended to account for other interactions, like mouseover events.
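One practical wrinkle with this measurement-based approach is that the highlight’s geometry is only computed when a click happens, so it can drift out of alignment whenever the navigation bar itself changes size, such as on window resize or after web fonts load. A minimal sketch of a fix, assuming the moveHighlight function above is in scope, is simply to re-measure on resize:

// re-measure the active item whenever the viewport changes size
window.addEventListener('resize', () => {
  moveHighlight();
});

For very high-frequency resize events you might throttle or debounce this call, but the existing transition styles already smooth out most of the jitter.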
In the next section, we will explore refactoring this feature using the View Transition API.

Using The View Transition API

The View Transition API provides functionality to create animated transitions between website views. Under the hood, the API creates snapshots of “before” and “after” views and then handles transitioning between them. View transitions are useful for creating animations between documents, providing the native-app-like user experience featured in frameworks like Astro. However, the API also provides handlers meant for SPA-style applications. We will use it to reduce the JavaScript needed in our implementation and more easily create fallback functionality.

For this approach, we no longer need a separate #highlight element. Instead, we can style the .active navigation item directly using pseudo-selectors and let the View Transition API handle the animation between the before-and-after UI states when a new navigation item is clicked.

We’ll start by getting rid of the #highlight element and its associated CSS and replacing it with styles for the nav a::after pseudo-selector:

<nav>
-  <div id="highlight"></div>
  <a href="#" class="active">Home</a>
  <a href="#services">Services</a>
  <a href="#about">About</a>
  <a href="#contact">Contact</a>
</nav>

- #highlight {
-   z-index: 0;
-   position: absolute;
-   height: 100%;
-   width: 0;
-   left: 0;
-   box-sizing: border-box;
-   transition: all 0.2s ease;
- }

+ nav a::after {
+   content: " ";
+   position: absolute;
+   left: 0;
+   top: 0;
+   width: 100%;
+   height: 100%;
+   border: none;
+   box-sizing: border-box;
+ }

For the .active class, we include the view-transition-name property, thus unlocking the magic of the View Transition API. Once we trigger the view transition and change the location of the .active navigation item in the DOM, “before” and “after” snapshots will be taken, and the browser will animate the border across the bar. We’ll give our view transition the name of highlight, but we could theoretically give it any name.

nav a.active::after {
  border: 2px solid green;
  view-transition-name: highlight;
}

Once we have a selector that contains a view-transition-name property, the only remaining step is to trigger the transition using the startViewTransition method and pass in a callback function.

const navbar = document.querySelector('nav');

// Change the active nav item on click
navbar.addEventListener('click', async function (event) {
  if (!event.target.matches('nav a:not(.active)')) {
    return;
  }

  document.startViewTransition(() => {
    document.querySelector('nav a.active').classList.remove('active');
    event.target.classList.add('active');
  });
});

Above is a revised version of the click handler. Instead of doing all the calculations for the size and position of the moving border ourselves, the View Transition API handles all of it for us. We only need to call document.startViewTransition and pass in a callback function to change the item that has the .active class!

Adjusting The View Transition

At this point, when clicking on a navigation link, you’ll notice that the transition works, but some strange sizing issues are visible. This sizing inconsistency is caused by aspect ratio changes during the course of the view transition. We won’t go into detail here, but Jake Archibald has a detailed explanation you can read for more information. In short, to ensure the height of the border stays uniform throughout the transition, we need to declare an explicit height for the ::view-transition-old and ::view-transition-new pseudo-selectors representing a static snapshot of the old and new view, respectively.

::view-transition-old(highlight) {
  height: 100%;
}

::view-transition-new(highlight) {
  height: 100%;
}
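As an aside not covered above: because the transition is driven by CSS animations on the view transition pseudo-elements, its timing can be tuned with ordinary animation properties. For example, a rule like the following sketch, reusing the same highlight transition name, slows the movement down slightly:

/* adjust the speed of the moving highlight */
::view-transition-group(highlight) {
  animation-duration: 0.4s;
}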
Let’s do some final refactoring to tidy up our code by moving the callback to a separate function and adding a fallback for when view transitions aren’t supported:

const navbar = document.querySelector('nav');

// change the item that has the .active class applied
const setActiveElement = (elem) => {
  document.querySelector('nav a.active').classList.remove('active');
  elem.classList.add('active');
}

// Start view transition and pass in a callback on click
navbar.addEventListener('click', async function (event) {
  if (!event.target.matches('nav a:not(.active)')) {
    return;
  }

  // Fallback for browsers that don't support View Transitions:
  if (!document.startViewTransition) {
    setActiveElement(event.target);
    return;
  }

  document.startViewTransition(() => setActiveElement(event.target));
});

Here’s our view transition-powered navigation bar! Observe the smooth transition when you click on the different links. See the Pen Moving Highlight Navbar with View Transition [forked] by Blake Lundquist.

Conclusion

Animations and transitions between website UI states used to require many kilobytes of external libraries, along with verbose, confusing, and error-prone code, but vanilla JavaScript and CSS have since incorporated features to achieve native-app-like interactions without breaking the bank. We demonstrated this by implementing the “moving highlight” navigation pattern using two approaches: CSS transitions combined with the getBoundingClientRect() method and the View Transition API.

Resources

getBoundingClientRect() method documentation
View Transition API documentation
“View Transitions: Handling Aspect Ratio Changes” by Jake Archibald
  • BenchmarkQED: Automated benchmarking of RAG systems

One of the key use cases for generative AI involves answering questions over private datasets, with retrieval-augmented generation (RAG) as the go-to framework. As new RAG techniques emerge, there’s a growing need to benchmark their performance across diverse datasets and metrics.
    To meet this need, we’re introducing BenchmarkQED, a new suite of tools that automates RAG benchmarking at scale, available on GitHub. It includes components for query generation, evaluation, and dataset preparation, each designed to support rigorous, reproducible testing.  
BenchmarkQED complements the RAG methods in our open-source GraphRAG library, enabling users to run a GraphRAG-style evaluation across models, metrics, and datasets. GraphRAG uses a large language model (LLM) to generate and summarize entity-based knowledge graphs, producing more comprehensive and diverse answers than standard RAG for large-scale tasks.
    In this post, we walk through the core components of BenchmarkQED that contribute to the overall benchmarking process. We also share some of the latest benchmark results comparing our LazyGraphRAG system to competing methods, including a vector-based RAG with a 1M-token context window, where the leading LazyGraphRAG configuration showed significant win rates across all combinations of quality metrics and query classes.
    In the paper, we distinguish between local queries, where answers are found in a small number of text regions, and sometimes even a single region, and global queries, which require reasoning over large portions of or even the entire dataset. 
    Conventional vector-based RAG excels at local queries because the regions containing the answer to the query resemble the query itself and can be retrieved as the nearest neighbor in the vector space of text embeddings. However, it struggles with global questions, such as, “What are the main themes of the dataset?” which require understanding dataset qualities not explicitly stated in the text.  
    AutoQ: Automated query synthesis
This limitation motivated the development of GraphRAG, a system designed to answer global queries. GraphRAG’s evaluation requirements subsequently led to the creation of AutoQ, a method for synthesizing these global queries for any dataset.
AutoQ extends this approach by generating synthetic queries across the spectrum of queries, from local to global. It defines four distinct classes based on the source and scope of the query (Figure 1, top), forming a logical progression along the spectrum (Figure 1, bottom).
    Figure 1. Construction of a 2×2 design space for synthetic query generation with AutoQ, showing how the four resulting query classes map onto the local-global query spectrum. 
    AutoQ can be configured to generate any number and distribution of synthetic queries along these classes, enabling consistent benchmarking across datasets without requiring user customization. Figure 2 shows the synthesis process and sample queries from each class, using an AP News dataset.
    Figure 2. Synthesis process and example query for each of the four AutoQ query classes. 

    AutoE: Automated evaluation framework 
    Our evaluation of GraphRAG focused on analyzing key qualities of answers to global questions. The following qualities were used for the current evaluation:

    Comprehensiveness: Does the answer address all relevant aspects of the question? 
    Diversity: Does it present varied perspectives or insights? 
    Empowerment: Does it help the reader understand and make informed judgments? 
    Relevance: Does it address what the question is specifically asking?  

    The AutoE component scales evaluation of these qualities using the LLM-as-a-Judge method. It presents pairs of answers to an LLM, along with the query and target metric, in counterbalanced order. The model determines whether the first answer wins, loses, or ties with the second. Over a set of queries, whether from AutoQ or elsewhere, this produces win rates between competing methods. When ground truth is available, AutoE can also score answers on correctness, completeness, and related metrics.
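To make the counterbalancing concrete, here is a purely illustrative sketch of how such a pairwise comparison could be scored. The judge function is a hypothetical stand-in for the LLM call (it is not BenchmarkQED’s actual API) and is assumed to return 1, 0.5, or 0 depending on whether the first answer wins, ties, or loses:

// judge(query, first, second, metric) is a hypothetical LLM call
// returning 1 (first wins), 0.5 (tie), or 0 (first loses)
const scorePair = async (query, a, b, metric) => {
  const forward = await judge(query, a, b, metric);   // a shown first
  const backward = await judge(query, b, a, metric);  // b shown first
  // averaging the two orderings cancels out position bias
  return (forward + (1 - backward)) / 2;
};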
An illustrative evaluation is shown in Figure 3. Using a dataset of 1,397 AP News articles on health and healthcare, AutoQ generated 50 queries per class (200 total). AutoE then compared LazyGraphRAG to a competing RAG method, running six trials per query across four metrics, using GPT-4.1 as a judge.
    These trial-level results were aggregated using metric-based win rates, where each trial is scored 1 for a win, 0.5 for a tie, and 0 for a loss, and then averaged to calculate the overall win rate for each RAG method.
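In other words, for each method the aggregation reduces to a single fraction:

win rate = (wins + 0.5 × ties) / total trials

As a purely illustrative example with made-up counts, a method that wins 40 trials, ties 10, and loses 22 of its 72 trials scores (40 + 0.5 × 10) / 72 = 45 / 72 = 62.5%.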
Figure 3. Win rates of four LazyGraphRAG (LGR) configurations across methods, broken down by the AutoQ query class and averaged across AutoE’s four metrics: comprehensiveness, diversity, empowerment, and relevance. LazyGraphRAG outperforms comparison conditions where the bar is above 50%.
The four LazyGraphRAG conditions (LGR_b200_c200, LGR_b50_c200, LGR_b50_c600, LGR_b200_c200_mini) differ by query budget (b50, b200) and chunk size (c200, c600). All used GPT-4o mini for relevance tests and GPT-4o for query expansion (to five subqueries) and answer generation, except for LGR_b200_c200_mini, which used GPT-4o mini throughout.
Comparison systems were GraphRAG (Local, Global, and Drift Search), Vector RAG with 8k- and 120k-token windows, and three published methods: LightRAG, RAPTOR, and TREX. All methods were limited to the same 8k tokens for answer generation. GraphRAG Global Search used level 2 of the community hierarchy.
LazyGraphRAG outperformed every comparison condition using the same generative model (GPT-4o), winning all 96 comparisons, with all but one reaching statistical significance. The best overall performance came from the larger budget, smaller chunk size configuration (LGR_b200_c200). For DataLocal queries, the smaller budget (LGR_b50_c200) performed slightly better, likely because fewer chunks were relevant. For ActivityLocal queries, the larger chunk size (LGR_b50_c600) had a slight edge, likely because longer chunks provide a more coherent context.
Competing methods performed relatively better on the query classes for which they were designed: GraphRAG Global for global queries and Vector RAG for local queries. GraphRAG Drift Search, which combines both strategies, posed the strongest challenge overall.
Increasing Vector RAG’s context window from 8k to 120k tokens did not improve its performance compared to LazyGraphRAG. This raised the question of how LazyGraphRAG would perform against Vector RAG with a 1M-token context window containing most of the dataset.
Figure 4 shows the follow-up experiment comparing LazyGraphRAG to Vector RAG using GPT-4.1, whose 1M-token context window enabled this comparison. Even against the 1M-token window, LazyGraphRAG achieved higher win rates across all comparisons, failing to reach significance only for the relevance of answers to DataLocal queries. These queries tend to benefit most from Vector RAG’s ranking of directly relevant chunks, making it hard for LazyGraphRAG to generate answers that have greater relevance to the query, even though these answers may be dramatically more comprehensive, diverse, and empowering overall.
Figure 4. Win rates of LazyGraphRAG (LGR) over Vector RAG across different context window sizes, broken down by the four AutoQ query classes and four AutoE metrics: comprehensiveness, diversity, empowerment, and relevance. Bars above 50% indicate that LazyGraphRAG outperformed the comparison condition.
    AutoD: Automated data sampling and summarization
    Text datasets have an underlying topical structure, but the depth, breadth, and connectivity of that structure can vary widely. This variability makes it difficult to evaluate RAG systems consistently, as results may reflect the idiosyncrasies of the dataset rather than the system’s general capabilities.
The AutoD component addresses this by sampling datasets to meet a target specification, defined by the number of topic clusters (breadth) and the number of samples per cluster (depth). This creates consistency across datasets, enabling more meaningful comparisons, as structurally aligned datasets lead to comparable AutoQ queries, which in turn support consistent AutoE evaluations.
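For intuition, such a target specification boils down to two numbers. As a purely hypothetical illustration (not BenchmarkQED’s actual configuration format): a spec of 50 topic clusters with 20 samples per cluster would yield a structurally aligned dataset of 50 × 20 = 1,000 documents, regardless of the size or skew of the source corpus.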
    AutoD also includes tools for summarizing input or output datasets in a way that reflects their topical coverage. These summaries play an important role in the AutoQ query synthesis process, but they can also be used more broadly, such as in prompts where context space is limited.
    Since the release of the GraphRAG paper, we’ve received many requests to share the dataset of the Behind the Tech podcast transcripts we used in our evaluation. An updated version of this dataset is now available in the BenchmarkQED repository, alongside the AP News dataset containing 1,397 health-related articles, licensed for open release.  
    We hope these datasets, together with the BenchmarkQED tools, help accelerate benchmark-driven development of RAG systems and AI question-answering. We invite the community to try them on GitHub. 
  • Op-Ed: Could Canada benefit from adopting Finland’s architectural competition system?

As a Canadian who has spent the last two and a half years working as an intern architect in Helsinki, I have had questions like these on my mind. In my current role, I have had the opportunity to participate in numerous architectural competitions arranged by Finnish municipalities and public institutions. It has been my observation that the Finnish system of open, anonymous architectural competitions consistently produces elegant and highly functional public buildings at reasonable cost, and at great benefit to the everyday people the projects are intended to serve. Could Canada benefit from adopting a similar model?
‘Public project’ has never been a clearly defined term: it may bring to mind a bustling library for some while conjuring the image of a municipal power substation for others. In the context of this discussion, I will use the term to refer to projects that are explicitly in service of the broader public, such as community centres, museums, and other cultural venues.
    Finland’s architectural competition system
Frequented by nearly 2 million visitors per year, the Oodi Central Library in Helsinki, Finland, has become a thriving cultural hub and an internationally recognized symbol of Finnish design innovation. Designed by ALA Architects, the project was procured through a 2-stage, open, international architectural competition. Photo by Ninara

In Finland, most notable public projects begin with an architectural competition. Some are limited to invited participants only, but the majority of these competitions are open to international submissions. Importantly, the authors of any given proposal remain anonymous with regards to the jury. This ensures that all proposals are evaluated purely on quality, without bias towards established firms over lesser-known competitors. The project budget is known in advance to the competition entrants, and cost feasibility is an important factor weighed by the jury. However, the cost for the design services to be procured from the winning entry is fixed ahead of time, preventing companies from lowballing offers in the hopes of securing an interesting commission despite the inevitable compromises in quality that come with under-resourced design work. The result: inspired, functional public spaces are the norm, not the exception. Contrasted against the sea of forgettable public architecture to be found in cities large and small across Canada, the Finnish model paints a utopian picture.
Several award-winning projects at my current place of employment in Helsinki have been the result of successes in open architectural competitions. The origin of the firm itself stemmed from a winning competition entry for a church in a small village, submitted by the firm’s founder while he was still completing his architectural studies. At that time, many architecture firms in Finland were founded in this manner, with the publicity of a competition win serving as a career launching point for young architects. While less common today, many students and recent graduates still participate in these design competitions. On the occasion that a young practitioner wins a competition, they are required to assemble a team with the necessary expertise and qualifications to satisfy the requirements of the jury. I believe there is a direct link between the high-quality architectural outcomes of these competitions and the fact that they are conducted anonymously. The opening of these competitions to submissions from companies outside of Finland further enhances the diversity of entries and fosters international interest in the goings-on of Finland’s architectural scene. Nonetheless, it is worth acknowledging that exemplary projects have also resulted from invited and privately organized competitions. Ultimately, the mindset of the client, the selection of an appropriate jury, and the existence of sufficient incentives for architects to invest significant time in their proposals play a more critical role in shaping the quality of the final outcome.
    Tikkurila Church and Housing in Vantaa, Finland, hosts a diverse range of functions including a café, community event spaces and student housing. Designed by OOPEAA in collaboration with a local builder, the project was realized as the result of a competition organized by local Finnish and Swedish parishes. Photo by Marc Goodwin
Finland’s competition system, administered by the Finnish Association of Architects, is not limited to major public projects such as museums, libraries, and city halls. A significant number of idea competitions are organized seeking compelling visions for urban masterplans. The quality of this system has received international recognition. To quote a research paper from a Swedish university on the structure, criteria, and judgement process of Finnish architectural competitions, “The Finnish experience can provide a rich information source for scholars and students studying the structure and process of competition system and architectural judgement, as well as those concerned with commissioning and financing of competitions due to innovative solutions found in the realms of urban revitalization, poverty elimination, environmental pollution, cultural and socio-spatial renewals, and democratization of design and planning process.” This has not gone entirely under the radar in Canada. According to the website of the Royal Architectural Institute of Canada, “Competitions are common in countries such as Finland, Ireland, the United Kingdom, Australia, and New Zealand. These competitions have resulted in a high quality of design as well as creating public interest in the role of architecture in national and community life.”
    Canada’s architectural competition system
In Canada, the RAIC sets general competition guidelines, while provincial and territorial architect associations are typically responsible for the oversight of any endorsed architectural competition. Although the idea of implementing European architectural competition models has been gaining traction in recent years, competitions remain relatively rare even for significant public projects. While Canada has yet to fully embrace competition systems as a powerful tool for ensuring higher-quality public spaces, success stories from various corners of the country have opened up constructive conversations. In Edmonton, unconventional, competitive procurement efforts spearheaded by city architect Carol Belanger have produced some remarkable public buildings. This has not gone unnoticed in other parts of the country, where consistent banality is the norm for public projects.
    Jasper Place Branch Library designed by HCMA and Dub Architects is one of several striking projects in Edmonton built under reimagined commissioning processes which broaden the pool of design practices eligible to participate and give greater weight to design quality as an evaluation criterion. Photo by Hubert Kang
    The wider applicability of competition systems as a positive mechanism for securing better public architecture has also started to receive broader discussion. In my hometown of Ottawa, this system has been used to procure several powerful monuments and, more recently, to select a design for the redevelopment of a key city block across from Parliament Hill. The volume and quality of entries, including from internationally renowned architectural practices, attests to the strengths of the open competition format.
    Render of the winning entry for the Block 2 Redevelopment in Ottawa. This 2-stage competition was overseen directly by the RAIC. Design and render by Zeidler Architecture Inc. in cooperation with David Chipperfield Architects.
Despite these successes, there is significant room for improvement. A key barrier to wider adoption of competition practices, according to the RAIC, is “…that potential sponsors are not familiar with competitions or may consider the competition process to be complicated, expensive, and time consuming.” This is understandable for private actors, but an unsatisfactory answer in the case of public, taxpayer-funded projects. Finland’s success has come through the normalization of competitions for public project procurement. We should endeavour to do the same. Maintaining design contribution anonymity prior to jury decision has thus far been the exception, not the norm, in Canada. This reduces the credibility of the jury without improving the result. Additionally, the financing of such competitions has been piecemeal and inconsistent. For example, several world-class schools have been realized in Quebec as the result of competitions funded by a provincial investment. With the depletion of that fund, it is no longer clear if any further schools will be commissioned in Quebec under a similar model. While high-quality documentation has been produced, there is a risk that developed expertise will be lost if the team of professionals responsible for overseeing the process is not retained.
École du Zénith in Shefford, Quebec, designed by Pelletier de Fontenay + Leclerc Architectes, is one of six elegant and functional schools commissioned by the province through an anonymous competition process. Photo by James Brittain
    A path forward
    Now more than ever, it is essential that our public projects instill in us a sense of pride and reflect our uniquely Canadian values. This will continue to be a rare occurrence until more ambitious measures are taken to ensure the consistent realization of beautiful, innovative and functional public spaces that connect us with one another. In service of this objective, Canada should incentivize architectural competitions by normalizing their use for major public projects such as national museums, libraries and cultural centres. A dedicated Competitions Fund could be established to support provinces, territories and cities who demonstrate initiative in the pursuit of more ambitious, inspiring and equitable public projects. A National Competitions Expert could be appointed to ensure retention and dissemination of expertise. Maintaining the anonymity of competition entrants should be established as the norm. At a moment when talk of removing inter-provincial trade barriers has re-entered public discourse, why not consider striking down red tape that prevents out-of-province firms from participating in architectural competitions? Alas, one can dream. Competitions are no silver bullet. However, recent trials within our borders should give us confidence that architectural competitions are a relatively low-risk, high-reward proposition. To this end, Finland’s open, anonymous competition system offers a compelling case study from which we would be well served to take inspiration.

Isaac Edmonds is a Canadian working for OOPEAA – Office for Peripheral Architecture in Helsinki, Finland. His observations of the Finnish competition system’s ability to consistently produce functional, beautiful buildings inform his interest in procurement methods that elevate the quality of our shared public realm.
  • Raspberry Pi Imager 1.9.4 released bringing performance improvements, bug fixes and more

    David Uzondu · Neowin · Jun 5, 2025 05:12 EDT

    Raspberry Pi Imager 1.9.4 is now out, marking the first official release in its 1.9.x series. For anyone new to it, this tool from the Raspberry Pi Foundation first came out in March 2020. Its main job is to make getting an operating system onto a microSD card or USB drive for any Raspberry Pi computer super simple, even if you hate the command line. It handles downloading selected OS images and writing them correctly, cutting out several manual steps that used to trip people up, like finding the right image version or using complicated disk utility tools.
    This version brings solid user interface improvements for a smoother experience, along with internal tweaks that contribute to a more polished feel. Much work also went into localization, adding new Korean and Georgian translations and updating Chinese, German, Spanish, Italian, and many others. Naturally, a good number of bugs got squashed, including a fix for tricky long filename issues on Windows and an issue with the Escape key in the options popup.
    There are also platform-specific changes. Windows users get an installer built with Inno Setup, and the program files, installer, and uninstaller are now signed for better Windows security. For macOS, .app file naming in .dmg packages is fixed, and building the software is more reliable. Linux users can now hide system drives from the destination list, a great way to prevent accidentally wiping your main computer drives. The Linux AppImage also disables Wayland support by default.
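    That Linux destination-list change is easy to picture in miniature. The sketch below is a hedged illustration of the idea, not the Imager's actual implementation (which is C++/Qt): it walks the JSON tree from lsblk, which is where the --tree flag mentioned in the changelog comes in, and flags any disk hosting a system mount point. The mount-point names and the assumption of a recent util-linux are mine.

    import json
    import subprocess

    # Mount points that mark a disk as a "system drive" -- an assumption
    # for illustration; the real product may use different heuristics.
    SYSTEM_MOUNTS = {"/", "/boot", "/boot/efi", "[SWAP]"}

    def is_system_drive(device: dict) -> bool:
        """True if this device or any child is mounted somewhere critical."""
        if device.get("mountpoint") in SYSTEM_MOUNTS:
            return True
        return any(is_system_drive(c) for c in device.get("children", []))

    def candidate_drives() -> list[str]:
        """Disks that are safe to offer as write destinations."""
        # --tree nests partitions under their parent disk, matching the
        # changelog note about hiding partitions from top-level output.
        out = subprocess.run(
            ["lsblk", "--json", "--tree", "-o", "NAME,TYPE,MOUNTPOINT"],
            capture_output=True, text=True, check=True,
        ).stdout
        devices = json.loads(out)["blockdevices"]
        return [d["name"] for d in devices
                if d.get("type") == "disk" and not is_system_drive(d)]

    if __name__ == "__main__":
        print(candidate_drives())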

    The full list of changes is outlined below:

    Fixed minor errors in Simplified Chinese translation
    Updated translations for German, Catalan, Spanish, Slovak, Portuguese, Hebrew, Traditional Chinese, Italian, Korean, and Georgian
    Explicitly added --tree to lsblk to hide partitions from the top-level output
    CMake now displays the version as v1.9.1
    Added support for quiet uninstallation on Windows
    Applied regex to match SSH public keys during OS customization (see the sketch after this changelog)
    Updated dependencies:

    libarchive (3.7.4 → 3.7.7 → 3.8.0)
    zlib (removed preconfigured header → updated to 1.4.1.1)
    cURL (8.8 → 8.11.0 → 8.13.0)
    nghttp2 (updated to 1.65.0)
    zstd (updated to 1.5.7)
    xz/liblzma (updated to 5.8.1)

    Windows-specific updates:
    Switched to Inno Setup for the installer
    Added code signing for binaries, installer, and uninstaller
    Enabled administrator privileges and NSIS removal support
    Fixed a bug causing incorrect saving of long filenames

    macOS-specific updates:

    Fixed .app naming in .dmg packages
    Improved build reliability and copyright

    Linux-specific updates:

    System drives are now hidden in destination popup
    Wayland support disabled in AppImage

    General UI/UX improvements:

    Fixed OptionsPopup not handling the Esc key
    Improved QML code structure, accessibility, and linting
    Made options popup modal
    Split main UI into component files
    Added a Style singleton and ImCloseButton component

    Internationalization:

    Made "Recommended" OS string translatable
    Made "gigabytes" translatable

    Packaging improvements:

    Custom AppImage build script with Qt detection
    Custom Qt build script with unprivileged mode
    Qt 6.9.0 included
    Dependencies migrated to FetchContent system

    Build system:

    CMake version bumped to 3.22
    Various improvements and hardening applied

    Removed "Show password" checkbox in OS customization settings
    Reverted unneeded changes in long filename size calculation
    Internal refactoring and performance improvements in download and extract operations
    Added support for more archive formats via libarchive
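    The SSH item above ("Applied regex to match SSH public keys") is the kind of change that is clearer with a sketch. The pattern below is an assumption for illustration, not the Imager's actual regex; it matches the common key-type, base64-blob, optional-comment layout of OpenSSH public keys.

    import re

    # Illustrative pattern only -- the Imager's real regex may differ.
    SSH_PUBKEY_RE = re.compile(
        r"^(ssh-(?:rsa|dss|ed25519)|ecdsa-sha2-nistp(?:256|384|521))"  # key type
        r"\s+[A-Za-z0-9+/]+={0,3}"                                     # base64 blob
        r"(?:\s+\S+)?\s*$"                                             # optional comment
    )

    def looks_like_ssh_pubkey(line: str) -> bool:
        """Cheap sanity check before accepting a key during OS customization."""
        return bool(SSH_PUBKEY_RE.match(line.strip()))

    assert looks_like_ssh_pubkey("ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA user@host")
    assert not looks_like_ssh_pubkey("not a key at all")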

    Lastly, it's worth noting that the system requirements have changed since version 1.9.0: macOS users will need version 11 or later; Windows users, Windows 10 or newer; Ubuntu users, version 22.04 or newer; and Debian users, Bookworm or later.

  • How to Effectively Implement Network Segmentation: 5 Key Steps and Use Cases

    Posted on June 3, 2025 · By Tech World Times

    This article walks you through five practical steps to implement network segmentation effectively, backed by real-world use cases that showcase its value in different industries.
    Networks are constantly expanding across offices, cloud services, remote users, and connected devices. With so many moving parts, security gaps can easily form. Once attackers breach a weak point, they often move freely across the network, targeting critical systems and sensitive data.
    That’s where network segmentation comes in. It’s a practical approach to dividing your network into smaller, manageable zones to control access, limit exposure, and isolate threats before they spread. But simply deploying VLANs or access rules isn’t enough. True segmentation needs planning, alignment with your business, and the right mix of technology.
    Step 1: Assess and Map Your Current Network
    Start by figuring out what’s on your network and how it communicates.

    Inventory Devices and Applications: List all systems, including servers, user machines, IoT devices, and cloud assets.
    Map Data Flows: Understand how applications and services interact. Which systems talk to each other? What ports and protocols are used?
    Identify Critical Assets: Highlight the systems that handle sensitive data, such as payment processing, health records, or intellectual property.

    Tip: Network discovery tools or NAC solutions can automate asset inventory and reveal communication paths you might miss.
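    As a toy illustration of that tip, the sketch below does a crude TCP connect sweep of a small subnet. The CIDR and ports are placeholder assumptions; in practice, a NAC platform or dedicated discovery tooling will see far more than a hand-rolled probe.

    import socket
    from ipaddress import ip_network

    SUBNET = "192.168.1.0/28"         # placeholder range -- adjust to your network
    PROBE_PORTS = (22, 80, 443, 445)  # a few common services to probe

    def sweep(subnet: str) -> dict[str, list[int]]:
        """Map each responsive host to the probed ports that accepted a connection."""
        found: dict[str, list[int]] = {}
        for host in ip_network(subnet).hosts():
            for port in PROBE_PORTS:
                try:
                    with socket.create_connection((str(host), port), timeout=0.3):
                        found.setdefault(str(host), []).append(port)
                except OSError:
                    pass  # closed, filtered, or host is down
        return found

    if __name__ == "__main__":
        for host, ports in sweep(SUBNET).items():
            print(f"{host}: {ports}")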
    Step 2: Define Segmentation Goals and Policies
    Once you understand your environment, it’s time to set your objectives.

    Security Objectives: Do you want to reduce lateral movement, isolate sensitive systems, or meet a compliance mandate?
    Business Alignment: Segment by business unit, sensitivity of data, or risk profile, whichever makes the most operational sense.
    Compliance Requirements: PCI DSS, HIPAA, and other standards often require network segmentation.

    Example: A healthcare provider might create separate zones for patient records, lab equipment, guest Wi-Fi, and billing systems.
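    One low-tech way to start Step 2 is to write the zones and permitted flows down as data before touching any hardware. The sketch below encodes the hypothetical healthcare example above; the zone names, ports, and flows are illustrative assumptions, not a recommended baseline.

    # Zones and allowed flows for the hypothetical healthcare example above.
    ZONES = {"patient_records", "lab_equipment", "guest_wifi", "billing"}

    # Default deny: only the flows listed here are permitted.
    ALLOWED_FLOWS = {
        ("billing", "patient_records", 443),        # billing reads records over HTTPS
        ("lab_equipment", "patient_records", 443),  # lab results pushed to records
    }

    def is_allowed(src: str, dst: str, port: int) -> bool:
        """A flow is permitted only if explicitly listed."""
        assert src in ZONES and dst in ZONES, "unknown zone"
        return (src, dst, port) in ALLOWED_FLOWS

    assert is_allowed("billing", "patient_records", 443)
    assert not is_allowed("guest_wifi", "patient_records", 443)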
    Step 3: Choose the Right Segmentation Method
    Segmentation can be done in several ways. The right approach depends on your goals and infrastructure type:
    a. Physical Segmentation
    Use separate routers, switches, and cables. This offers strong isolation but can be costly and harder to scale.
    b. Logical Segmentation (VLANs/Subnets)
    Group devices into virtual segments based on function or department. It’s efficient and easier to manage in most environments.
    c. Microsegmentation
    Control access at the workload or application level using software-defined policies. Ideal for cloud or virtualized environments where you need granular control.
    d. Cloud Segmentation
    In the cloud, segmentation happens using security groups, VPCs, and IAM roles to isolate workloads and define access rules.
    Use a combination: VLANs for broader segmentation and microsegmentation for finer control where it matters.
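    For the cloud flavor, here is a minimal sketch assuming AWS and boto3 (the VPC ID and group names are placeholders): two security groups reference each other, so the app tier accepts traffic only from the web tier rather than from a broad CIDR range.

    import boto3

    ec2 = boto3.client("ec2")
    VPC_ID = "vpc-0123456789abcdef0"  # placeholder VPC

    web_sg = ec2.create_security_group(
        GroupName="web-tier", Description="Public-facing web servers", VpcId=VPC_ID)
    app_sg = ec2.create_security_group(
        GroupName="app-tier", Description="Internal application servers", VpcId=VPC_ID)

    # App tier accepts port 8080 only from the web tier's security group --
    # a group-to-group reference instead of a broad CIDR "allow all" rule.
    ec2.authorize_security_group_ingress(
        GroupId=app_sg["GroupId"],
        IpPermissions=[{
            "IpProtocol": "tcp", "FromPort": 8080, "ToPort": 8080,
            "UserIdGroupPairs": [{"GroupId": web_sg["GroupId"]}],
        }],
    )

    Group-to-group references like this tie the rule set to membership rather than to addresses that drift as instances come and go.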
    Step 4: Implement Controls and Monitor Traffic
    Time to put those policies into action.

    Firewalls and ACLs: Use access controls to manage what can move between zones. Block anything that isn’t explicitly allowed.
    Zero Trust Principles: Never assume trust between segments. Always validate identity and permissions.
    Monitoring and Alerts: Use your SIEM, flow monitoring tools, or NDR platform to watch for unusual traffic or policy violations.

    Common Pitfall: Avoid “allow all” rules between segments; it defeats the purpose.
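    To make the default-deny stance concrete, here is a minimal first-match rule evaluator; the zone names, ports, and rules are placeholder assumptions.

    from typing import NamedTuple

    class Rule(NamedTuple):
        src: str    # source zone, "*" for any
        dst: str    # destination zone, "*" for any
        port: int   # -1 for any port
        allow: bool

    # First match wins; anything that falls through is denied.
    RULES = [
        Rule("web", "app", 8080, True),
        Rule("app", "db", 5432, True),
        Rule("guest_wifi", "*", -1, False),  # guests reach nothing internal
    ]

    def decide(src: str, dst: str, port: int) -> bool:
        for r in RULES:
            if r.src in (src, "*") and r.dst in (dst, "*") and r.port in (port, -1):
                return r.allow
        return False  # default deny, never "allow all"

    assert decide("web", "app", 8080)
    assert not decide("guest_wifi", "db", 5432)
    assert not decide("app", "web", 22)  # unlisted flow falls through to deny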
    Step 5: Test, Validate, and Fine-Tune
    Even a well-designed segmentation plan can have gaps. Regular validation helps ensure it works as expected.

    Penetration Testing: Simulate attacks to check if boundaries hold.
    Review Policies: Business needs change; your segmentation strategy should too.
    Performance Monitoring: Make sure segmentation doesn’t impact legitimate operations or application performance.

    Automation tools can help simplify this process and ensure consistency.
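    In that spirit, expected-connectivity checks can be encoded as data and probed automatically. The sketch below is a hedged example run from a host in the guest zone; the addresses, ports, and expectations are placeholders to adapt to your own policy.

    import socket

    # (target_host, port, should_connect) -- placeholder expectations drawn
    # from the segmentation policy, probed from a host in the guest zone.
    EXPECTATIONS = [
        ("10.10.20.5", 443, False),  # guest must NOT reach patient records
        ("10.10.40.9", 80, True),    # guest may reach the captive portal
    ]

    def probe(host: str, port: int, timeout: float = 1.0) -> bool:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def validate() -> bool:
        ok = True
        for host, port, expected in EXPECTATIONS:
            actual = probe(host, port)
            status = "OK" if actual == expected else "VIOLATION"
            print(f"{status}: {host}:{port} reachable={actual}, expected={expected}")
            ok &= actual == expected
        return ok

    if __name__ == "__main__":
        raise SystemExit(0 if validate() else 1)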
    Real-World Use Cases of Network Segmentation
    1. Healthcare – Protecting Patient Data and Devices
    Hospitals use segmentation to keep medical devices, patient records, and visitor Wi-Fi in separate zones. This prevents an infected guest device from interfering with critical systems.
    Result: Reduced attack surface and HIPAA compliance.
    2. Manufacturing – Isolating Industrial Systems
    Production environments often have fragile legacy systems. Segmenting OT (Operational Technology) from IT ensures ransomware or malware doesn’t disrupt manufacturing lines.
    Result: More uptime and fewer operational risks.
    3. Finance – Securing Payment Systems
    Banks and payment providers use segmentation to isolate cardholder data environments (CDE) from the rest of the corporate network. This helps meet PCI DSS and keeps sensitive data protected.
    Result: Easier audits and stronger data security.
    4. Education – Managing High-Volume BYOD Traffic
    Universities segment student Wi-Fi, research labs, and administrative systems. This keeps a vulnerable student device from spreading malware to faculty or internal systems.
    Result: Safer environment for open-access campuses.
    5. Cloud – Segmenting Apps and Microservices
    In the cloud, developers use security groups, VPCs, and IAM roles to isolate applications and limit who can access what. This reduces risk if one workload is compromised.
    Result: Controlled access and better cloud hygiene.
    Common Challenges

    Legacy Tech: Older devices may not support modern segmentation.
    Lack of Visibility: Hard to secure what you don’t know exists.
    Operational Hiccups: Poorly planned segmentation can block business workflows.
    Policy Complexity: Keeping access rules up to date across dynamic environments takes effort.

    Best Practices

    Start with High-Risk Areas: Prioritize zones handling sensitive data or vulnerable systems.
    Keep Documentation Updated: Maintain clear diagrams and policy records.
    Align Teams: Get buy-in from IT, security, and business units.
    Automate Where You Can: Especially for monitoring and policy enforcement.
    Review Regularly: Networks evolve; so should your segmentation.

    Final Thoughts
    Segmentation isn’t about creating walls; it’s about building smart pathways. Done right, it helps you take control of your network, reduce risk, and respond faster when something goes wrong.
    It’s a foundational layer of cybersecurity that pays off in resilience, compliance, and peace of mind.
    About the Author:
    Prajwal Gowda is a cybersecurity expert with 10+ years of experience. He has built businesses and was a Business Unit Head for Compliance and Testing services. Currently, he is the Chief Technology Officer at Ampcus Cyber, leading the company’s technology strategy and innovation efforts. He has also been involved in the Payment Card Industry, Software Security Framework, ISO 27001 Controls Gap Analysis, ISMS, Risk Analysis, OCTAVE, ISO 27005, Information Security Audit and Network Security. Prajwal is a Master Trainer who has conducted 100+ cybersecurity training sessions worldwide.
    Tech World Times (TWT) is a global collective focusing on the latest tech news and trends in blockchain, Fintech, Development & Testing, AI and Startups. For guest posts, contact techworldtimes@gmail.com.