• Looking Back at Two Classics: ILM Deploys the Fleet in ‘Star Trek: First Contact’ and ‘Rogue One: A Star Wars Story’

    Guided by visual effects supervisor John Knoll, ILM embraced continually evolving methodologies to craft breathtaking visual effects for the iconic space battles in First Contact and Rogue One.
    By Jay Stobie
    Visual effects supervisor John Knoll (right) confers with modelmakers Kim Smith and John Goodson with the miniature of the U.S.S. Enterprise-E during production of Star Trek: First Contact (Credit: ILM).
    Bolstered by visual effects from Industrial Light & Magic, Star Trek: First Contact (1996) and Rogue One: A Star Wars Story (2016) propelled their respective franchises to new heights. While Star Trek Generations (1994) welcomed Captain Jean-Luc Picard’s (Patrick Stewart) crew to the big screen, First Contact stood as the first Star Trek feature that did not focus on its original captain, the legendary James T. Kirk (William Shatner). Similarly, though Rogue One immediately preceded the events of Star Wars: A New Hope (1977), it was set apart from the episodic Star Wars films and launched an era of storytelling outside of the main Skywalker saga that has gone on to include Solo: A Star Wars Story (2018), The Mandalorian (2019-23), Andor (2022-25), Ahsoka (2023), The Acolyte (2024), and more.
    The two films also shared a key ILM contributor, John Knoll, who served as visual effects supervisor on both projects, as well as an executive producer on Rogue One. Currently ILM’s executive creative director and senior visual effects supervisor, Knoll – who also conceived the initial framework for Rogue One’s story – guided ILM as it brought its talents to bear on these sci-fi and fantasy epics. The work involved crafting two spectacular starship-packed space clashes – First Contact’s Battle of Sector 001 and Rogue One’s Battle of Scarif. Although these iconic installments were released roughly two decades apart, they represent a captivating case study of how ILM’s approach to visual effects has evolved over time. With this in mind, let’s examine the films’ unforgettable space battles through the lens of fascinating in-universe parallels and the ILM-produced fleets that face off near Earth and Scarif.
    A final frame from the Battle of Scarif in Rogue One: A Star Wars Story (Credit: ILM & Lucasfilm).
    A Context for Conflict
    In First Contact, the United Federation of Planets – a 200-year-old interstellar government consisting of more than 150 member worlds – braces itself for an invasion by the Borg – an overwhelmingly powerful collective composed of cybernetic beings who devastate entire planets by assimilating their biological populations and technological innovations. The Borg only send a single vessel, a massive cube containing thousands of hive-minded drones and their queen, pushing the Federation’s Starfleet defenders to Earth’s doorstep. Conversely, in Rogue One, the Rebel Alliance – a fledgling coalition of freedom fighters – seeks to undermine and overthrow the stalwart Galactic Empire – a totalitarian regime preparing to tighten its grip on the galaxy by revealing a horrifying superweapon. A rebel team infiltrates a top-secret vault on Scarif in a bid to steal plans to that battle station, the dreaded Death Star, with hopes of exploiting a vulnerability in its design.
    On the surface, the situations could not seem to be more disparate, particularly in terms of the Federation’s well-established prestige and the Rebel Alliance’s haphazardly organized factions. Yet, upon closer inspection, the spaceborne conflicts at Earth and Scarif are linked by a vital commonality. The threat posed by the Borg is well-known to the Federation, but the sudden intrusion upon its space takes its defenses by surprise. Starfleet assembles any vessel within range – including antiquated Oberth-class science ships – to intercept the Borg cube in the Typhon Sector, only to be forced back to Earth on the edge of defeat. The unsanctioned mission to Scarif by Jyn Erso (Felicity Jones) and Cassian Andor (Diego Luna) and the sudden need to take down the planet’s shield gate propel the Rebel Alliance fleet to rush to the rescue with everything from its flagship Profundity to GR-75 medium transports. Whether Federation or Rebel Alliance, these fleets gather in last-ditch efforts to oppose enemies who would embrace their eradication – the Battles of Sector 001 and Scarif are fights for survival.
    From Physical to Digital
    By the time Jonathan Frakes was selected to direct First Contact, Star Trek’s reliance on constructing traditional physical models (many of which were built by ILM) for its features was gradually giving way to innovative computer graphics (CG) models, resulting in the film’s use of both techniques. “If one of the ships was to be seen full-screen and at length,” associate visual effects supervisor George Murphy told Cinefex’s Kevin H. Martin, “we knew it would be done as a stage model. Ships that would be doing a lot of elaborate maneuvers in space battle scenes would be created digitally.” In fact, physical and CG versions of the U.S.S. Enterprise-E appear in the film, with the latter being harnessed in shots involving the vessel’s entry into a temporal vortex at the conclusion of the Battle of Sector 001.
    Despite the technological leaps ILM pioneered in the decades between First Contact and Rogue One, the studio still considered filming physical miniatures for certain ship-related shots in the latter film. The feature’s fleets were ultimately created digitally to allow for changes throughout post-production. “If it’s a photographed miniature element, it’s not possible to go back and make adjustments. So it’s the additional flexibility that comes with the computer graphics models that’s very attractive to many people,” John Knoll relayed to writer Jon Witmer at American Cinematographer’s TheASC.com.
    However, Knoll aimed to develop computer graphics that retained the same high-quality details as their physical counterparts, leading ILM to employ a modern approach to a time-honored modelmaking tactic. “I also wanted to emulate the kit-bashing aesthetic that had been part of Star Wars from the very beginning, where a lot of mechanical detail had been added onto the ships by using little pieces from plastic model kits,” explained Knoll in his chat with TheASC.com. For Rogue One, ILM replicated the process by obtaining such kits, scanning their parts, building a computer graphics library, and applying the CG parts to digitally modeled ships. “I’m very happy to say it was super-successful,” concluded Knoll. “I think a lot of our digital models look like they are motion-control models.”
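    The scanned-parts workflow is easy to picture in code. Below is a minimal, hypothetical Python sketch of the core idea – draw parts from a digital kit library and scatter them across a hull panel – with every name and dimension invented for illustration; it is not ILM's pipeline.

```python
import random
from dataclasses import dataclass

# Digital kit-bashing, sketched: pick scanned kit parts from a library and
# scatter them over a hull panel. All names and sizes here are invented.
@dataclass
class KitPart:
    name: str      # e.g. a scanned tank-tread or engine-block piece
    width: float   # footprint on the panel, in scene units
    height: float

def greeble_panel(library, panel_w, panel_h, count, seed=1977):
    """Return (part name, x, y, rotation) placements across a flat panel."""
    rng = random.Random(seed)  # seeded so a layout can be reproduced
    placements = []
    for _ in range(count):
        part = rng.choice(library)
        x = rng.uniform(0.0, panel_w - part.width)   # keep the part on the panel
        y = rng.uniform(0.0, panel_h - part.height)
        rot = rng.choice([0, 90, 180, 270])          # axis-aligned, kit-style
        placements.append((part.name, round(x, 2), round(y, 2), rot))
    return placements

library = [KitPart("tread_section", 0.4, 1.2),
           KitPart("engine_block", 0.8, 0.8),
           KitPart("piston_cluster", 0.3, 0.5)]
for placement in greeble_panel(library, panel_w=10.0, panel_h=4.0, count=6):
    print(placement)
```

    A production tool would go further, instancing the actual scanned meshes onto curved hull surfaces, but the principle – a reusable library of parts applied combinatorially – is the same one modelmakers used with plastic kits.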
    John Knoll (second from left) confers with Kim Smith and John Goodson with the miniature of the U.S.S. Enterprise-E during production of Star Trek: First Contact (Credit: ILM).
    Legendary Lineages
    In First Contact, Captain Picard commanded a brand-new vessel, the Sovereign-class U.S.S. Enterprise-E, continuing the celebrated starship’s legacy in terms of its famous name and design aesthetic. Designed by John Eaves and developed into blueprints by Rick Sternbach, the Enterprise-E was built into a 10-foot physical model by ILM model project supervisor John Goodson and his shop’s talented team. ILM infused the ship with extraordinary detail, including viewports equipped with backlit set images from the craft’s predecessor, the U.S.S. Enterprise-D. For the vessel’s larger windows, namely those associated with the observation lounge and arboretum, ILM took a painstakingly practical approach to match the interiors shown with the real-world set pieces. “We filled that area of the model with tiny, micro-scale furniture,” Goodson informed Cinefex, “including tables and chairs.”
    Rogue One’s rebel team initially traversed the galaxy in a U-wing transport/gunship, which, much like the Enterprise-E, was a unique vessel that nonetheless channeled a certain degree of inspiration from a classic design. Lucasfilm’s Doug Chiang, a co-production designer for Rogue One, referred to the U-wing as the film’s “Huey helicopter version of an X-wing” in the Designing Rogue One bonus featurette on Disney+ before revealing that, “Towards the end of the design cycle, we actually decided that maybe we should put in more X-wing features. And so we took the X-wing engines and literally mounted them onto the configuration that we had going.” Modeled by ILM digital artist Colie Wertz, the U-wing’s final computer graphics design subtly incorporated these X-wing influences to give the transport a distinctive feel without making the craft seem out of place within the rebel fleet.
    While ILM’s work on the Enterprise-E’s viewports offered a compelling view toward the ship’s interior, a breakthrough LED setup for Rogue One permitted ILM to obtain realistic lighting on actors as they looked out from their ships and into the space around them. “All of our major spaceship cockpit scenes were done that way, with the gimbal in this giant horseshoe of LED panels we got from [equipment vendor] VER, and we prepared graphics that went on the screens,” John Knoll shared with American Cinematographer’s Benjamin B and Jon D. Witmer. Furthermore, in Disney+’s Rogue One: Digital Storytelling bonus featurette, visual effects producer Janet Lewin noted, “For the actors, I think, in the space battle cockpits, for them to be able to see what was happening in the battle brought a higher level of accuracy to their performance.”
    The U.S.S. Enterprise-E in Star Trek: First Contact (Credit: Paramount).
    Familiar Foes
    To transport First Contact’s Borg invaders, John Goodson’s team at ILM resurrected the Borg cube design previously seen in Star Trek: The Next Generation (1987) and Star Trek: Deep Space Nine (1993), creating a nearly three-foot physical model to replace the one from the series. Art consultant and ILM veteran Bill George proposed that the cube’s seemingly straightforward layout be augmented with a complex network of photo-etched brass, a suggestion which produced a jagged surface and offered a visual that was both intricate and menacing. ILM also developed a two-foot motion-control model for a Borg sphere, a brand-new auxiliary vessel that emerged from the cube. “We vacuformed about 15 different patterns that conformed to this spherical curve and covered those with a lot of molded and cast pieces. Then we added tons of acid-etched brass over it, just like we had on the cube,” Goodson outlined to Cinefex’s Kevin H. Martin.
    As for Rogue One’s villainous fleet, reproducing the original trilogy’s Death Star and Imperial Star Destroyers centered upon translating physical models into digital assets. Although ILM no longer possessed A New Hope’s three-foot Death Star shooting model, John Knoll recreated the station’s surface paneling by gathering archival images, and as he spelled out to writer Joe Fordham in Cinefex, “I pieced all the images together. I unwrapped them into texture space and projected them onto a sphere with a trench. By doing that with enough pictures, I got pretty complete coverage of the original model, and that became a template upon which to redraw very high-resolution texture maps. Every panel, every vertical striped line, I matched from a photograph. It was as accurate as it was possible to be as a reproduction of the original model.”
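    What Knoll describes is, in effect, inverse projection mapping: for each texel of the sphere’s texture, find the archival photo pixel that saw that point, then repeat across enough photos to fill the map. Here is a hedged NumPy sketch of that single-photo step for a plain sphere (the production target also carried the trench geometry); the pinhole camera model and its parameters are assumptions, not values recovered from the original stage photography.

```python
import numpy as np

def project_photo_onto_sphere(photo, cam_pos, K, R, tex_w=2048, tex_h=1024, radius=1.0):
    """Back-project one photo onto an equirectangular sphere texture.

    photo: HxWx3 array; cam_pos: (3,) camera position in world space;
    K: 3x3 pinhole intrinsics; R: 3x3 world-to-camera rotation.
    Texels the photo cannot see stay black; accumulating many photos
    builds up complete coverage of the model's surface.
    """
    tex = np.zeros((tex_h, tex_w, 3), dtype=photo.dtype)
    v, u = np.mgrid[0:tex_h, 0:tex_w]
    lon = (u / tex_w) * 2.0 * np.pi - np.pi        # texel -> longitude
    lat = np.pi / 2.0 - (v / tex_h) * np.pi        # texel -> latitude
    pts = radius * np.stack([np.cos(lat) * np.cos(lon),
                             np.cos(lat) * np.sin(lon),
                             np.sin(lat)], axis=-1)        # texel -> sphere point
    facing = ((cam_pos - pts) * pts).sum(-1) > 0   # surface normal faces camera
    cam = (pts - cam_pos) @ R.T                    # world -> camera coordinates
    pix = cam @ K.T                                # camera -> homogeneous pixels
    with np.errstate(divide="ignore", invalid="ignore"):
        px = pix[..., 0] / pix[..., 2]
        py = pix[..., 1] / pix[..., 2]
    h, w = photo.shape[:2]
    ok = facing & (cam[..., 2] > 0) & (px >= 0) & (px < w) & (py >= 0) & (py < h)
    tex[ok] = photo[py[ok].astype(int), px[ok].astype(int)]
    return tex
```

    Running this over every archival image and keeping, per texel, the sample whose view was most head-on would approximate the “pretty complete coverage” Knoll assembled before repainting high-resolution texture maps over the result.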
    Knoll’s investigative eye continued to pay dividends when analyzing the three-foot and eight-foot Star Destroyer motion-control models, which had been built for A New Hope and Star Wars: The Empire Strikes Back (1980), respectively. “Our general mantra was, ‘Match your memory of it more than the reality,’ because sometimes you go look at the actual prop in the archive building or you look back at the actual shot from the movie, and you go, ‘Oh, I remember it being a little better than that,’” Knoll conveyed to TheASC.com. This philosophy motivated ILM to combine elements from those two physical models into a single digital design. “Generally, we copied the three-footer for details like the superstructure on the top of the bridge, but then we copied the internal lighting plan from the eight-footer,” Knoll explained. “And then the upper surface of the three-footer was relatively undetailed because there were no shots that saw it closely, so we took a lot of the high-detail upper surface from the eight-footer. So it’s this amalgam of the two models, but the goal was to try to make it look like you remember it from A New Hope.”
    A final frame from Rogue One: A Star Wars Story (Credit: ILM & Lucasfilm).
    Forming Up the Fleets
    In addition to the U.S.S. Enterprise-E, the Battle of Sector 001 debuted numerous vessels representing four new Starfleet ship classes – the Akira, Steamrunner, Saber, and Norway – all designed by ILM visual effects art director Alex Jaeger. “Since we figured a lot of the background action in the space battle would be done with computer graphics ships that needed to be built from scratch anyway, I realized that there was no reason not to do some new designs,” John Knoll told American Cinematographer writer Ron Magid. Used in previous Star Trek projects, older physical models for the Oberth and Nebula classes were mixed into the fleet for good measure, though the vast majority of the armada originated as computer graphics.
    Over at Scarif, ILM portrayed the Rebel Alliance forces with computer graphics models of fresh designs (the MC75 cruiser Profundity and U-wings), live-action versions of Star Wars Rebels’ VCX-100 light freighter Ghost and Hammerhead corvettes, and Star Wars staples (Nebulon-B frigates, X-wings, Y-wings, and more). These ships face off against two Imperial Star Destroyers and squadrons of TIE fighters, and – upon their late arrival to the battle – Darth Vader’s Star Destroyer and the Death Star. The Tantive IV, a CR90 corvette more popularly referred to as a blockade runner, made its own special cameo at the tail end of the fight. As Princess Leia Organa’s (Carrie Fisher and Ingvild Deila) personal ship, the Tantive IV received the Death Star plans and fled the scene, destined to be captured by Vader’s Star Destroyer at the beginning of A New Hope. And, while we’re on the subject of intricate starship maneuvers and space-based choreography…
    Although the First Contact team could plan visual effects shots with animated storyboards, ILM supplied Gareth Edwards with a next-level virtual viewfinder that allowed the director to select his shots by immersing himself among Rogue One’s ships in real time. “What we wanted to do is give Gareth the opportunity to shoot his space battles and other all-digital scenes the same way he shoots his live-action. Then he could go in with this sort of virtual viewfinder and view the space battle going on, and figure out what the best angle was to shoot those ships from,” senior animation supervisor Hal Hickel described in the Rogue One: Digital Storytelling featurette. Hickel divulged that the sequence involving the dish array docking with the Death Star was an example of the “spontaneous discovery of great angles,” as the scene was never storyboarded or previsualized.
    Visual effects supervisor John Knoll with director Gareth Edwards during production of Rogue One: A Star Wars Story (Credit: ILM & Lucasfilm).
    Tough Little Ships
    The Federation and Rebel Alliance each deployed “tough little ships” (an endearing description Commander William T. Riker [Jonathan Frakes] bestowed upon the U.S.S. Defiant in First Contact) in their respective conflicts, namely the U.S.S. Defiant from Deep Space Nine and the Tantive IV from A New Hope. VisionArt had already built a CG Defiant for the Deep Space Nine series, but ILM upgraded the model with images gathered from the ship’s three-foot physical model. A similar tactic was taken to bring the Tantive IV into the digital realm for Rogue One, as sketched below. “This was the Blockade Runner. This was the most accurate 1:1 reproduction we could possibly have made,” model supervisor Russell Paul declared to Cinefex’s Joe Fordham. “We did an extensive photo reference shoot and photogrammetry re-creation of the miniature. From there, we built it out as accurately as possible.” Speaking of sturdy ships, if you look very closely, you can spot a model of the Millennium Falcon flashing across the background as the U.S.S. Defiant makes an attack run on the Borg cube at the Battle of Sector 001!
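    Photogrammetry of the kind Russell Paul describes rests on a simple geometric core: the same surface point, identified in two or more calibrated photos, can be triangulated back into 3D. Here is a toy two-view example with OpenCV; every camera parameter and point below is invented for illustration, not taken from the Tantive IV reference shoot.

```python
import numpy as np
import cv2  # opencv-python

# Two-view triangulation, the geometric core of photogrammetry. A real
# reconstruction matches thousands of features across hundreds of photos.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])                      # shared pinhole intrinsics
R1, t1 = np.eye(3), np.zeros((3, 1))                 # camera 1 at the origin
R2 = cv2.Rodrigues(np.array([0.0, -0.2, 0.0]))[0]    # camera 2, yawed 0.2 rad
t2 = np.array([[-1.0], [0.0], [0.0]])                # ...and offset to the side
P1, P2 = K @ np.hstack([R1, t1]), K @ np.hstack([R2, t2])

X_true = np.array([[0.2], [0.1], [5.0], [1.0]])      # known 3D point (homogeneous)
x1 = P1 @ X_true; x1 = x1[:2] / x1[2]                # its pixel in photo 1
x2 = P2 @ X_true; x2 = x2[:2] / x2[2]                # its pixel in photo 2

X_h = cv2.triangulatePoints(P1, P2, x1, x2)          # homogeneous 4x1 result
print((X_h[:3] / X_h[3]).ravel())                    # ~ [0.2, 0.1, 5.0]
```

    Solving this jointly for many points and many cameras (bundle adjustment) is what turns an extensive photo survey of a miniature into an accurate digital double.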
    Exploration and Hope
    The in-universe ramifications that materialize from the Battles of Sector 001 and Scarif are monumental. The destruction of the Borg cube compels the Borg Queen to travel back in time in an attempt to vanquish Earth before the Federation can even be formed, but Captain Picard and the Enterprise-E foil the plot and end up helping their 21st century ancestors make “first contact” with another species, the logic-revering Vulcans. The post-Scarif benefits take longer to play out for the Rebel Alliance, but the theft of the Death Star plans eventually leads to the superweapon’s destruction. The Galactic Civil War is far from over, but Scarif is a significant step in the Alliance’s effort to overthrow the Empire.
    The visual effects ILM provided for First Contact and Rogue One contributed significantly to the critical and commercial acclaim both pictures enjoyed, a victory reflecting the relentless dedication, tireless work ethic, and innovative spirit embodied by visual effects supervisor John Knoll and ILM’s entire staff. While being interviewed for The Making of Star Trek: First Contact, actor Patrick Stewart praised ILM’s invaluable influence, emphasizing, “ILM was with us, on this movie, almost every day on set. There is so much that they are involved in.” And, regardless of your personal preferences – phasers or lasers, photon torpedoes or proton torpedoes, warp speed or hyperspace – perhaps Industrial Light & Magic’s ability to infuse excitement into both franchises demonstrates that Star Trek and Star Wars encompass themes that are not competitive, but compatible. After all, what goes together better than exploration and hope?

    Jay Stobie (he/him) is a writer, author, and consultant who has contributed articles to ILM.com, Skysound.com, Star Wars Insider, StarWars.com, Star Trek Explorer, Star Trek Magazine, and StarTrek.com. Jay loves sci-fi, fantasy, and film, and you can learn more about him by visiting JayStobie.com or finding him on Twitter, Instagram, and other social media platforms at @StobiesGalaxy.
  • Racing Yacht CTO Sails to Success

    By John Edwards, Technology Journalist & Author | June 5, 2025 | 4 Min Read
    SailGP Australia, USA, and Great Britain racing on San Francisco Bay, California (Dannaphotos via Alamy Stock)
    Warren Jones is CTO at SailGP, the organizer of what he describes as the world's most exciting race on water. The event features high-tech F50 boats that speed across the waves at 100 kilometers per hour. Working in cooperation with Oracle, Jones focuses on innovative solutions for remote broadcast production, data management and distribution, and a newly introduced fan engagement platform. He also leads the team that won an IBC Innovation Award for its ambitious and ground-breaking remote production strategy.
    Among the races Jones organizes is the Rolex SailGP Championship, a global competition featuring national teams battling each other in identical high-tech, high-speed 50-foot foiling catamarans at celebrated venues around the world. The event attracts the sport's top athletes, with national pride, personal glory, and bonus prize money of million at stake. Jones also supports event and office infrastructures in London and New York, and at each of the global grand prix events over the course of the season. Prior to joining SailGP, he was IT leader at the America's Cup Event Authority and Oracle Racing.
    In an online interview, Jones discusses the challenges he faces in bringing reliable data services to event vessels, as well as onshore officials and fans.
    Warren Jones
    What's the biggest challenge you've faced during your tenure?
    One of the biggest challenges I faced was ensuring real-time data transmission from our high-performance F50 foiling catamarans to teams, broadcasters, and fans worldwide. SailGP relies heavily on technology to deliver high-speed racing insights, but ensuring seamless connectivity across different venues with variable conditions was a significant hurdle.
    What caused the problem?
    The challenge arose due to a combination of factors. The high speeds and dynamic nature of the boats made data capture and transmission difficult. Varying network infrastructure at different race locations created connectivity issues. The need to process and visualize massive amounts of data in real time placed immense pressure on our systems.
    How did you resolve the problem?
    We tackled the issue by working with T-Mobile and Ericsson to build a robust and adaptive telemetry system capable of transmitting data with minimal latency over 5G, and by deploying custom-built race management software that could process and distribute data efficiently. Working closely with our global partner Oracle, we optimized Cloud Compute with the Oracle Cloud.
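    As an aside from the interview: the low-latency requirement Jones describes is easy to sketch. Below is a minimal, hypothetical Python publisher in that spirit – small, sequence-numbered datagrams sent at a fixed rate so one lost packet never stalls the feed. The endpoint, field names, and rate are all invented; SailGP's actual T-Mobile/Ericsson 5G and Oracle Cloud pipeline is far more elaborate.

```python
import json
import socket
import time

# Toy boat-to-shore telemetry publisher: small UDP datagrams at a fixed
# rate. Everything here (endpoint, fields, rates) is illustrative only.
TELEMETRY_ENDPOINT = ("127.0.0.1", 9870)  # hypothetical shore-side collector

def publish_telemetry(samples, hz=10):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for seq, sample in enumerate(samples):
        payload = json.dumps({
            "seq": seq,            # lets the collector detect dropped packets
            "t": time.time(),      # send time, for end-to-end latency checks
            **sample,
        }).encode()
        sock.sendto(payload, TELEMETRY_ENDPOINT)
        time.sleep(1.0 / hz)       # fixed-rate send loop

publish_telemetry([
    {"boat": "AUS", "speed_kph": 98.4, "foil_height_m": 1.2},
    {"boat": "GBR", "speed_kph": 96.1, "foil_height_m": 1.1},
])
```

    A shore-side collector could use the seq and t fields to measure packet loss and end-to-end latency before fanning the data out to race officials and broadcast graphics.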
The support from SailGP's leadership was also crucial in securing the necessary resources. Did anyone let you down? Rather than seeing it as being let down, I'd say there were unexpected challenges with some technology providers who underestimated the complexity of what we needed. However, we adapted by seeking alternative solutions and working collaboratively to overcome the hurdles. What advice do you have for other leaders who may face a similar challenge? Related:Embrace adaptability. No matter how well you plan, unforeseen challenges will arise, so build flexible solutions. Leverage partnerships. Collaborate with the best in the industry to ensure you have the expertise needed. Stay ahead of technology trends. The landscape is constantly evolving; being proactive rather than reactive is key. Prioritize resilience. Build redundancy into critical systems to ensure continuity even in the face of disruptions. Is there anything else you would like to add? SailGP is as much a technology company as it is a sports league. The intersection of innovation and competition drives us forward and solving challenges like these is what makes this role both demanding and incredibly rewarding. About the AuthorJohn EdwardsTechnology Journalist & AuthorJohn Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.See more from John EdwardsWebinarsMore WebinarsReportsMore ReportsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also Like
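    Jones doesn't detail SailGP's internal protocols, so the following is only a minimal sketch of the general pattern he describes: boat-side sensors serialize small telemetry samples and push them toward a shore-side collector with as little latency overhead as possible. Every name, field, and the choice of UDP transport here is an illustrative assumption, not SailGP's actual system.

```python
import json
import socket
import time
from dataclasses import dataclass, asdict

@dataclass
class BoatTelemetry:
    """One telemetry sample; fields are hypothetical examples."""
    boat_id: str
    timestamp: float     # seconds since epoch
    speed_kmh: float
    heading_deg: float
    foil_height_m: float

def send_telemetry(sample: BoatTelemetry, host: str, port: int) -> None:
    """Serialize one sample to JSON and fire it over UDP.

    UDP suits high-rate telemetry: a lost packet is cheaper than the
    latency of a retransmission, and the next sample arrives within
    milliseconds anyway.
    """
    payload = json.dumps(asdict(sample)).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (host, port))

sample = BoatTelemetry("AUS", time.time(), 98.6, 212.0, 1.2)
send_telemetry(sample, "127.0.0.1", 9999)  # placeholder collector address
```

    A production pipeline would add sequencing, compression, and a 5G/edge ingest tier in front of the cloud, but the shape of the data path -- sample, serialize, ship, aggregate -- stays the same.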
  • Catch This Year's Strawberry Moon Lighting Up the Sky on June 11

    Catch This Year’s Strawberry Moon Lighting Up the Sky on June 11
    The last full moon before summer kicks off is one of the lowest of the year in the Northern Hemisphere

    A full moon on June 28, 2018, as seen from Manchester, England. The reddish glow is likely due to the 2018 Saddleworth Moor wildfires. 
    Benjamin Shaw, CC BY-SA 4.0 via Wikimedia Commons

    Summer will officially begin with this year's solstice on June 20. And on Wednesday, June 11, comes a "strawberry moon," the last full moon of the Northern Hemisphere's spring. It will be at its brightest at 3:44 a.m. Eastern time in the United States.
    Its name, however, isn't related to the moon's color. The Old Farmer’s Almanac, which has charted everything from celestial bodies to the best time to plant vegetables since 1792, popularized useful nicknames for every month's full moon. According to the almanac, the name strawberry moon has been used by Native peoples, such as the Algonquian, Ojibwe, Dakota and Lakota, to mark the harvest time of “June-bearing” strawberries. "Mead moon" or "honey moon" are old European nicknames for June's full moon, according to National Geographic, and may have similarly been inspired by honey harvesting.
    In the Northern Hemisphere, the strawberry moon is one of the lowest full moons of the year. That's because June's full moon usually takes place closest to the summer solstice, when Earth's axial tilt angles the Northern Hemisphere toward the sun and the sun appears at its peak height in our skies. A full moon sits opposite the sun with respect to Earth, so when the sun is at its highest, the moon is at its lowest, as reported by Live Science's Jamie Carter. Earth will reach its aphelion—or the farthest point in its elliptical orbit around the sun—on July 3, making the strawberry moon one of the farthest full moons from our star, per Live Science.
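    To put rough numbers on this (a back-of-the-envelope sketch, not from the article): a body's maximum altitude above the horizon is about 90 degrees minus the observer's latitude plus the body's declination, and a full moon's declination sits roughly opposite the sun's. The latitude below is an assumed example, and the Moon's ~5 degree orbital inclination is ignored.

```python
# Rough peak altitudes near the June solstice, ignoring the Moon's
# ~5 degree orbital inclination, parallax, and refraction.
# Maximum altitude at upper transit: 90 - latitude + declination.
latitude = 40.0                       # observer latitude, degrees N (assumed)
sun_declination = 23.4                # Sun's declination at the June solstice
moon_declination = -sun_declination   # full moon sits opposite the sun

sun_altitude = 90.0 - latitude + sun_declination
moon_altitude = 90.0 - latitude + moon_declination

print(f"Noon sun peaks near {sun_altitude:.1f} degrees")    # ~73.4
print(f"Full moon peaks near {moon_altitude:.1f} degrees")  # ~26.6
```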

    Earth's summer and winter solstices
    NASA

    Because of its position in the sky, June's full moon may live up to its nickname by appearing more colorful. According to NASA, when the moon hangs low, it "tends to have a more yellow or orange hue" than when it's high because its light has to travel through a thicker portion of the atmosphere to reach our view. This means a greater number of long red wavelengths survive the journey than short blue ones. Pollution, dust or wildfires can also make the moon appear more red.
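    The reddening follows from Rayleigh scattering, whose strength scales with the inverse fourth power of wavelength; the quick calculation below (my own illustration, with representative wavelengths as assumptions) shows how much more readily blue light is scattered out of the line of sight than red.

```python
# Rayleigh scattering strength scales as 1 / wavelength**4, so shorter
# (bluer) wavelengths are scattered out of the line of sight far more
# than longer (redder) ones -- and the effect compounds over the longer
# atmospheric path to a moon sitting low on the horizon.
blue_nm = 450.0  # representative blue wavelength (assumed)
red_nm = 650.0   # representative red wavelength (assumed)

ratio = (red_nm / blue_nm) ** 4
print(f"Blue light is scattered ~{ratio:.1f}x more strongly than red")  # ~4.4x
```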
    The strawberry moon is distinct from the blood moon, however, notes Fox61's Krys Shahin. Blood moons—like the one that graced our skies in March—occur during total lunar eclipses, when the sun, the Earth and the moon line up in a way that makes the Earth block most of the sun's light from reaching the moon. The light that manages to seep around our planet and still reach the moon has to filter through our atmosphere, meaning mostly red wavelengths make it through once again.
    Though the strawberry moon will reach its peak early Wednesday morning, the best time to see it will be when it rises over the horizon at dusk on Tuesday evening, per Live Science. As reported by Discover Magazine's Stephanie Edwards, Mars will also be visible on June 11.

  • Endangered classic Mac plastic color returns as 3D-printer filament

    The color of nostalgia

    Endangered classic Mac plastic color returns as 3D-printer filament

    Mac fan paid $900 to color-match iconic Apple beige-gray "Platinum" plastic for everyone.

    Benj Edwards – Jun 4, 2025 6:13 pm

    The Mac SE, released in 1987, was one of many classic Macs to use the "Platinum" color scheme.
    Credit: Apple / Polar Filament


    On Tuesday, classic computer collector Joe Strosnider announced the availability of a new 3D-printer filament that replicates the iconic "Platinum" color scheme used in classic Macintosh computers from the late 1980s through the 1990s. The PLA filament (PLA is short for polylactic acid) allows hobbyists to 3D-print nostalgic novelties, replacement parts, and accessories that match the original color of vintage Apple computers.
    Hobbyists commonly feed this type of filament into commercial desktop 3D printers, which heat the plastic and extrude it in a computer-controlled way to fabricate new plastic parts.
    The Platinum color, which Apple used in its desktop and portable computer lines starting with the Apple IIgs in 1986, has become synonymous with a distinctive era of the classic Macintosh aesthetic. Over time, original Macintosh plastics have become brittle and discolored with age, so matching the "original" color can be a somewhat challenging and subjective experience.
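    The article doesn't say how Polar Filament scored the match, but color matching is commonly quantified with a color-difference metric such as Delta E in CIE Lab space. The sketch below uses made-up Lab values purely for illustration; they are not Apple's actual Platinum specification.

```python
# A naive way to quantify "how close is this plastic to the original":
# Euclidean distance in CIE Lab space (the Delta E 1976 formula).
def delta_e_76(lab1, lab2):
    """Euclidean distance between two CIE Lab colors."""
    return sum((a - b) ** 2 for a, b in zip(lab1, lab2)) ** 0.5

original_speaker_box = (78.0, 0.5, 6.0)  # hypothetical L*, a*, b* reference
filament_sample = (77.2, 0.8, 6.9)       # hypothetical measured sample

print(f"Delta E = {delta_e_76(original_speaker_box, filament_sample):.2f}")
# Differences under ~2 are often treated as a visually acceptable match.
```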

    A close-up of "Retro Platinum" PLA filament by Polar Filament.
    Credit: Polar Filament

    Strosnider, who runs a website about his extensive vintage computer collection in Ohio, worked for years to color-match the distinctive beige-gray hue of the Macintosh Platinum scheme, resulting in a spool of hobby-ready plastic by Polar Filament and priced at $21.99 per kilogram.
    According to a forum post, Strosnider paid approximately $900 to develop the color and purchase an initial 25-kilogram supply of the filament. Rather than keeping the formulation proprietary, he arranged for Polar Filament to make the color publicly available.
    "I paid them a fee to color match the speaker box from inside my Mac Color Classic," Strosnider wrote in a Tinkerdifferent forum post on Tuesday. "In exchange, I asked them to release the color to the public so anyone can use it."

    A spool of "Retro Platinum" PLA filament by Polar Filament.
    Credit: Polar Filament

    The development addresses a gap in the vintage computing community, where enthusiasts sometimes struggle to find appropriately colored materials for restoration projects and new accessories. The new filament is an attempt to replace previous options that were expensive, required international shipping, or had consistency issues that Strosnider described as "chalky."
    The 1.75 mm filament works with standard 3D printers and is compatible with automated material systems used in some newer printer models. On Bluesky, Strosnider encouraged buyers to "order plenty, and let them know you want them to print it forever" to ensure continued production of the specialty color.
    Extruded nostalgia
    The timing of the filament's release coincides with growing interest in 3D-printed cases and accessories for vintage computer hardware. One example is the SE Mini desktop case, a project by "GutBomb" that transforms Macintosh SE and SE/30 logic boards into compact desktop computers that can connect to modern displays. The case, designed to be 3D-printed in multiple pieces and assembled, represents the type of project that benefits from color-accurate filament.

    A 3D-printed "SE Mini" desktop case that allows using a vintage compact Mac board in a new enclosure.
    Credit: Joe Strosnider

    The SE Mini case requires approximately half a spool of filament and takes a couple of days to print on consumer 3D printers. Users can outfit the case with modern components, such as Pico PSUs and BlueSCSI storage devices, while maintaining the classic Macintosh appearance.
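    For a sense of what "half a spool" means in practice, here's a rough mass-to-length conversion. The 1 kg spool size and typical PLA density are generic assumptions on my part, not figures from the article.

```python
import math

# Rough length of PLA in "half a spool," assuming a 1 kg spool of
# 1.75 mm filament and a typical PLA density of ~1.24 g/cm^3.
mass_g = 500.0              # half of a 1 kg spool (assumed spool size)
density_g_per_cm3 = 1.24    # typical PLA density
diameter_cm = 0.175         # 1.75 mm filament

# length = mass / (density * cross-sectional area)
area_cm2 = math.pi * (diameter_cm / 2) ** 2
length_cm = mass_g / (density_g_per_cm3 * area_cm2)
print(f"~{length_cm / 100:.0f} meters of filament")  # ~168 m
```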
    Why create new "retro" devices? Because it's fun, and it's a great way to merge technology's past with the benefits of recent tech developments. Projects like the Platinum PLA filament, the SE Mini case, and the dedication of hobbyists like Strosnider ensure that appreciation for Apple's computers of yore will continue for decades.

    Benj Edwards
    Senior AI Reporter

    Benj Edwards is Ars Technica's Senior AI Reporter and founder of the site's dedicated AI beat in 2022. He's also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.

  • How to Convince Management Colleagues That AI Isn't a Passing Fad

    John Edwards, Technology Journalist & Author | June 4, 2025 | 4 Min Read
    Image credit: Rancz Andrei via Alamy Stock Photo

    It may be hard to believe, but some senior executives remain convinced that AI's arrival isn't a ground-shaking event. These individuals tend to think that while AI may be a useful tool in certain situations, it's not going to change business in any truly meaningful way. Call them skeptics or call them realists, but such individuals really do exist, and it's the enterprise's CIOs and other IT leaders who need to gently guide them into reality.

    AI adoption tends to fall into three mindsets: early adopters who recognize its benefits, skeptics who fear its risks, and a large middle group -- those who are curious, but uncertain, observes Dave McQuarrie, HP's chief commercial officer, in an online interview. "The key to closing the AI adoption gap lies in engaging this middle group, equipping them with knowledge, and guiding them through practical implementation."

    Effective Approaches

    The most important move is simply getting started. Establish a group of advocates in your company to serve as your early AI adopters, McQuarrie says. "Pick two or three processes to completely automate rather than casting a wide net, and use these as case studies to learn from," he advises. "By beginning with a subset of users, leaders can develop a solid foundation as they roll out the tool more widely across their business."

    Start small, gather data, and present your use case, demonstrating how AI can support you and your colleagues to do your jobs better and faster, recommends Nicola Cain, CEO and principal consultant at Handley Gill Limited, a UK-based legal, regulatory and compliance consultancy. "This could be by analyzing customer interactions to demonstrate how the introduction of a chatbot to give customers prompt answers to easily addressed questions ... or showing how vast volumes of network log data could be analyzed by AI to identify potentially malign incidents that warrant further investigation," she says in an email interview.

    Changing Mindsets

    Question the skeptical leader about their biggest business bottleneck, suggests Jeff Mains, CEO of business consulting firm Champion Leadership Group. "Whether it’s slow decision-making, inconsistent customer experiences, or operational inefficiencies, there's a strategic AI-driven solution for nearly every major business challenge," he explains in an online interview. "The key is showing leaders how AI directly solves their most pressing problems today."

    When dealing with a reluctant executive, start by identifying an AI use case, Cain says. "AI functionality already performs strongly in areas like forecasting, recognition, event detection, personalization, interaction support, recommendations, and goal-driven optimization," she states. "Good business areas to identify a potential use case could therefore be in finance, customer service, marketing, cyber security, or stock control."

    Strengthening Your Case

    Executives respond to proof, not promises, Mains says. "Instead of leading with research reports, I’ve found that real, industry-specific case studies are far more impactful," he observes. "If a direct competitor has successfully integrated AI into sales, marketing, or operations, use that example, because it creates urgency." Instead of just citing AI-driven efficiency gains, Mains recommends framing AI as a way to free up leadership to focus on high-level strategy rather than day-to-day operations.

    Instead of trying to pitch AI in broad terms, Mains advises aligning the technology to the company's stated goals. "If the company is struggling with customer retention, talk about how AI can improve personalization," he suggests. "If operational inefficiencies are a problem, highlight AI-driven automation." The moment AI is framed as a business enabler rather than a technology trend, the conversation shifts from resistance to curiosity.

    When All Else Fails

    If leadership refuses to embrace AI, it’s important to document the cost of inaction, Mains says. "Keep track of inefficiencies, missed opportunities, and competitor advancements," he recommends. Sometimes, leadership only shifts when management’s view of the risks of staying stagnant outweighs the risks of change. "If a company refuses to innovate despite clear benefits, that’s a red flag for long-term growth."

    Final Thoughts

    For enterprises that have so far done little or nothing in the way of AI deployment, the technology may appear optional, McQuarrie observes. Yet soon, operating without AI will become as unthinkable as running a business without the internet. Enterprise leaders who delay AI adoption risk falling behind the competition. "The best approach is to embrace a mindset of humility and curiosity -- actively seek out knowledge, ask questions, and learn from peers who are already seeing AI’s impact," he says. "To stay competitive in this rapidly evolving landscape, leaders should start now."

    The best companies aren't just using AI to improve; they're using the technology to redefine how they do business, Mains says. Leaders who recognize AI as a business accelerator will be the ones leading their industries in the next decade. "Those who hesitate? They’ll be playing catch-up," he concludes.
    #how #convince #management #colleagues #that
    How to Convince Management Colleagues That AI Isn't a Passing Fad
    John Edwards, Technology Journalist & AuthorJune 4, 20254 Min ReadRancz Andrei via Alamy Stock PhotoIt may be hard to believe, but some senior executives actually believe that AI's arrival isn't a ground-shaking event. These individuals tend to be convinced that while AI may be a useful tool in certain situations, it's not going to change business in any truly meaningful way. Call them skeptics or call them realists, but such individuals really do exist, and it's the enterprise's CIOs and other IT leaders who need to gently guide them into reality. AI adoption tends to fall into three mindsets: early adopters who recognize its benefits, skeptics who fear its risks, and a large middle group -- those who are curious, but uncertain, observes Dave McQuarrie, HP's chief commercial officer in an online interview. "The key to closing the AI adoption gap lies in engaging this middle group, equipping them with knowledge, and guiding them through practical implementation." Effective Approaches The most important move is simply getting started. Establish a group of advocates in your company to serve as your early AI adopters, McQuarrie says. "Pick two or three processes to completely automate rather than casting a wide net, and use these as case studies to learn from," he advises. "By beginning with a subset of users, leaders can develop a solid foundation as they roll out the tool more widely across their business." Related:Start small, gather data, and present your use case, demonstrating how AI can support you and your colleagues to do your jobs better and faster, recommends Nicola Cain, CEO and principal consultant at Handley Gill Limited, a UK-based legal, regulatory and compliance consultancy. "This could be by analyzing customer interactions to demonstrate how the introduction of a chatbot to give customers prompt answers to easily addressed questions ... or showing how vast volumes of network log data could be analyzed by AI to identify potentially malign incidents that warrant further investigation," she says in an email interview. Changing Mindsets Question the skeptical leader about their biggest business bottleneck, suggests Jeff Mains, CEO of business consulting firm Champion Leadership Group. "Whether it’s slow decision-making, inconsistent customer experiences, or operational inefficiencies, there's a strategic AI-driven solution for nearly every major business challenge," he explains in an online interview. "The key is showing leaders how AI directly solves their most pressing problems today." When dealing with a reluctant executive, start by identifying an AI use case, Cain says. "AI functionality already performs strongly in areas like forecasting, recognition, event detection, personalization, interaction support, recommendations, and goal-driven optimization," she states. "Good business areas to identify a potential use case could therefore be in finance, customer service, marketing, cyber security, or stock control." Related:Strengthening Your Case Executives respond to proof, not promises, Mains says. "Instead of leading with research reports, I’ve found that real, industry-specific case studies are far more impactful," he observes. "If a direct competitor has successfully integrated AI into sales, marketing, or operations, use that example, because it creates urgency." Instead of just citing AI-driven efficiency gains, Mains recommends framing AI as a way to free-up leadership to focus on high-level strategy rather than day-to-day operations. 
Instead of trying to pitch AI in broad terms, Mains advises aligning the technology to the company's stated goals. "If the company is struggling with customer retention, talk about how AI can improve personalization," he suggests. "If operational inefficiencies are a problem, highlight AI-driven automation." The moment AI is framed as a business enabler rather than a technology trend, the conversation shifts from resistance to curiosity. Related:When All Else Fails If leadership refuses to embrace AI, it’s important to document the cost of inaction, Mains says. "Keep track of inefficiencies, missed opportunities, and competitor advancements," he recommends. Sometimes, leadership only shifts when management’s view of the risks of staying stagnant outweigh the risks of change. "If a company refuses to innovate despite clear benefits, that’s a red flag for long-term growth." Final Thoughts For enterprises that have so far done little or nothing in the way of AI deployment, the technology may appear optional, McQuarrie observes. Yet soon, operating without AI will become as unthinkable as running a business without the internet. Enterprise leaders who delay AI adoption risk falling behind the competition. "The best approach is to embrace a mindset of humility and curiosity -- actively seek out knowledge, ask questions, and learn from peers who are already seeing AI’s impact," he says. "To stay competitive in this rapidly evolving landscape, leaders should start now." The best companies aren't just using AI to improve; they're using the technology to redefine how they do business, Mains says. Leaders who recognize AI as a business accelerator will be the ones leading their industries in the next decade. "Those who hesitate? They’ll be playing catch-up." he concludes. About the AuthorJohn EdwardsTechnology Journalist & AuthorJohn Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.See more from John EdwardsWebinarsMore WebinarsReportsMore ReportsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also Like #how #convince #management #colleagues #that
    WWW.INFORMATIONWEEK.COM
    How to Convince Management Colleagues That AI Isn't a Passing Fad
    John Edwards, Technology Journalist & AuthorJune 4, 20254 Min ReadRancz Andrei via Alamy Stock PhotoIt may be hard to believe, but some senior executives actually believe that AI's arrival isn't a ground-shaking event. These individuals tend to be convinced that while AI may be a useful tool in certain situations, it's not going to change business in any truly meaningful way. Call them skeptics or call them realists, but such individuals really do exist, and it's the enterprise's CIOs and other IT leaders who need to gently guide them into reality. AI adoption tends to fall into three mindsets: early adopters who recognize its benefits, skeptics who fear its risks, and a large middle group -- those who are curious, but uncertain, observes Dave McQuarrie, HP's chief commercial officer in an online interview. "The key to closing the AI adoption gap lies in engaging this middle group, equipping them with knowledge, and guiding them through practical implementation." Effective Approaches The most important move is simply getting started. Establish a group of advocates in your company to serve as your early AI adopters, McQuarrie says. "Pick two or three processes to completely automate rather than casting a wide net, and use these as case studies to learn from," he advises. "By beginning with a subset of users, leaders can develop a solid foundation as they roll out the tool more widely across their business." Related:Start small, gather data, and present your use case, demonstrating how AI can support you and your colleagues to do your jobs better and faster, recommends Nicola Cain, CEO and principal consultant at Handley Gill Limited, a UK-based legal, regulatory and compliance consultancy. "This could be by analyzing customer interactions to demonstrate how the introduction of a chatbot to give customers prompt answers to easily addressed questions ... or showing how vast volumes of network log data could be analyzed by AI to identify potentially malign incidents that warrant further investigation," she says in an email interview. Changing Mindsets Question the skeptical leader about their biggest business bottleneck, suggests Jeff Mains, CEO of business consulting firm Champion Leadership Group. "Whether it’s slow decision-making, inconsistent customer experiences, or operational inefficiencies, there's a strategic AI-driven solution for nearly every major business challenge," he explains in an online interview. "The key is showing leaders how AI directly solves their most pressing problems today." When dealing with a reluctant executive, start by identifying an AI use case, Cain says. "AI functionality already performs strongly in areas like forecasting, recognition, event detection, personalization, interaction support, recommendations, and goal-driven optimization," she states. "Good business areas to identify a potential use case could therefore be in finance, customer service, marketing, cyber security, or stock control." Related:Strengthening Your Case Executives respond to proof, not promises, Mains says. "Instead of leading with research reports, I’ve found that real, industry-specific case studies are far more impactful," he observes. "If a direct competitor has successfully integrated AI into sales, marketing, or operations, use that example, because it creates urgency." Instead of just citing AI-driven efficiency gains, Mains recommends framing AI as a way to free-up leadership to focus on high-level strategy rather than day-to-day operations. 
Instead of trying to pitch AI in broad terms, Mains advises aligning the technology to the company's stated goals. "If the company is struggling with customer retention, talk about how AI can improve personalization," he suggests. "If operational inefficiencies are a problem, highlight AI-driven automation." The moment AI is framed as a business enabler rather than a technology trend, the conversation shifts from resistance to curiosity. Related:When All Else Fails If leadership refuses to embrace AI, it’s important to document the cost of inaction, Mains says. "Keep track of inefficiencies, missed opportunities, and competitor advancements," he recommends. Sometimes, leadership only shifts when management’s view of the risks of staying stagnant outweigh the risks of change. "If a company refuses to innovate despite clear benefits, that’s a red flag for long-term growth." Final Thoughts For enterprises that have so far done little or nothing in the way of AI deployment, the technology may appear optional, McQuarrie observes. Yet soon, operating without AI will become as unthinkable as running a business without the internet. Enterprise leaders who delay AI adoption risk falling behind the competition. "The best approach is to embrace a mindset of humility and curiosity -- actively seek out knowledge, ask questions, and learn from peers who are already seeing AI’s impact," he says. "To stay competitive in this rapidly evolving landscape, leaders should start now." The best companies aren't just using AI to improve; they're using the technology to redefine how they do business, Mains says. Leaders who recognize AI as a business accelerator will be the ones leading their industries in the next decade. "Those who hesitate? They’ll be playing catch-up." he concludes. About the AuthorJohn EdwardsTechnology Journalist & AuthorJohn Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.See more from John EdwardsWebinarsMore WebinarsReportsMore ReportsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also Like
  • Texas is headed for a drought—but lawmakers won’t do the one thing necessary to save its water supply

    LUBBOCK — Every winter, after the sea of cotton has been harvested in the South Plains and the ground looks barren, technicians with the High Plains Underground Water Conservation District check the water levels in nearly 75,000 wells across 16 counties.

    For years, their measurements have shown what farmers and water conservationists fear most—the Ogallala Aquifer, an underground water source that’s the lifeblood of the South Plains agriculture industry, is running dry.

    That’s because of a century-old law called the rule of capture.

    The rule is simple: If you own the land above an aquifer in Texas, the water underneath is yours. You can use as much as you want, as long as it’s not wasted or taken maliciously. The same applies to your neighbor. If they happen to use more water than you, then that’s just bad luck.

    To put it another way, landowners can mostly pump as much water as they choose without facing liability to surrounding landowners whose wells might be depleted as a result.

    Following the Dust Bowl—and to stave off catastrophe—state lawmakers created groundwater conservation districts in 1949 to protect what water is left. But their power to restrict landowners is limited.

    “The mission is to save as much water possible for as long as possible, with as little impact on private property rights as possible,” said Jason Coleman, manager for the High Plains Underground Water Conservation District. “How do you do that? It’s a difficult task.”

    A 1953 map of the wells in Lubbock County hangs in the office of the groundwater district. [Photo: Annie Rice for The Texas Tribune]

    Rapid population growth, climate change, and aging water infrastructure all threaten the state’s water supply. Texas does not have enough water to meet demand if the state is stricken with a historic drought, according to the Texas Water Development Board, the state agency that manages Texas’ water supply.

    Lawmakers want to invest in every corner to save the state’s water. This week, they reached a historic $20 billion deal on water projects.

    High Plains Underground Water District General Manager Jason Coleman stands in the district’s meeting room on May 21 in Lubbock. [Photo: Annie Rice for The Texas Tribune]

    But no one wants to touch the rule of capture. In a state known for rugged individualism, politically speaking, reforming the law is tantamount to stripping away freedoms.

    “There probably are opportunities to vest groundwater districts with additional authority,” said Amy Hardberger, director for the Texas Tech University Center for Water Law and Policy. “I don’t think the political climate is going to do that.”

    State Sen. Charles Perry, a Lubbock Republican, and Rep. Cody Harris, a Palestine Republican, led the effort on water in Austin this year. Neither responded to requests for comment.

    Carlos Rubinstein, a water expert with consulting firm RSAH2O and a former chairman of the water development board, said the rule has been relied upon so long that it would be near impossible to undo the law.

    “I think it’s better to spend time working within the rules,” Rubinstein said. “And respect the rule of capture, yet also recognize that, in and of itself, it causes problems.”

    Even though groundwater districts were created to regulate groundwater, the law effectively stops them from doing so, or they risk major lawsuits. The state water plan, which spells out how the state’s water is to be used, acknowledges the shortfall. Groundwater availability is expected to decline by 25% by 2070, mostly due to reduced supply in the Ogallala and Edwards-Trinity aquifers. Together, the aquifers stretch across West Texas and up through the Panhandle.

    By itself, the Ogallala has an estimated three trillion gallons of water, though the overwhelming majority in Texas is used by farmers. It’s expected to face a 50% decline by 2070.

    Groundwater is 54% of the state’s total water supply and is the state’s most vulnerable natural resource. It’s created by rainfall and other precipitation, and seeps into the ground. Like surface water, groundwater is heavily affected by ongoing droughts and prolonged heat waves. However, the state has more say in regulating surface water than it does groundwater. Surface water laws have provisions that cut supply to newer users in a drought and prohibit transferring surface water outside of basins.

    Historically, groundwater has been used by agriculture in the High Plains. However, as surface water evaporates at a quicker clip, cities and businesses are increasingly interested in tapping the underground resource. As Texas’ population continues to grow and surface water declines, groundwater will be the prize in future fights for water.

    In many ways, the damage is done in the High Plains, a region that spans from the top of the Panhandle down past Lubbock. The Ogallala Aquifer runs beneath the region, and it’s faced depletion to the point of no return, according to experts. Simply put: The Ogallala is not refilling to keep up with demand.

    “It’s a creeping disaster,” said Robert Mace, executive director of the Meadows Center for Water and the Environment. “It isn’t like you wake up tomorrow and nobody can pump anymore. It’s just happening slowly, every year.”

    [Image: Yuriko Schumacher/The Texas Tribune]

    Groundwater districts and the law

    The High Plains Water District was the first groundwater district created in Texas.

    Over a protracted multi-year fight, the Legislature created these new local government bodies in 1949, with voter approval, enshrining the new stewards of groundwater into the state Constitution.

    If the lawmakers hoped to embolden local officials to manage the troves of water under the soil, they failed. There are areas with groundwater that don’t have conservation districts. Each groundwater district has different powers. In practice, most water districts permit wells and make decisions on spacing and location to meet the needs of the property owner.

    The one thing all groundwater districts have in common: They stop short of telling landowners they can’t pump water.

    In the seven decades since groundwater districts were created, a series of lawsuits has effectively strangled their authority. Even as water levels decline from use and drought, districts still get regular requests for new wells. They won’t say no out of fear of litigation.

    The field technician coverage area is seen in Nathaniel Bibbs’ office at the High Plains Underground Water District. Bibbs is a permit assistant for the district. [Photo: Annie Rice for The Texas Tribune]

    “You have a host of different decisions to make as it pertains to management of groundwater,” Coleman said. “That list has grown over the years.”

    The possibility of lawsuits makes groundwater districts hesitant to regulate usage or put limitations on new well permits. Groundwater districts have to defend themselves in lawsuits, and most lack the resources to do so.

    A well spacing guide is seen in Nathaniel Bibbs’ office. [Photo: Annie Rice for The Texas Tribune]

    “The law works against us in that way,” Hardberger, with Texas Tech University, said. “It means one large tool in our toolbox, regulation, is limited.”

    The most recent example is a lawsuit between the Braggs Farm and the Edwards Aquifer Authority. The farm requested permits for two pecan orchards in Medina County, outside San Antonio. The authority granted only one and limited how much water could be used based on state law.

    It wasn’t an arbitrary decision. The authority said it followed the statute set by the Legislature to determine the permit.

    “That’s all they were guaranteed,” said Gregory Ellis, the first general manager of the authority, referring to the water available to the farm.

    The Braggs family filed a takings lawsuit against the authority. This kind of claim can be filed when any level of government—including groundwater districts—takes private property for public use without paying for the owner’s losses.

    Braggs won. It is the only successful water-related takings claim in Texas, and it made groundwater laws murkier. It cost the authority $4.5 million.

    “I think it should have been paid by the state Legislature,” Ellis said. “They’re the ones who designed that permitting system. But that didn’t happen.”

    An appeals court upheld the ruling in 2013, and the Texas Supreme Court denied petitions to consider appeals. However, the state’s supreme court has previously suggested the Legislature could enhance the powers of the groundwater districts and regulate groundwater like surface water, just as many other states have done.

    While the laws are complicated, Ellis said the fundamental rule of capture has benefits. It has saved Texas’ legal system from a flurry of lawsuits between well owners.

    “If they had said ‘Yes, you can sue your neighbor for damaging your well,’ where does it stop?” Ellis asked. “Everybody sues everybody.”

    Coleman, the High Plains district’s manager, said some people want groundwater districts to have more power, while others think they have too much. Well owners want restrictions for others, but not on them, he said.

    “You’re charged as a district with trying to apply things uniformly and fairly,” Coleman said.

    Can’t reverse the past

    Two tractors were dropping seeds around Walt Hagood’s farm as he turned on his irrigation system for the first time this year. He didn’t plan on using much water. It’s too precious.

    The cotton farm stretches across 2,350 acres on the outskirts of Wolfforth, a town 12 miles southwest of Lubbock. Hagood irrigates about 80 acres of land, and prays that rain takes care of the rest.

    Walt Hagood drives across his farm on May 12, in Wolfforth. Hagood utilizes “dry farming,” a technique that relies on natural rainfall. [Photo: Annie Rice for The Texas Tribune]

    “We used to have a lot of irrigated land with adequate water to make a crop,” Hagood said. “We don’t have that anymore.”

    The High Plains is home to cotton and cattle, multi-billion-dollar agricultural industries. The success is in large part due to the Ogallala. Since its discovery, the aquifer has helped farms around the region spring up through irrigation, a way for farmers to water their crops instead of waiting for rain that may not come. But as water in the aquifer declines, there are growing concerns that there won’t be enough water to support agriculture in the future.

    At the peak of irrigation development, more than 8.5 million acres were irrigated in Texas. About 65% of that was in the High Plains. In the decades since the irrigation boom, High Plains farmers have resorted to methods that might save water and keep their livelihoods afloat. They’ve changed their irrigation systems so water is used more efficiently. They grow cover crops so their soil is more likely to soak up rainwater. Some use apps to see where water is needed so it’s not wasted.

    A furrow irrigation is seen at Walt Hagood’s cotton farm. [Photo: Annie Rice for The Texas Tribune]

    Farmers who have not changed their irrigation systems might not have a choice in the near future. It can take a week to pump an inch of water in some areas from the aquifer because of how little water is left. As conditions change underground, they are forced to drill deeper for water. That causes additional problems. Calcium can build up, and the water is of poorer quality. And when the water is used to spray crops through a pivot irrigation system, it’s more of a humidifier as water quickly evaporates in the heat.

    According to the groundwater district’s most recent management plan, 2 million acres in the district use groundwater for irrigation. About 95% of water from the Ogallala is used for irrigated agriculture. The plan states that the irrigated farms “afford economic stability to the area and support a number of other industries.”

    The state water plan shows groundwater supply is expected to decline, and drought won’t be the only factor causing a shortage. Demand for municipal use outweighs irrigation use, reflecting the state’s future growth. In Region O, which is the South Plains, water for irrigation declines by 2070 while demand for municipal use rises because of population growth in the region.

    Coleman, with the High Plains groundwater district, often thinks about how the aquifer will hold up with future growth. There are some factors at play with water planning that are nearly impossible to predict and account for, Coleman said. Declining surface water could make groundwater a source for municipalities that didn’t depend on it before. Regions known for having big, open patches of land, like the High Plains, could be attractive to incoming businesses. People could move to the country and want to drill a well, with no understanding of water availability.

    The state will continue to grow, Coleman said, and all the incoming businesses and industries will undoubtedly need water.

    “We could say ‘Well, it’s no one’s fault. We didn’t know that factory would need 20,000 acre-feet of water a year,’” Coleman said. “It’s not happening right now, but what’s around the corner?”

    Coleman said this puts agriculture in a tenuous position. The region is full of small towns that depend on agriculture and have supporting businesses, like cotton gins, equipment and feed stores, and pesticide and fertilizer sprayers. This puts pressure on the High Plains water district, along with the two regional water planning groups in the region, to keep agriculture alive.

    “Districts are not trying to reduce pumping down to a sustainable level,” said Mace with the Meadows Center. “And I don’t fault them for that, because doing that is economic devastation in a region with farmers.”

    Hagood, the cotton farmer, doesn’t think reforming groundwater rights is the way to solve it. What’s done is done, he said.

    “Our U.S. Constitution protects our private property rights, and that’s what this is all about,” Hagood said. “Any time we have a regulation and people are given more authority, it doesn’t work out right for everybody.”

    Rapid population growth, climate change, and aging water infrastructure all threaten the state’s water supply. [Photo: Annie Rice for The Texas Tribune]

    What can be done

    The state water plan recommends irrigation conservation as a strategy. It’s also the least costly water management method.

    But that strategy is fraught. Farmers need to irrigate in times of drought, and telling them to stop can draw criticism.

    In Eastern New Mexico, the Ogallala Land and Water Conservancy, a nonprofit organization, has been retiring irrigation wells. Landowners keep their water rights, and the organization pays them to stop irrigating their farms. Landowners get paid every year as part of the voluntary agreement, and they can end it at any point.

    Ladona Clayton, executive director of the organization, said they have been criticized, with their efforts being called a “war” and “land grab.” They also get pushback on why the responsibility falls on farmers. She said it’s because of how much water is used for irrigation. They have to be aggressive in their approach, she said. The aquifer supplies water to the Cannon Air Force Base.

    “We don’t want them to stop agricultural production,” Clayton said. “But for me to say it will be the same level that irrigation can support would be untrue.”

    There is another possible lifeline that people in the High Plains are eyeing as a solution: the Dockum Aquifer. It’s a minor aquifer that underlies part of the Ogallala, so it would be accessible to farmers and ranchers in the region. The High Plains Water District also oversees this aquifer.

    If it seems too good to be true—that the most irrigated part of Texas would just so happen to have another abundant supply of water flowing underneath—it’s because there’s a catch. The Dockum is full of extremely salty brackish water. Some counties can use the water for irrigation and drinking water without treatment, but it’s unusable in others. According to the groundwater district, a test well in Lubbock County pulled up water that was as salty as seawater.

    Rubinstein, the former water development board chairman, said there are pockets of brackish groundwater in Texas that haven’t been tapped yet. It would be enough to meet the needs on the horizon, but it would also be very expensive to obtain and use. A landowner would have to go deeper to get it, then pump the water over a longer distance.

    “That costs money, and then you have to treat it on top of that,” Rubinstein said. “But, it is water.”

    Landowners have expressed interest in using desalination, a treatment method to lower dissolved salt levels. Desalination of produced and brackish water is one of the ideas that was being floated around at the Legislature this year, along with building a pipeline to move water across the state. Hagood, the farmer, is skeptical. He thinks whatever water they move could get used up before it makes it all the way to West Texas.

    There is always brackish groundwater. But another aquifer brings the chance of history repeating—if the Dockum Aquifer is treated so its water is usable, will people drain it, too?

    Hagood said there would have to be limits.

    Disclosure: Edwards Aquifer Authority and Texas Tech University have been financial supporters of The Texas Tribune. Financial supporters play no role in the Tribune’s journalism. Find a complete list of them here.

    This article originally appeared in The Texas Tribune, a member-supported, nonpartisan newsroom informing and engaging Texans on state politics and policy. Learn more at texastribune.org.
  • How To Measure AI Efficiency and Productivity Gains

    John Edwards, Technology Journalist & Author | May 30, 2025 | 4 Min Read | Photo: Tanapong Sungkaew via Alamy Stock Photo

    AI adoption can help enterprises function more efficiently and productively in many internal and external areas. Yet to get the most value out of AI, CIOs and IT leaders need to find a way to measure their current and future gains.

    Measuring AI efficiency and productivity gains isn't always a straightforward process, however, observes Matt Sanchez, vice president of product for IBM's watsonx Orchestrate, a tool designed to automate tasks by orchestrating AI assistants and AI agents.

    "There are many factors to consider in order to gain an accurate picture of AI’s impact on your organization," Sanchez says in an email interview. He believes the key to measuring AI effectiveness starts with setting clear, data-driven goals. "What outcomes are you trying to achieve?" he asks. "Identifying the right key performance indicators -- KPIs -- that align with your overall strategy is a great place to start."

    Measuring AI efficiency is a little like a "chicken or the egg" discussion, says Tim Gaus, smart manufacturing business leader at Deloitte Consulting. "A prerequisite for AI adoption is access to quality data, but data is also needed to show the adoption’s success," he advises in an online interview.

    Still, with the number of organizations adopting AI rapidly increasing, C-suites and boards are now prioritizing measurable ROI. "We're seeing this firsthand while working with clients in the manufacturing space specifically who are aiming to make manufacturing processes smarter and increasingly software-defined," Gaus says.

    Measuring AI Efficiency: The Challenge

    The challenge in measuring AI efficiency depends on the type of AI and how it's ultimately used, Gaus says. Manufacturers, for example, have long used AI for predictive maintenance and quality control. "This can be easier to measure, since you can simply look at changes in breakdown or product defect frequencies," he notes. "However, for more complex AI use cases -- including using GenAI to train workers or serve as a form of knowledge retention -- it can be harder to nail down impact metrics and how they can be obtained."

    AI Project Measurement Methods

    Once AI projects are underway, Gaus says measuring real-world results is key. "This includes studying factors such as actual cost reductions, revenue boosts tied directly to AI, and progress in KPIs such as customer satisfaction or operational output," he says. "This method allows organizations to track both the anticipated and actual benefits of their AI investments over time."

    To effectively assess AI's impact on efficiency and productivity, it's important to connect AI initiatives with broader business goals and evaluate their progress at different stages, Gaus says. "In the early stages, companies should focus on estimating the potential benefits, such as enhanced efficiency, revenue growth, or strategic advantages like stronger customer loyalty or reduced operational downtime." These projections can provide a clear understanding of how AI aligns with long-term objectives, Gaus adds.

    Measuring any emerging technology's impact on efficiency and productivity often takes time, but impacts are always among the top priorities for business leaders when evaluating any new technology, says Dan Spurling, senior vice president of product management at multi-cloud data platform provider Teradata.
"Businesses should continue to use proven frameworks for measurement rather than create net-new frameworks," he advises in an online interview. "Metrics should be set prior to any investment to maximize benefits and mitigate biases, such as sunk cost fallacies, confirmation bias, anchoring bias, and the like."Key AI Value MetricsMetrics can vary depending on the industry and technology being used, Gaus says. "In sectors like manufacturing, AI value metrics include improvements in efficiency, productivity, and cost reduction." Yet specific metrics depend on the type of AI technology implemented, such as machine learning.Related:Beyond tracking metrics, it's important to ensure high-quality data is used to minimize biases in AI decision-making, Sanchez says. The end goal is for AI to support the human workforce, freeing users to focus on strategic and creative work and removing potential bottlenecks. "It's also important to remember that AI isn't a one-and-done deal. It's an ongoing process that needs regular evaluation and process adjustment as the organization transforms.”Spurling recommends beginning by studying three key metrics:Worker productivity: Understanding the value of increased task completion or reduced effort by measuring the effect on day-to-day activities like faster issue resolution, more efficient collaboration, reduced process waste, or increased output quality.Ability to scale: Operationalizing AI-based self-service tools, typically with natural language capabilities, across the entire organization beyond IT to enable task or job completion in real-time, with no need for external support or augmentation.User friendliness: Expanding organization effectiveness with data-driven insights as measured by the ability of non-technical business users to leverage AI via no-code, low-code platforms.Final Note: Aligning Business and TechnologyDeloitte's digital transformation research reveals that misalignment between business and technology leaders often leads to inaccurate ROI assessments, Gaus says. "To address this, it's crucial for both sides to agree on key value priorities and success metrics."He adds it's also important to look beyond immediate financial returns and to incorporate innovation-driven KPIs, such as experimentation toleration and agile team adoption. "Without this broader perspective, up to 20% of digital investment returns may not yield their full potential," Gaus warns. "By addressing these alignment issues and tracking a comprehensive set of metrics, organizations can maximize the value from AI initiatives while fostering long-term innovation."About the AuthorJohn EdwardsTechnology Journalist & AuthorJohn Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. 
His "Behind the Screens" commentaries made him the world's first known professional blogger.See more from John EdwardsWebinarsMore WebinarsReportsMore ReportsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also Like
  • Pollard Thomas Edwards gets green light for next phase of £200m revamp around Hertfordshire station

300 homes have already been built at the Bishop’s Stortford site

A £200m plan by Network Rail to turn an area around Bishop’s Stortford station in Hertfordshire into new homes and commercial space has been approved by the local council.

    Under the plans, more than 700 homes will be built at the former Goods Yard 
    The network operator has teamed up with Kier’s property arm to turn the area around the Goods Yard into more than 400 homes.
Operating as Solum Regeneration, the pair have already built more than 300 homes and a multi-storey car park since East Herts district council gave the original scheme the green light in 2018.
The latest proposals, designed by Pollard Thomas Edwards, follow a masterplan that was endorsed by the council three years ago. The scheme includes 423 homes, improved pedestrian links from the station to the town centre, and upgrades to the station forecourt.
Solum Regeneration is a joint venture between Network Rail and Kier Property, set up to bring private investment into the rail network by generating funds from the development of under-used railway land.
Source: WWW.BDONLINE.CO.UK