Looking Back at Two Classics: ILM Deploys the Fleet in ‘Star Trek: First Contact’ and ‘Rogue One: A Star Wars Story’

    Guided by visual effects supervisor John Knoll, ILM embraced continually evolving methodologies to craft breathtaking visual effects for the iconic space battles in First Contact and Rogue One.
    By Jay Stobie
    Visual effects supervisor John Knoll (right) confers with modelmakers Kim Smith and John Goodson over the miniature of the U.S.S. Enterprise-E during production of Star Trek: First Contact (Credit: ILM).
    Bolstered by visual effects from Industrial Light & Magic, Star Trek: First Contact (1996) and Rogue One: A Star Wars Story (2016) propelled their respective franchises to new heights. While Star Trek Generations (1994) welcomed Captain Jean-Luc Picard’s (Patrick Stewart) crew to the big screen, First Contact stood as the first Star Trek feature that did not focus on its original captain, the legendary James T. Kirk (William Shatner). Similarly, though Rogue One immediately preceded the events of Star Wars: A New Hope (1977), it was set apart from the episodic Star Wars films and launched an era of storytelling outside of the main Skywalker saga that has gone on to include Solo: A Star Wars Story (2018), The Mandalorian (2019-23), Andor (2022-25), Ahsoka (2023), The Acolyte (2024), and more.
    The two films also shared a key ILM contributor, John Knoll, who served as visual effects supervisor on both projects as well as an executive producer on Rogue One. Now ILM’s executive creative director and senior visual effects supervisor, Knoll – who also conceived the initial framework for Rogue One’s story – guided ILM as it brought its talents to bear on these sci-fi and fantasy epics. The work involved crafting two spectacular starship-packed space clashes – First Contact’s Battle of Sector 001 and Rogue One’s Battle of Scarif. Although these iconic installments were released roughly two decades apart, they represent a captivating case study of how ILM’s approach to visual effects has evolved over time. With this in mind, let’s examine the films’ unforgettable space battles through the lens of fascinating in-universe parallels and the ILM-produced fleets that face off near Earth and Scarif.
    A final frame from the Battle of Scarif in Rogue One: A Star Wars Story (Credit: ILM & Lucasfilm).
    A Context for Conflict
    In First Contact, the United Federation of Planets – a 200-year-old interstellar government consisting of more than 150 member worlds – braces itself for an invasion by the Borg – an overwhelmingly powerful collective composed of cybernetic beings who devastate entire planets by assimilating their biological populations and technological innovations. The Borg only send a single vessel, a massive cube containing thousands of hive-minded drones and their queen, pushing the Federation’s Starfleet defenders to Earth’s doorstep. Conversely, in Rogue One, the Rebel Alliance – a fledgling coalition of freedom fighters – seeks to undermine and overthrow the stalwart Galactic Empire – a totalitarian regime preparing to tighten its grip on the galaxy by revealing a horrifying superweapon. A rebel team infiltrates a top-secret vault on Scarif in a bid to steal plans to that battle station, the dreaded Death Star, with hopes of exploiting a vulnerability in its design.
    On the surface, the situations could not seem more disparate, particularly in terms of the Federation’s well-established prestige and the Rebel Alliance’s haphazardly organized factions. Yet, upon closer inspection, the spaceborne conflicts at Earth and Scarif are linked by a vital commonality. The threat posed by the Borg is well-known to the Federation, but the sudden intrusion upon its space takes its defenses by surprise. Starfleet assembles any vessel within range – including antiquated Oberth-class science ships – to intercept the Borg cube in the Typhon Sector, only to be forced back to Earth on the edge of defeat. The unsanctioned mission to Scarif by Jyn Erso (Felicity Jones) and Cassian Andor (Diego Luna), and the sudden need to take down the planet’s shield gate, propel the Rebel Alliance fleet into rushing to the rescue with everything from the flagship Profundity to GR-75 medium transports. Whether Federation or Rebel Alliance, these fleets gather in last-ditch efforts to oppose enemies who would embrace their eradication – the Battles of Sector 001 and Scarif are fights for survival.
    From Physical to Digital
    By the time Jonathan Frakes was selected to direct First Contact, Star Trek’s reliance on constructing traditional physical models (many of which were built by ILM) for its features was gradually giving way to innovative computer graphics (CG) models, resulting in the film’s use of both techniques. “If one of the ships was to be seen full-screen and at length,” associate visual effects supervisor George Murphy told Cinefex’s Kevin H. Martin, “we knew it would be done as a stage model. Ships that would be doing a lot of elaborate maneuvers in space battle scenes would be created digitally.” In fact, physical and CG versions of the U.S.S. Enterprise-E appear in the film, with the latter being harnessed in shots involving the vessel’s entry into a temporal vortex at the conclusion of the Battle of Sector 001.
    Despite the technological leaps ILM pioneered in the decades between First Contact and Rogue One, the studio still considered filming physical miniatures for certain ship-related shots in the latter film. The feature’s fleets were ultimately created digitally, however, to allow for changes throughout post-production. “If it’s a photographed miniature element, it’s not possible to go back and make adjustments. So it’s the additional flexibility that comes with the computer graphics models that’s very attractive to many people,” John Knoll relayed to writer Jon Witmer at American Cinematographer’s TheASC.com.
    However, Knoll aimed to develop computer graphics that retained the same high-quality details as their physical counterparts, leading ILM to employ a modern approach to a time-honored modelmaking tactic. “I also wanted to emulate the kit-bashing aesthetic that had been part of Star Wars from the very beginning, where a lot of mechanical detail had been added onto the ships by using little pieces from plastic model kits,” explained Knoll in his chat with TheASC.com. For Rogue One, ILM replicated the process by obtaining such kits, scanning their parts, building a computer graphics library, and applying the CG parts to digitally modeled ships. “I’m very happy to say it was super-successful,” concluded Knoll. “I think a lot of our digital models look like they are motion-control models.”
    John Knoll (second from left) confers with Kim Smith and John Goodson over the miniature of the U.S.S. Enterprise-E during production of Star Trek: First Contact (Credit: ILM).
    Legendary Lineages
    In First Contact, Captain Picard commanded a brand-new vessel, the Sovereign-class U.S.S. Enterprise-E, continuing the celebrated starship’s legacy in terms of its famous name and design aesthetic. Designed by John Eaves and developed into blueprints by Rick Sternbach, the Enterprise-E was built into a 10-foot physical model by ILM model project supervisor John Goodson and his shop’s talented team. ILM infused the ship with extraordinary detail, including viewports equipped with backlit set images from the craft’s predecessor, the U.S.S. Enterprise-D. For the vessel’s larger windows, namely those associated with the observation lounge and arboretum, ILM took a painstakingly practical approach to match the interiors shown with the real-world set pieces. “We filled that area of the model with tiny, micro-scale furniture,” Goodson informed Cinefex, “including tables and chairs.”
    Rogue One’s rebel team initially traversed the galaxy in a U-wing transport/gunship, which, much like the Enterprise-E, was a unique vessel that nonetheless channeled a certain degree of inspiration from a classic design. Lucasfilm’s Doug Chiang, a co-production designer for Rogue One, referred to the U-wing as the film’s “Huey helicopter version of an X-wing” in the Designing Rogue One bonus featurette on Disney+ before revealing that, “Towards the end of the design cycle, we actually decided that maybe we should put in more X-wing features. And so we took the X-wing engines and literally mounted them onto the configuration that we had going.” Modeled by ILM digital artist Colie Wertz, the U-wing’s final computer graphics design subtly incorporated these X-wing influences to give the transport a distinctive feel without making the craft seem out of place within the rebel fleet.
    While ILM’s work on the Enterprise-E’s viewports offered a compelling view toward the ship’s interior, a breakthrough LED setup for Rogue One permitted ILM to obtain realistic lighting on actors as they looked out from their ships and into the space around them. “All of our major spaceship cockpit scenes were done that way, with the gimbal in this giant horseshoe of LED panels we got from [equipment vendor] VER, and we prepared graphics that went on the screens,” John Knoll shared with American Cinematographer’s Benjamin B and Jon D. Witmer. Furthermore, in Disney+’s Rogue One: Digital Storytelling bonus featurette, visual effects producer Janet Lewin noted, “For the actors, I think, in the space battle cockpits, for them to be able to see what was happening in the battle brought a higher level of accuracy to their performance.”
    The U.S.S. Enterprise-E in Star Trek: First Contact (Credit: Paramount).
    Familiar Foes
    To transport First Contact’s Borg invaders, John Goodson’s team at ILM resurrected the Borg cube design previously seen in Star Trek: The Next Generation (1987) and Star Trek: Deep Space Nine (1993), creating a nearly three-foot physical model to replace the one from the series. Art consultant and ILM veteran Bill George proposed that the cube’s seemingly straightforward layout be augmented with a complex network of photo-etched brass, a suggestion which produced a jagged surface and offered a visual that was both intricate and menacing. ILM also developed a two-foot motion-control model for a Borg sphere, a brand-new auxiliary vessel that emerged from the cube. “We vacuformed about 15 different patterns that conformed to this spherical curve and covered those with a lot of molded and cast pieces. Then we added tons of acid-etched brass over it, just like we had on the cube,” Goodson outlined to Cinefex’s Kevin H. Martin.
    As for Rogue One’s villainous fleet, reproducing the original trilogy’s Death Star and Imperial Star Destroyers centered upon translating physical models into digital assets. Although ILM no longer possessed A New Hope’s three-foot Death Star shooting model, John Knoll recreated the station’s surface paneling by gathering archival images, and as he spelled out to writer Joe Fordham in Cinefex, “I pieced all the images together. I unwrapped them into texture space and projected them onto a sphere with a trench. By doing that with enough pictures, I got pretty complete coverage of the original model, and that became a template upon which to redraw very high-resolution texture maps. Every panel, every vertical striped line, I matched from a photograph. It was as accurate as it was possible to be as a reproduction of the original model.”
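    Knoll’s description amounts to building one shared texture parameterization of the Death Star sphere so that many archival photographs can each contribute pixels to the same high-resolution map. Here is a minimal sketch of that idea, assuming a simple equirectangular unwrap – the function names and layout are chosen for illustration and are not ILM’s actual pipeline:

```python
import numpy as np

def spherical_uv(points):
    """Equirectangular unwrap: map 3D points on a unit sphere to (u, v)
    texture coordinates. Projecting every photo through a mapping like this
    puts their pixels into one shared texture space; a production pipeline
    would also handle the trench geometry and blend overlapping photos."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    u = 0.5 + np.arctan2(z, x) / (2.0 * np.pi)          # longitude -> [0, 1]
    v = 0.5 - np.arcsin(np.clip(y, -1.0, 1.0)) / np.pi  # latitude  -> [0, 1]
    return np.column_stack([u, v])

# Three sample surface points land at distinct spots in the texture map.
pts = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
print(spherical_uv(pts))
```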
    Knoll’s investigative eye continued to pay dividends when analyzing the three-foot and eight-foot Star Destroyer motion-control models, which had been built for A New Hope and Star Wars: The Empire Strikes Back (1980), respectively. “Our general mantra was, ‘Match your memory of it more than the reality,’ because sometimes you go look at the actual prop in the archive building or you look back at the actual shot from the movie, and you go, ‘Oh, I remember it being a little better than that,’” Knoll conveyed to TheASC.com. This philosophy motivated ILM to combine elements from those two physical models into a single digital design. “Generally, we copied the three-footer for details like the superstructure on the top of the bridge, but then we copied the internal lighting plan from the eight-footer,” Knoll explained. “And then the upper surface of the three-footer was relatively undetailed because there were no shots that saw it closely, so we took a lot of the high-detail upper surface from the eight-footer. So it’s this amalgam of the two models, but the goal was to try to make it look like you remember it from A New Hope.”
    A final frame from Rogue One: A Star Wars Story (Credit: ILM & Lucasfilm).
    Forming Up the Fleets
    In addition to the U.S.S. Enterprise-E, the Battle of Sector 001 debuted numerous vessels representing four new Starfleet ship classes – the Akira, Steamrunner, Saber, and Norway – all designed by ILM visual effects art director Alex Jaeger. “Since we figured a lot of the background action in the space battle would be done with computer graphics ships that needed to be built from scratch anyway, I realized that there was no reason not to do some new designs,” John Knoll told American Cinematographer writer Ron Magid. Used in previous Star Trek projects, older physical models for the Oberth and Nebula classes were mixed into the fleet for good measure, though the vast majority of the armada originated as computer graphics.
    Over at Scarif, ILM portrayed the Rebel Alliance forces with computer graphics models of fresh designs (the MC75 cruiser Profundity and U-wings), live-action versions of Star Wars Rebels’ VCX-100 light freighter Ghost and Hammerhead corvettes, and Star Wars staples (Nebulon-B frigates, X-wings, Y-wings, and more). These ships face off against two Imperial Star Destroyers and squadrons of TIE fighters, and – upon their late arrival to the battle – Darth Vader’s Star Destroyer and the Death Star. The Tantive IV, a CR90 corvette more popularly referred to as a blockade runner, made its own special cameo at the tail end of the fight. As Princess Leia Organa’s (Carrie Fisher and Ingvild Deila) personal ship, the Tantive IV received the Death Star plans and fled the scene, destined to be captured by Vader’s Star Destroyer at the beginning of A New Hope. And, while we’re on the subject of intricate starship maneuvers and space-based choreography…
    Although the First Contact team could plan visual effects shots with animated storyboards, ILM supplied Gareth Edwards with a next-level virtual viewfinder that allowed the director to select his shots by immersing himself among Rogue One’s ships in real time. “What we wanted to do is give Gareth the opportunity to shoot his space battles and other all-digital scenes the same way he shoots his live-action. Then he could go in with this sort of virtual viewfinder and view the space battle going on, and figure out what the best angle was to shoot those ships from,” senior animation supervisor Hal Hickel described in the Rogue One: Digital Storytelling featurette. Hickel divulged that the sequence involving the dish array docking with the Death Star was an example of the “spontaneous discovery of great angles,” as the scene was never storyboarded or previsualized.
    Visual effects supervisor John Knoll with director Gareth Edwards during production of Rogue One: A Star Wars Story (Credit: ILM & Lucasfilm).
    Tough Little Ships
    The Federation and Rebel Alliance each deployed “tough little ships” (an endearing description Commander William T. Riker [Jonathan Frakes] bestowed upon the U.S.S. Defiant in First Contact) in their respective conflicts, namely the U.S.S. Defiant from Deep Space Nine and the Tantive IV from A New Hope. VisionArt had already built a CG Defiant for the Deep Space Nine series, but ILM upgraded the model with images gathered from the ship’s three-foot physical model. A similar tactic was taken to bring the Tantive IV into the digital realm for Rogue One. “This was the Blockade Runner. This was the most accurate 1:1 reproduction we could possibly have made,” model supervisor Russell Paul declared to Cinefex’s Joe Fordham. “We did an extensive photo reference shoot and photogrammetry re-creation of the miniature. From there, we built it out as accurately as possible.” Speaking of sturdy ships, if you look very closely, you can spot a model of the Millennium Falcon flashing across the background as the U.S.S. Defiant makes an attack run on the Borg cube at the Battle of Sector 001!
    Exploration and Hope
    The in-universe ramifications that materialize from the Battles of Sector 001 and Scarif are monumental. The destruction of the Borg cube compels the Borg Queen to travel back in time in an attempt to vanquish Earth before the Federation can even be formed, but Captain Picard and the Enterprise-E foil the plot and end up helping their 21st century ancestors make “first contact” with another species, the logic-revering Vulcans. The post-Scarif benefits take longer to play out for the Rebel Alliance, but the theft of the Death Star plans eventually leads to the superweapon’s destruction. The Galactic Civil War is far from over, but Scarif is a significant step in the Alliance’s effort to overthrow the Empire.
    The visual effects ILM provided for First Contact and Rogue One contributed significantly to the critical and commercial acclaim both pictures enjoyed, a victory reflecting the relentless dedication, tireless work ethic, and innovative spirit embodied by visual effects supervisor John Knoll and ILM’s entire staff. While being interviewed for The Making of Star Trek: First Contact, actor Patrick Stewart praised ILM’s invaluable influence, emphasizing, “ILM was with us, on this movie, almost every day on set. There is so much that they are involved in.” And, regardless of your personal preferences – phasers or lasers, photon torpedoes or proton torpedoes, warp speed or hyperspace – perhaps Industrial Light & Magic’s ability to infuse excitement into both franchises demonstrates that Star Trek and Star Wars encompass themes that are not competitive, but compatible. After all, what goes together better than exploration and hope?

    Jay Stobie (he/him) is a writer, author, and consultant who has contributed articles to ILM.com, Skysound.com, Star Wars Insider, StarWars.com, Star Trek Explorer, Star Trek Magazine, and StarTrek.com. Jay loves sci-fi, fantasy, and film, and you can learn more about him by visiting JayStobie.com or finding him on Twitter, Instagram, and other social media platforms at @StobiesGalaxy.
NUS researchers 3D print self-powered photonic skin for underwater communication and safety

    Researchers from the National University of Singapore (NUS) have developed a 3D printed, self-powered mechanoluminescent (ML) photonic skin designed for communication and safety monitoring in underwater environments. The stretchable device emits light in response to mechanical deformation, requires no external power source, and remains functional under conditions such as high salinity and extreme temperatures.
    The findings were published in Advanced Materials by Xiaolu Sun, Shaohua Ling, Zhihang Qin, Jinrun Zhou, Quangang Shi, Zhuangjian Liu, and Yu Jun Tan. The research was conducted at NUS and Singapore’s Agency for Science, Technology and Research (A*STAR).
    Schematic of the 3D printed mechanoluminescent photonic skin showing fabrication steps and light emission under deformation. Image via Sun et al., Advanced Materials.
    3D printing stretchable light-emitting skins with auxetic geometry
    The photonic skin was produced using a 3D printing method called direct ink writing (DIW), which involves extruding a specially formulated ink through a fine nozzle to build up complex structures layer by layer. In this case, the ink was made by mixing tiny particles of zinc sulfide doped with copper – a material that glows when stretched – with a flexible silicone rubber. These particles serve as the active ingredient that lights up when the material is deformed, while the silicone acts as a soft, stretchable support structure.
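    As a rough picture of how direct ink writing builds a part, the sketch below generates a serpentine raster toolpath stacked layer by layer; all dimensions and parameter names are illustrative assumptions, not settings from the study.

```python
def serpentine_layers(width_mm=2.4, line_spacing_mm=0.8,
                      layer_height_mm=0.4, n_layers=2):
    """Yield (x, y, z) nozzle waypoints for a simple direct-ink-writing
    raster: back-and-forth lines fill each layer, then z steps up."""
    n_lines = round(width_mm / line_spacing_mm) + 1
    for layer in range(n_layers):
        z = layer * layer_height_mm
        for i in range(n_lines):
            y = i * line_spacing_mm
            # Alternate travel direction so the nozzle never lifts mid-layer.
            x_start, x_end = (0.0, width_mm) if i % 2 == 0 else (width_mm, 0.0)
            yield (x_start, y, z)
            yield (x_end, y, z)

for waypoint in serpentine_layers():
    print(waypoint)
```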
    To make the device more adaptable to movement and curved surfaces, like human skin or underwater equipment, the researchers printed it using auxetic designs. Auxetic structures have a rare mechanical property known as a negative Poisson’s ratio. Unlike most materials, which become thinner when stretched, auxetic designs expand laterally under tension. This makes them ideal for conforming to curved or irregular surfaces, such as joints, flexible robots, or underwater gear, without wrinkling or detaching.
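    To make “negative Poisson’s ratio” concrete, here is a minimal numerical sketch using the textbook definition; the strain values are invented for illustration and are not measurements from the paper.

```python
def poissons_ratio(axial_strain, transverse_strain):
    """Poisson's ratio: nu = -(transverse strain) / (axial strain).
    Most materials thin as they stretch (transverse strain < 0, so nu > 0);
    auxetic structures widen as they stretch (transverse strain > 0, nu < 0)."""
    return -transverse_strain / axial_strain

# Conventional elastomer: stretch 10% along the load, contract ~4.8% sideways.
print(poissons_ratio(0.10, -0.048))  # ~0.48 (positive)

# Auxetic lattice: stretch 10% along the load, *expand* 3% sideways.
print(poissons_ratio(0.10, 0.03))    # -0.3 (negative)
```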
    Encapsulating the printed skin in a clear silicone layer further improves performance by distributing mechanical stress evenly. This prevents localized tearing and ensures that the light emission remains bright and uniform, even after 10,000 cycles of stretching and relaxing. In previous stretchable light-emitting devices, uneven stress often led to dimming, flickering, or early material failure.
    Mechanical and optical performance of encapsulated photonic skin across 10,000 stretch cycles. Image via Sun et al., Advanced Materials.
    Underwater signaling, robotics, and gas leak detection
    The team demonstrated multiple applications for the photonic skin. When integrated into wearable gloves, the skin enabled light-based Morse code communication through simple finger gestures. Bending one or more fingers activated the mechanoluminescence, emitting visible flashes that corresponded to messages such as “UP,” “OK,” or “SOS.” The system remained fully functional when submerged in cold water (~7°C), simulating deep-sea conditions.
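    As a toy illustration of how such light pulses can carry text (a hypothetical sketch, not the NUS team’s software; the decode function and the DOT_MAX and LETTER_GAP timing thresholds are assumptions for the example), a receiver only needs flash durations and the gaps between them to recover Morse code:

    # Illustrative sketch only: decoding glove flashes into Morse text.
    # Timing thresholds are hypothetical, not taken from the paper.
    MORSE = {
        ".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E", "..-.": "F",
        "--.": "G", "....": "H", "..": "I", ".---": "J", "-.-": "K", ".-..": "L",
        "--": "M", "-.": "N", "---": "O", ".--.": "P", "--.-": "Q", ".-.": "R",
        "...": "S", "-": "T", "..-": "U", "...-": "V", ".--": "W", "-..-": "X",
        "-.--": "Y", "--..": "Z",
    }

    DOT_MAX = 0.3     # flashes up to 0.3 s count as dots; longer flashes as dashes
    LETTER_GAP = 1.0  # a pause longer than 1.0 s ends the current letter

    def decode(flashes):
        """flashes: list of (flash_duration_s, gap_after_s) pairs -> decoded string."""
        letters, symbol = [], ""
        for duration, gap in flashes:
            symbol += "." if duration <= DOT_MAX else "-"
            if gap > LETTER_GAP:   # letter boundary reached
                letters.append(MORSE.get(symbol, "?"))
                symbol = ""
        if symbol:                 # flush the final letter
            letters.append(MORSE.get(symbol, "?"))
        return "".join(letters)

    # Three short, three long, then three short flashes -> "SOS"
    sos = [(0.2, 0.2)] * 2 + [(0.2, 1.5)] + [(0.6, 0.2)] * 2 + [(0.6, 1.5)] + [(0.2, 0.2)] * 3
    print(decode(sos))  # prints: SOS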
    In a separate test, the skin was applied to a gas tank mock-up to monitor for leaks. A pinhole defect was covered with the printed skin and sealed using stretchable tape. When pressurized air escaped through the leak, the localized mechanical force caused a bright cyan glow at the exact leak site, offering a passive, electronics-free alternative to conventional gas sensors.
    To test performance on soft and mobile platforms, the researchers also mounted the photonic skin onto a robotic fish. As the robot swam through water tanks at different temperatures (24°C, 50°C, and 7°C), the skin continued to light up reliably, demonstrating its resilience and utility for marine robotics.
    Comparison of printed photonic skin structures with different geometries and their conformability to complex surfaces. Image via Sun et al., Advanced Materials.
    Toward electronics-free underwater communication
    While LEDs and optical fibers are widely used in underwater lighting systems, their dependence on rigid form factors and external power makes them unsuitable for dynamic, flexible applications. In contrast, the stretchable ML photonic skin developed by NUS researchers provides a self-powered, adaptable alternative for diver signaling, robotic inspection, and leak detection, potentially transforming the toolkit for underwater communication and safety systems.
    Future directions include enhanced sensory integration and robotic applications, as the team continues exploring robust photonic systems for extreme environments.
    Photonic skin integrated into gloves for Morse code signaling and applied to robotic fish and gas tanks for underwater safety monitoring. Image via Sun et al., Advanced Materials.
    The rise of 3D printed multifunctional materials
    The development of the photonic skin reflects a broader trend in additive manufacturing toward multifunctional materials: structures that serve more than a structural role. Researchers are increasingly using multimaterial 3D printing to embed sensing, actuation, and signaling functions directly into devices. For example, recent work by SUSTech and City University of Hong Kong on thick-panel origami structures showed how multimaterial printing can enable large, foldable systems with high strength and motion control. These and other advances, including conductive FDM processes and Lithoz’s multimaterial ceramic tools, mark a shift toward printing entire systems. The NUS photonic skin fits squarely within this movement, combining mechanical adaptability, environmental durability, and real-time optical output into a single printable form.
    Read the full article in Advanced Materials
    Subscribe to the 3D Printing Industry newsletter to keep up with the latest 3D printing news.
    You can also follow us on LinkedIn and subscribe to the 3D Printing Industry YouTube channel to access more exclusive content. At 3DPI, our mission is to deliver high-quality journalism, technical insight, and industry intelligence to professionals across the AM ecosystem. Help us shape the future of 3D printing industry news with our 2025 reader survey.
    Featured image shows a schematic of the 3D printed mechanoluminescent photonic skin showing fabrication steps and light emission under deformation. Image via Sun et al., Advanced Materials.
  • Analysis of job vacancies shows earnings boost for AI skills

    Looker_Studio - stock.adobe.com

    News

    Analysis of job vacancies shows earnings boost for AI skills
    Even when parts of a job are being automated, those who know how to work with artificial intelligence tools can expect higher salaries

    By Cliff Saran, Managing Editor
    Published: 03 Jun 2025 7:00

    UK workers with skills in artificial intelligence (AI) appear to earn 11% more on average, even in sectors where AI is automating parts of their existing job functions.
    Workers in sectors exposed to AI, where the technology can be deployed for some tasks, are more productive and command higher salaries, according to PwC’s 2025 Global AI Jobs Barometer. The study, which was based on an analysis of almost one billion job adverts, found that wages are rising twice as fast in industries most exposed to AI.
    From a skills perspective, PwC reported that AI is changing the skills required of job applicants. According to PwC, to succeed in the workplace, candidates are more likely to need experience in using AI tools and the ability to demonstrate critical thinking and collaboration.
    Phillippa O’Connor, chief people officer at PwC UK, noted that while degrees are still important for many jobs, a reduction in degree requirements suggests employers are looking at a broader range of measures to assess skills and potential.
    In occupations most exposed to AI, PwC noted that the skills sought by employers are changing 59% faster than in occupations least exposed to AI. “AI is reshaping the jobs market – lowering barriers to entry in some areas, while raising the bar on the skills required in others,” O’Connor added.
    Those with the right AI skills are being rewarded with higher salaries. In fact, PwC found that wages are growing twice as fast in AI-exposed industries. This includes jobs that are classed as “automatable”, which means they contain some tasks that can readily be automated. The highest premiums are attached to occupations requiring AI skills, with an average premium in 2024 of 11% for UK workers in these roles.  


    PwC’s analysis shows that sectors exposed to AI experience three times higher growth in the revenue generated by each employee. It also reported that growth in revenue per employee for AI-exposed industries surged when large language models (LLMs) such as generative AI (GenAI) became mainstream.
    Revenue growth per employee has nearly quadrupled in industries most exposed to AI, such as software, rising from 7% between 2018 and 2022, to 27% between 2018 and 2024. In contrast, revenue growth per employee in industries least exposed to AI, such as mining and hospitality, fell slightly, from 10% between 2018 and 2022, to 9% between 2018 and 2024.
    However, since 2018, job postings for occupations with greater exposure to AI have grown at a slower pace than those with lower exposure – and this gap is widening.
    Umang Paw, chief technology officer (CTO) at PwC UK, said: “There are still many unknowns about AI’s potential. AI can provide stardust to those ready to adapt, but risks leaving others behind.”
    Paw believes there needs to be a concerted effort to expand access to technology and training to ensure the benefits of AI are widely shared.
    “In the intelligence age, the fusion of AI with technologies like real-time data analytics – and businesses broadening their products and services – will create new industries and fresh job opportunities,” Paw added.

    Read more about AI skills

    AWS addresses the skills barrier holding back enterprises: The AWS Summit in London saw the public cloud giant appoint itself to take on the task of skilling up hundreds of thousands of UK people in using AI technologies.
    Could generative AI help to fill the skills gap in engineering: The role of artificial intelligence and machine learning in society continues to be hotly debated as the tools promise to revolutionise our lives, but how will they affect the engineering sector?

  • Get A 3D Printer Capable Of Making 7-Inch Resin Figures For Only $160 At Amazon

    Anycubic Photon Mono 4 - 3D Printer for 6" Resin Models $160 (was $250) | Get free bottle of resin. Get deal at Amazon. Add freebie to your cart. If you're curious about 3D printers but don't want to spend upwards of $1,000 (or more) for a fancy name-brand model, Amazon has a few enticing offers for Prime members to check out. For a limited time, members can get an entry-level 3D printer for as low as $160. For those who simply want to dabble with the hobby by printing small monochrome figures and accessories, the deals below are worth considering. Continue Reading at GameSpot
  • Bioprinted organs ‘10–15 years away,’ says startup regenerating dog skin

    Human organs could be bioprinted for transplants within 10 years, according to Lithuanian startup Vital3D. But before reaching human hearts and kidneys, the company is starting with something simpler: regenerating dog skin.
    Based in Vilnius, Vital3D is already bioprinting functional tissue constructs. Using a proprietary laser system, the startup deposits living cells and biomaterials in precise 3D patterns. The structures mimic natural biological systems — and could one day form entire organs tailored to a patient’s unique anatomy.
    That mission is both professional and personal for CEO Vidmantas Šakalys. After losing a mentor to urinary cancer, he set out to develop 3D-printed kidneys that could save others from the same fate. But before reaching that goal, the company needs a commercial product to fund the long road ahead.
    That product is VitalHeal — the first-ever bioprinted wound patch for pets. Dogs are the initial target, with human applications slated to follow.
    Šakalys calls the patch “a first step” towards bioprinted kidneys. “Printing organs for transplantation is a really challenging task,” he tells TNW after a tour of his lab. “It’s 10 or 15 years away from now, and as a commercial entity, we need to have commercially available products earlier. So we start with simpler products and then move into more difficult ones.”

    The path may be simpler, but the technology is anything but.
    Bioprinting goes to the vet
    VitalHeal is embedded with growth factors that accelerate skin regeneration.
    Across the patch’s surface, tiny pores about one-fifth the width of a human hair enable air circulation while blocking bacteria. Once applied, VitalHeal seals the wound and maintains constant pressure while the growth factors get to work.
    According to Vital3D, the patch can reduce healing time from 10–12 weeks to just four to six. Infection risk can drop from 30% to under 10%, vet visits from eight to two or three, and surgery times by half.
    Current treatments, the startup argues, can be costly, ineffective, and distressing for animals. VitalHeal is designed to provide a safer, faster, and cheaper alternative.
    Vital3D says the market is big — and the data backs up the claim.
    Vital3D’s FemtoBrush system promises high-speed and high-precision bioprinting. Credit: Vital3D
    Commercial prospects
    The global animal wound care market is projected to grow from $1.4bn (€1.24bn) in 2024 to $2.1bn (€1.87bn) by 2030, fuelled by rising pet ownership and demand for advanced veterinary care. Vital3D forecasts an initial serviceable addressable market (ISAM) of €76.5mn across the EU and US. By 2027-2028, the company aims to sell 100,000 units.
    Dogs are a logical starting point. Their size, activity levels, and surgeries raise their risk of wounds. Around half of dogs over age 10 are also affected by cancer, further increasing demand for effective wound care.
    At €300 retail (or €150 wholesale), the patches won’t be cheap. But Vital3D claims they could slash treatment costs for pet owners from €3,000 to €1,500. Production at scale is expected to bring prices down further.
    After strong results in rats, trials on dogs will begin this summer in clinics in Lithuania and the UK — Vital3D’s pilot markets.
    If all goes to plan, a non-degradable patch will launch in Europe next year. The company will then progress to a biodegradable version.
    From there, the company plans to adapt the tech for humans. The initial focus will be wound care for people with diabetes, 25% of whom suffer from impaired healing. Future versions could support burn victims, injured soldiers, and others in need of advanced skin restoration.
    Freshly printed fluids in a bio-ink droplet. Credit: Vital3D
    Vital3D is also exploring other medical frontiers. In partnership with Lithuania’s National Cancer Institute, the startup is building organoids — mini versions of organs — for cancer drug testing. Another project involves bioprinted stents, which are showing promise in early animal trials. But all these efforts serve a bigger mission.
    “Our final target is to move to organ printing for transplants,” says Šakalys.
    Bioprinting organs
    A computer engineer by training, Šakalys has worked with photonic innovations for over 10 years. 
    At his previous startup, Femtika, he harnessed lasers to produce tiny components for microelectronics, medical devices, and aerospace engineering. He realised they could also enable precise bioprinting. 
    In 2021, he co-founded Vital3D to advance the concept. The company’s printing system directs light towards a photosensitive bio-ink. The material is hardened and formed into a structure, with living cells and biomaterials moulded into intricate 3D patterns.
    The shape of the laser beam can be adjusted to replicate complex biological forms — potentially even entire organs.
    But there are still major scientific hurdles to overcome. One is vascularisation, the formation of blood vessels in intricate networks. Another is the diverse variety of cell types in many organs. Replicating these sophisticated natural structures will be challenging.
    “First of all, we want to solve the vasculature. Then we will go into the differentiation of cells,” Šakalys says.
    “Our target is to see if we can print from fewer cells, but try to differentiate them while printing into different types of cells.” 
    If successful, Vital3D could help ease the global shortage of transplantable organs. Fewer than 10% of patients who need a transplant receive one each year, according to the World Health Organisation. In the US alone, around 90,000 people are waiting for a kidney — a shortfall that’s fuelling a thriving black market.
    Šakalys believes that could be just the start. He envisions bioprinting not just creating organs, but also advancing a new era of personalised medicine.
    “It can bring a lot of benefits to society,” he says. “Not just bioprinting for transplants, but also tissue engineering as well.”
    Want to discover the next big thing in tech? Then take a trip to TNW Conference, where thousands of founders, investors, and corporate innovators will share their ideas. The event takes place on June 19–20 in Amsterdam and tickets are on sale now. Use the code TNWXMEDIA2025 at the checkout to get 30% off.

    Story by Thomas Macaulay, Managing editor
    Thomas is the managing editor of TNW. He leads our coverage of European tech and oversees our talented team of writers. Away from work, he enjoys playing chess (badly) and the guitar (even worse).

    Get the TNW newsletter
    Get the most important tech news in your inbox each week.

  • Rethinking secure comms: Are encrypted platforms still enough?

    Maksim Kabakou - Fotolia

    Opinion

    Rethinking secure comms: Are encrypted platforms still enough?
    A leak of information on American military operations caused a major political incident in March 2025. The Security Think Tank considers what CISOs can learn from this potentially fatal error.

    By Russell Auld, PAC
    Published: 30 May 2025

    In today’s constantly changing cyber landscape, answering the question “what does best practice now look like?” is far from simple. While emerging technologies and AI-driven security tools continue to make headlines and dominate discussion, the real pivot point for modern security lies not just in technological advancements but in context, people and process. 
    The recent Signal messaging platform incident, in which a journalist was mistakenly added to a group chat, exposing sensitive information, serves as a timely reminder that even the most secure platform is vulnerable to human error. The platform wasn’t breached by malicious actors, no zero-day exploit was utilised, and the end-to-end encryption didn’t fail; the shortfall here was likely poorly defined acceptable use policies and controls, alongside a lack of training and awareness.
    This incident, if nothing else, highlights a critical truth within cyber security – security tools are only as good as the environment, policies, and people operating them. While it’s tempting to focus on implementing more technical controls to prevent this from happening again, the reality is that many incidents result from a failure of process, governance, or awareness. 
    What does good security look like today? Some key areas include:

    Context over features, for example, whether Signal should have been used in the first place;
    There is no such thing as a silver bullet approach to protect your organisation;
    The importance of your team’s training and education;
    Reviewing and adapting continuously. 

    Security must be context-driven. Business leaders need to consider what their key area of concern is – reputational risk, state-sponsored surveillance, insider threats, or regulatory compliance. Each threat vector requires a different set of controls. For example, an organisation handling official-sensitive or classified data will require not just encryption, but assured platforms, robust access controls, identity validation, and clear auditability.
    Conversely, a commercial enterprise concerned about intellectual property leakage might strategically focus on user training, data loss prevention, and device control. Best practice isn’t picking the platform with the cheapest price tag or the most commonly used; it’s selecting a platform that supports the controls and policies required based on a deep understanding of your specific risks and use cases.  
    There is no one-size-fits-all solution for your organisation. The security product landscape is filled with vendors offering overlapping solutions, each claiming to provide more protection than the next. And although some do offer better protection, features or functionality, even the best tool will fail if used incorrectly or implemented without a clear understanding of its limitations. Worse, organisations may gain a false sense of security by relying solely on a supplier’s claims. The priority must be to assess your organisation’s internal capability to manage and operate these tools effectively. Reassessing the threat landscape and taking advantage of the wealth of available threat intelligence tools helps ensure you have the right skills, policies, and processes in place. 

    The Computer Weekly Security Think Tank on Signalgate

    Todd Thiemann, ESG: Signalgate: Learnings for CISOs securing enterprise data.
    Javvad Malik, KnowBe4: What CISOs can learn from Signalgate.
    Aditya Sood, Aryaka: Unspoken risk: Human factors undermine trusted platforms.
    Raihan Islam, defineXTEND: When leaders ignore cyber security rules, the whole system weakens.
    Elliot Wilkes, ACDS: Security vs. usability: Why rogue corporate comms are still an issue.
    Mike Gillespie and Ellie Hurst, Advent IM: Signalgate is a signal to revisit security onboarding and training.

    Best practice in 2025 means recognising that many security incidents stem from simple human mistakes: misaddressed emails, poor password hygiene, or sharing access with the wrong person. Investing in continual staff education, security awareness and skills gap analysis is essential to risk reduction.
    This doesn’t mean showing an annual 10-minute cyber awareness video; you need to identify what will motivate your people and run security campaigns that capture their attention and change behaviour. For example, you could use engaging nudges such as mandatory phishing alerts on laptops, interactive lock-screen campaigns, and quizzes on key policies such as acceptable use and password complexity. Incorporate gamification elements, such as rewards for completing quizzes, and timely reminders to reinforce security best practice and foster a culture of vigilance.
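    As a toy illustration of the gamified-quiz nudge, the mechanism can be very small indeed; the questions and point values below are invented for the example.

```python
# Toy gamified policy quiz: questions and scoring invented for illustration.
QUIZ = [
    ("Should you re-use your network password for a SaaS tool? (y/n)", "n"),
    ("Is it OK to forward work documents to personal email? (y/n)", "n"),
]

def run_quiz() -> int:
    score = 0
    for question, correct in QUIZ:
        answer = input(question + " ").strip().lower()
        score += 10 if answer == correct else 0
    print(f"You earned {score} points towards this month's leaderboard.")
    return score

if __name__ == "__main__":
    run_quiz()
```

    The leaderboard framing matters more than the code: the reward loop is what turns a compliance chore into something people voluntarily repeat.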
    These campaigns should mix engaging communications with training the workforce sees as relevant, while also meeting role-specific needs. Your developers need to understand secure coding practices, while front-line operations staff may need training in detecting phishing or social engineering attacks. This helps create a stronger security culture within the organisation and enhances your overall security posture.
    Finally, what’s considered “best practice” today may be outdated by tomorrow. Threats are constantly evolving, regulations change, and your own business operations and strategy may shift. Adopting a cyber security lifecycle that encompasses people, process and technology, supported by continuous improvement activities and a clear vision from senior stakeholders, will be vital. Conducting regular security reviews, red-teaming, and reassessing governance and policies will help ensure that defences remain relevant and proportionate to your organisation’s threats.
    Encryption, however, still matters. As do SSO, MFA, secure coding practices and access controls. But the real cornerstone of best practice in today’s cyber world is understanding why you need them and how they will be used in practice. Securing your organisation is no longer just about picking the best platform; it’s about creating a holistic view that incorporates people, process and technology. And that may be the most secure approach, after all.
    Russell Auld is a digital trust and cyber security expert at PA Consulting

  • Fueling seamless AI at scale

    From large language models (LLMs) to reasoning agents, today’s AI tools bring unprecedented computational demands. Trillion-parameter models, workloads running on-device, and swarms of agents collaborating to complete tasks all require a new paradigm of computing to become truly seamless and ubiquitous.

    First, technical progress in hardware and silicon design is critical to pushing the boundaries of compute. Second, advances in machine learning (ML) allow AI systems to achieve increased efficiency with smaller computational demands. Finally, the integration, orchestration, and adoption of AI into applications, devices, and systems is crucial to delivering tangible impact and value.

    Silicon’s mid-life crisis

    AI has evolved from classical ML to deep learning to generative AI. The most recent chapter, which took AI mainstream, hinges on two phases, training and inference, that are data- and energy-intensive in terms of computation, data movement, and cooling. At the same time, Moore’s Law, the observation that the number of transistors on a chip doubles roughly every two years, is reaching a physical and economic plateau.

    For the last 40 years, silicon chips and digital technology have nudged each other forward—every step ahead in processing capability frees the imagination of innovators to envision new products, which require yet more power to run. That is happening at light speed in the AI age.

    As models become more readily available, deployment at scale puts the spotlight on inference and the application of trained models for everyday use cases. This transition requires the appropriate hardware to handle inference tasks efficiently. Central processing units (CPUs) have managed general computing tasks for decades, but the broad adoption of ML introduced computational demands that stretched the capabilities of traditional CPUs. This has led to the adoption of graphics processing units (GPUs) and other accelerator chips for training complex neural networks, due to their parallel execution capabilities and high memory bandwidth, which allow large-scale mathematical operations to be processed efficiently.

    But CPUs are already the most widely deployed and can be companions to processors like GPUs and tensor processing units (TPUs). AI developers are also hesitant to adapt software to fit specialized or bespoke hardware, and they favor the consistency and ubiquity of CPUs. Chip designers are unlocking performance gains through optimized software tooling, adding novel processing features and data types specifically to serve ML workloads, integrating specialized units and accelerators, and advancing silicon chip innovations, including custom silicon. AI itself is a helpful aid for chip design, creating a positive feedback loop in which AI helps optimize the chips that it needs to run. These enhancements and strong software support mean modern CPUs are a good choice to handle a range of inference tasks.
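    As a minimal sketch of what CPU-side inference tuning can look like in practice (the model and sizes below are arbitrary; only standard PyTorch calls are used), pinning thread counts and running under inference mode is often the first step:

```python
import time
import torch
import torch.nn as nn

# Arbitrary toy model; the point is the CPU-side inference settings.
model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 256))
model.eval()

torch.set_num_threads(8)  # match physical cores to avoid oversubscription

x = torch.randn(64, 1024)
with torch.inference_mode():  # disables autograd bookkeeping for speed
    start = time.perf_counter()
    y = model(x)
    print(f"CPU inference: {(time.perf_counter() - start) * 1e3:.2f} ms, "
          f"output {tuple(y.shape)}")
```

    Settings like these, plus the vendor-optimized kernels the article alludes to, are what make the CPU a credible inference target rather than a fallback.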

    Beyond silicon-based processors, disruptive technologies are emerging to address growing AI compute and data demands. The unicorn start-up Lightmatter, for instance, introduced photonic computing solutions that use light for data transmission to generate significant improvements in speed and energy efficiency. Quantum computing represents another promising area in AI hardware. While still years or even decades away, the integration of quantum computing with AI could further transform fields like drug discovery and genomics.

    Understanding models and paradigms

    The developments in ML theories and network architectures have significantly enhanced the efficiency and capabilities of AI models. Today, the industry is moving from monolithic models to agent-based systems characterized by smaller, specialized models that work together to complete tasks more efficiently at the edge—on devices like smartphones or modern vehicles. This allows them to extract increased performance gains, like faster model response times, from the same or even less compute.

    Researchers have developed techniques, including few-shot learning, to train AI models using smaller datasets and fewer training iterations. AI systems can learn new tasks from a limited number of examples, reducing dependency on large datasets and lowering energy demands. Optimization techniques like quantization, which lowers memory requirements by selectively reducing numerical precision, are helping shrink model sizes without sacrificing performance.
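    A minimal sketch of the quantization idea, using PyTorch’s dynamic quantization on an arbitrary toy model (the layer sizes are invented for illustration):

```python
import io
import torch
import torch.nn as nn

# Toy model; sizes are arbitrary.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))

# Store Linear weights as int8; activations are quantized on the fly.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def serialized_size(m: nn.Module) -> int:
    """Size of the saved state dict in bytes."""
    buf = io.BytesIO()
    torch.save(m.state_dict(), buf)
    return buf.getbuffer().nbytes

print(f"fp32: {serialized_size(model):,} bytes")
print(f"int8: {serialized_size(quantized):,} bytes")  # weights roughly 4x smaller
```

    Moving weights from 32-bit floats to 8-bit integers cuts their footprint by about a factor of four, which is exactly the memory saving the paragraph describes.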

    New system architectures, like retrieval-augmented generation (RAG), have streamlined data access during both training and inference to reduce computational costs and overhead. DeepSeek’s R1, an open source LLM, is a compelling example of how more output can be extracted from the same hardware. By applying reinforcement learning techniques in novel ways, R1 achieves advanced reasoning capabilities while using far fewer computational resources in some contexts.
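    A minimal RAG sketch, under stated assumptions: the corpus, the term-overlap scoring function, and the prompt format below are invented for illustration, and a real system would use learned embeddings and an actual LLM call rather than the stub at the end.

```python
# Minimal RAG sketch: retrieve the most relevant passages, then prepend them
# to the prompt. Term-overlap scoring stands in for learned embeddings.
CORPUS = [
    "Quantization reduces model size by lowering numerical precision.",
    "Retrieval-augmented generation fetches documents at query time.",
    "Chiplets combine multiple dies in one package.",
]

def score(query: str, doc: str) -> int:
    """Toy relevance score: number of shared lowercase terms."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, k: int = 2) -> list[str]:
    return sorted(CORPUS, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("How does retrieval-augmented generation work?"))
# A real system would now send this prompt to an LLM for generation.
```

    The computational saving comes from keeping the model itself small and fetching knowledge at query time, instead of baking every fact into the weights during training.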

    The integration of heterogeneous computing architectures, which combine various processing units like CPUs, GPUs, and specialized accelerators, has further optimized AI model performance. This approach allows for the efficient distribution of workloads across different hardware components to optimize computational throughput and energy efficiency based on the use case.
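    A toy sketch of workload-aware placement on a heterogeneous system (the cost proxy and threshold are arbitrary; only standard PyTorch device handling is used):

```python
import torch

# Use an accelerator when one is present; otherwise everything stays on CPU.
gpu = torch.device("cuda") if torch.cuda.is_available() else None
cpu = torch.device("cpu")

def place(tensor: torch.Tensor, flops_threshold: int = 10**8) -> torch.Tensor:
    """Route large workloads to the accelerator, small ones to the CPU."""
    estimated_flops = tensor.numel() * tensor.shape[-1]  # rough matmul cost proxy
    target = gpu if (gpu is not None and estimated_flops > flops_threshold) else cpu
    return tensor.to(target)

small = place(torch.randn(32, 64))      # cheap op: stays on CPU
large = place(torch.randn(4096, 4096))  # expensive op: offloaded if possible
print(small.device, large.device)
```

    Production schedulers are far more sophisticated, but the principle is the same: match each workload to the processing unit where its throughput-per-watt is best.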

    Orchestrating AI

    As AI becomes an ambient capability humming in the background of many tasks and workflows, agents are taking charge and making decisions in real-world scenarios. These range from customer support to edge use cases, where multiple agents coordinate and handle localized tasks across devices.

    With AI increasingly used in daily life, the role of user experiences becomes critical for mass adoption. Features like predictive text in touch keyboards and adaptive gearboxes in vehicles offer glimpses of AI as a vital enabler, improving technology interactions for users.

    Edge processing is also accelerating the diffusion of AI into everyday applications, bringing computational capabilities closer to the source of data generation. Smart cameras, autonomous vehicles, and wearable technology now process information locally to reduce latency and improve efficiency. Advances in CPU design and energy-efficient chips have made it feasible to perform complex AI tasks on devices with limited power resources. This shift toward heterogeneous compute enhances the development of ambient intelligence, where interconnected devices create responsive environments that adapt to user needs.
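    As a hedged sketch of how a model might be packaged for on-device use (the model is a toy with arbitrary sizes; `torch.onnx.export` is a standard PyTorch call, though a real edge deployment would also involve a lightweight runtime on the device itself):

```python
import torch
import torch.nn as nn

# Toy model standing in for an on-device workload (sizes are arbitrary).
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3),  # 3x32x32 -> 8x30x30
    nn.ReLU(),
    nn.Flatten(),                    # -> 7200 features
    nn.Linear(8 * 30 * 30, 10),
)
model.eval()

dummy = torch.randn(1, 3, 32, 32)

# Export a portable graph that an on-device runtime can execute locally,
# avoiding a round trip to the cloud for every inference.
torch.onnx.export(model, dummy, "edge_model.onnx", opset_version=17)
print("exported edge_model.onnx")
```

    Keeping inference local is what delivers the latency and efficiency gains the paragraph describes, since no data has to leave the device.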

    Seamless AI naturally requires common standards, frameworks, and platforms to bring the industry together. Contemporary AI also brings new risks. For instance, by adding more complex software and personalized experiences to consumer devices, it expands the attack surface for hackers, requiring stronger security at both the software and silicon levels, from cryptographic safeguards to a transformed trust model for compute environments.

    More than 70% of respondents to a 2024 Darktrace survey reported that AI-powered cyber threats significantly affect their organizations, while 60% said their organizations are not adequately prepared to defend against AI-powered attacks.

    Collaboration is essential to forging common frameworks. Universities contribute foundational research, companies apply findings to develop practical solutions, and governments establish policies for ethical and responsible deployment. Organizations like Anthropic are setting industry standards by introducing frameworks, such as the Model Context Protocol, to unify the way developers connect AI systems with data. Arm is another leader in driving standards-based and open source initiatives, including ecosystem development to accelerate and harmonize the chiplet market, where chips are stacked together through common frameworks and standards. Arm also helps optimize open source AI frameworks and models for inference on the Arm compute platform, without needing customized tuning. 

    How far AI goes toward becoming a general-purpose technology, like electricity or semiconductors, is being shaped by technical decisions taken today. Hardware-agnostic platforms, standards-based approaches, and continued incremental improvements to critical workhorses like CPUs all help deliver the promise of AI as a seamless and silent capability for individuals and businesses alike. Open source contributions also allow a broader range of stakeholders to participate in AI advances. By sharing tools and knowledge, the community can cultivate innovation and help ensure that the benefits of AI are accessible to everyone, everywhere.

    Learn more about Arm’s approach to enabling AI everywhere.

    This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

    This content was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.
  • A photon caught in two places at once could destroy the multiverse

    Is it time to say goodbye to the multiverse? SCIENCE PHOTO LIBRARY/Getty Images
    An advanced version of the famous double-slit experiment has directly measured a single photon in two places at once – or at least, that’s the claim made by a team of physicists who say these results could destroy the concept of a multiverse. This interpretation remains highly contested, however, with other physicists arguing that the experiment can’t really tell us anything new about the nature of reality.
    The double-slit experiment, first performed in 1801, has played a key role in the development of quantum mechanics. It shows…