• The Last of Us – Season 2: Alex Wang (Production VFX Supervisor) & Fiona Campbell Westgate (Production VFX Producer)

    After detailing the VFX work on The Last of Us Season 1 in 2023, Alex Wang returns to reflect on how the scope and complexity have evolved in Season 2.
    With close to 30 years of experience in the visual effects industry, Fiona Campbell Westgate has contributed to major productions such as Ghost in the Shell, Avatar: The Way of Water, Ant-Man and the Wasp: Quantumania, and Nyad. Her work on Nyad earned her a VES Award for Outstanding Supporting Visual Effects in a Photoreal Feature.
    Collaboration with Craig Mazin and Neil Druckmann is key to shaping the visual universe of The Last of Us. Can you share with us how you work with them and how they influence the visual direction of the series?
    Alex Wang // Craig visualizes the shot or scene before putting words on the page. His writing is always exceptionally detailed and descriptive, ultimately helping us to imagine the shot. Of course, no one understands The Last of Us better than Neil, who knows all aspects of the lore very well. He’s done much research and design work with the Naughty Dog team, so he gives us good guidance regarding creature and environment designs. I always try to begin with concept art to get the ball rolling with Craig and Neil’s ideas. This season, we collaborated with Chromatic Studios for concept art. They also contributed to the games, so I felt that continuity was beneficial for our show.
    Fiona Campbell Westgate // From the outset, it was clear that collaborating with Craig would be an exceptional experience. Early meetings revealed just how personable and invested Craig is. He works closely with every department to ensure that each episode is done to the highest level. Craig places unwavering trust in our VFX Supervisor, Alex Wang. They have an understanding between them that lends to an exceptional partnership. As the VFX Producer, I know how vital the dynamic between the Showrunner and VFX Supervisor is; working with these two has made for one of the best professional experiences of my career. 
    Photograph by Liane Hentscher/HBO
    How has your collaboration with Craig evolved between the first and second seasons? Were there any adjustments in the visual approach or narrative techniques you made this season?
    Alex Wang // Since everything was new in Season 1, we dedicated a lot of time and effort to exploring the show’s visual language, and we all learned a great deal about what worked and what didn’t for the show. In my initial conversations with Craig about Season 2, it was clear that he wanted to expand the show’s scope by utilizing what we established and learned in Season 1. He felt significantly more at ease fully committing to using VFX to help tell the story this season.
    The first season involved multiple VFX studios to handle the complexity of the effects. How did you divide the work among different studios for the second season?
    Alex Wang // Most of the vendors this season were also in Season 1, so we already had a shorthand. The VFX Producer, Fiona Campbell Westgate, and I work closely together to decide how to divide the work among our vendors. The type of work needs to be well-suited for the vendor and fit into our budget and schedule. We were extremely fortunate to have the vendors we did this season. I want to take this opportunity to thank Weta FX, DNEG, RISE, Distillery VFX, Storm Studios, Important Looking Pirates, Blackbird, Wylie Co., RVX, and VDK. We also had ILM for concept art and Digital Domain for previs.
    Fiona Campbell Westgate // Alex Wang and I were very aware of the tight delivery schedule, which added to the challenge of distributing the workload. We planned the work based on each studio’s capabilities, and tried not to burden them with back-to-back episodes wherever possible. Fortunately, there was a shorthand with vendors from Season One, who were well-acquainted with the process and the quality of work the show required.

    The town of Jackson is a key location in The Last of Us. Could you explain how you approached creating and expanding this environment for the second season?
    Alex Wang // Since Season 1, this show has created incredible sets. However, the Jackson town set build is by far the most impressive in terms of scope. They constructed an 822 ft x 400 ft set in Minaty Bay that resembled a real town! I had early discussions with Production Designer Don MacAulay and his team about where they should concentrate their efforts and where VFX would make the most sense to take over. They focused on developing the town’s main street, where we believed most scenes would occur. There is a big reveal of Jackson in the first episode after Ellie comes out of the barn. Distillery VFX was responsible for the town’s extension, which appears seamless because the team took great pride in researching and ensuring the architecture aligned with the set while staying true to the tone of Jackson, Wyoming.
    Fiona Campbell Westgate // An impressive set was constructed in Minaty Bay, which served as the foundation for VFX to build upon. There is a beautiful establishing shot of Jackson in Episode 1 that was completed by Distillery, showing a safe and almost normal setting as Season Two starts. Across the episodes, Jackson set extensions were completed by our partners at RISE and Weta. Each had a different phase of Jackson to create, from almost idyllic to a town immersed in Battle. 
    What challenges did you face filming Jackson on both real and virtual sets? Was there a particular fusion between visual effects and live-action shots to make it feel realistic?
    Alex Wang // I always advocate for building exterior sets outdoors to take advantage of natural light. However, the drawback is that we cannot control the weather and lighting when filming over several days across two units. In Episode 2, there’s supposed to be a winter storm in Jackson, so maintaining consistency within the episode was essential. On sunny and rainy days, we used cranes to lift large 30x60ft screens to block the sun or rain. It was impossible to shield the entire set from the rain or sun, so we prioritized protecting the actors from sunlight or rain. Thus, you can imagine there was extensive weather cleanup for the episode to ensure consistency within the sequences.
    Fiona Campbell Westgate // We were fortunate that production built a large-scale Jackson set. It provided a base for the full CG Jackson aerial shots and CG set extensions. The weather conditions at Minaty Bay presented a challenge during the filming of the end of the Battle sequence in Episode 2: periods of bright sunshine alternated with rainfall. In addition to the obvious visual effects work, it became necessary to replace the ground cover.
    Photograph by Liane Hentscher/HBO
    The attack on Jackson by the horde of infected in season 2 is a very intense moment. How did you approach the visual effects for this sequence? What techniques did you use to make the scale of the attack feel as impressive as it did?
    Alex Wang // We knew this would be a very complex sequence to shoot, and for it to be successful, we needed to start planning with the HODs from the very beginning. We began previs during prep with Weta FX and the episode’s director, Mark Mylod. The previs helped us understand Mark and the showrunner’s vision. This then served as a blueprint for all departments to follow, and in many instances, we filmed the previs.
    Fiona Campbell Westgate // The sheer size of the CG Infected Horde sets the tone for the scale of the Battle. It’s an intimidating moment when they are revealed through the blowing snow. The addition of CG explosions and atmospheric effects added to the scale of the sequence.

    Can you give us an insight into the technical challenges of capturing the infected horde? How much of the effect was done using CGI, and how much was achieved with practical effects?
    Alex Wang // Starting with a detailed previs that Mark and Craig approved was essential for planning the horde. We understood that we would never have enough stunt performers to fill a horde, nor could they carry out some stunts that would be too dangerous. I reviewed the previs with Stunt Coordinator Marny Eng numerous times to decide the best placements for her team’s stunt performers. We also collaborated with Barrie Gower from the Prosthetics team to determine the most effective allocation of his team’s efforts. Stunt performers positioned closest to the camera would receive the full prosthetic treatment, which can take hours.
    Weta FX was responsible for the incredible CG Infected horde work in the Jackson Battle. They have been a creative partner with HBO’s The Last of Us since Season 1, so they were brought on early for Season 2. I began discussions with Weta’s VFX supervisor, Nick Epstein, about how we could tackle these complex horde shots very early during the shoot.
    Typically, repetition in CG crowd scenes can be acceptable, such as armies with soldiers dressed in the same uniform or armour. However, for our Infected horde, Craig wanted to convey that the Infected didn’t come off an assembly line or all shop at the same clothing department store. Any repetition would feel artificial. These Infected were once civilians with families, or they were groups of raiders. We needed complex variations in height, body size, age, clothing, and hair. We built our base library of Infected, and then Nick and the Weta FX team developed a “mix and match” system, allowing the Infected to wear any costume and hair groom. A procedural texturing system was also developed for costumes, providing even greater variation.
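A “mix and match” assembly like the one described can be sketched as seeded random selection over independent attribute pools, so that body, costume, groom, and procedural texture offsets combine freely. The pool contents, field names, and ranges below are purely illustrative assumptions, not Weta FX’s actual system.

```python
import random

# Illustrative attribute pools; a production library would hold
# hundreds of authored assets per category.
BODIES = ["short_slim", "short_heavy", "tall_slim", "tall_heavy", "teen"]
COSTUMES = ["flannel_shirt", "rain_jacket", "hoodie", "work_overalls"]
GROOMS = ["short_hair", "long_matted", "balding", "buzzcut"]

def build_infected(agent_id: int, seed: int = 2003) -> dict:
    """Deterministically assemble one background Infected from the pools.

    Seeding per agent keeps the crowd stable across re-renders while
    still varying neighbours from one another.
    """
    rng = random.Random(seed * 1_000_003 + agent_id)
    return {
        "id": agent_id,
        "body": rng.choice(BODIES),
        "costume": rng.choice(COSTUMES),
        "groom": rng.choice(GROOMS),
        # Stand-in for procedural costume texturing: per-agent hue and
        # wear offsets so identical costumes never read as identical.
        "costume_hue_shift": rng.uniform(-0.05, 0.05),
        "wear_amount": rng.uniform(0.2, 1.0),
    }

horde = [build_infected(i) for i in range(500)]
```

Because each agent is seeded from its ID, the same horde is reproduced on every run, which matters when a shot is iterated over many versions.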
    The most crucial aspect of the Infected horde was their motion. We had numerous shots cutting back-to-back with practical Infected, as well as shots where our CG Infected ran right alongside a stunt horde. It was incredibly unforgiving! Weta FX’s animation supervisor from Season 1, Dennis Yoo, returned for Season 2 to meet the challenge. Having been part of the first season, Dennis understood the expectations of Craig and Neil. As with model repetition, motion repetition within a horde was relatively easy to perceive, especially if the Infected were all running toward the same target. It was essential to enhance the details of their performances with nuances such as tripping and falling, getting back up, and trampling over each other. There also needed to be a difference in the Infected’s running speed. To ensure we had enough complexity within the horde, Dennis motion-captured almost 600 unique motion cycles.
    We had over a hundred shots in Episode 2 that required the CG Infected horde.
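The cycle variation described above can be sketched as a per-agent assignment of a mocap cycle, a playback rate, and a phase offset; varying all three keeps agents running at the same target from locking into step. The cycle length and rate range here are assumptions for illustration, not production values.

```python
import random

N_CYCLES = 600        # roughly the number of unique mocap cycles cited
CYCLE_FRAMES = 120    # assumed length of one run cycle, in frames

def assign_motion(agent_id: int, seed: int = 7) -> dict:
    """Pick a cycle, playback rate, and phase offset for one agent."""
    rng = random.Random(seed * 1_000_003 + agent_id)
    return {
        "cycle": rng.randrange(N_CYCLES),
        "rate": rng.uniform(0.85, 1.25),        # some Infected run faster
        "phase": rng.uniform(0, CYCLE_FRAMES),  # desynchronise footfalls
    }

def sample_frame(motion: dict, frame: int) -> float:
    """Map a global frame number to a local frame within the agent's cycle."""
    return (motion["phase"] + frame * motion["rate"]) % CYCLE_FRAMES
```

On top of a scheme like this, hand-placed events (trips, falls, trampling) would be layered where the camera lingers.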
    Fiona Campbell Westgate // Nick Epstein, Weta VFX Supervisor, and Dennis Yoo, Weta Animation Supervisor, were faced with adding hero, close-up Horde that had to integrate with practical stunt performers. They achieved this through over 60 motion capture sessions, running the results through a deformation system they developed. Every detail was applied to allow for a seamless blend with our practical stunt performances. The Weta team created a custom costume and hair system that provided individual looks to the CG Infected Horde. We were able to avoid the repetitive look of a CG crowd due to these efforts.

    The movement of the infected horde is crucial for the intensity of the scene. How did you manage the animation and simulation of the infected to ensure smooth and realistic interaction with the environment?
    Fiona Campbell Westgate // We worked closely with the Stunt department to plan out positioning and where VFX would be adding the CG Horde. Craig Mazin wanted the Infected Horde to move in a way that humans cannot. The deformation system kept the body shape anatomically correct while allowing us to push the limits beyond how a human physically moves.
    The Bloater makes a terrifying return this season. What were the key challenges in designing and animating this creature? How did you work on the Bloater’s interaction with the environment and other characters?
    Alex Wang // In Season 1, the Kansas City cul-de-sac sequence featured only a handful of Bloater shots. This season, however, nearly forty shots showcase the Bloater in broad daylight during the Battle of Jackson. We needed to redesign the Bloater asset to ensure it looked good in close-up shots from head to toe. Weta FX designed the Bloater for Season 1 and revamped the design for this season. Starting with the Bloater’s silhouette, it had to appear large, intimidating, and menacing. We explored enlarging the cordyceps head shape to make it feel almost like a crown, enhancing the Bloater’s impressive and strong presence.
    During filming, a stunt double stood in for the Bloater. This was mainly for scale reference and composition. It also helped the Infected stunt performers understand the Bloater’s spatial position, allowing them to avoid running through his space. Once we had an edit, Dennis mocapped the Bloater’s performances with his team. It is always challenging to get the motion right for a creature that weighs 600 pounds. We don’t want the mocap to be overly exaggerated, but it does break the character if the Bloater feels too “light.” The brilliant animation team at Weta FX brought the Bloater character to life and nailed it!
    When Tommy goes head-to-head with the Bloater, Craig was quite specific during the prep days about how the Bloater would bubble, melt, and burn as Tommy torches him with the flamethrower. Important Looking Pirates took on the “Burning Bloater” sequence, led by VFX Supervisor Philip Engstrom. They began with extensive R&D to ensure the Bloater’s skin would start to bubble and burn. ILP took the final Bloater asset from Weta FX and had to resculpt and texture the asset for the Bloater’s final burn state. Craig felt it was important for the Bloater to appear maimed at the end. The layers of FX were so complex that the R&D continued almost to the end of the delivery schedule.

    Fiona Campbell Westgate // This season, the Bloater had to be bigger and more intimidating. The CG asset was recreated to withstand the scrutiny of close-ups and daylight. Both Craig Mazin and Neil Druckmann worked closely with us during the build. We referenced the game and applied elements of that version to ours. You’ll notice that his head is in the shape of a crown; this conveys that he’s a powerful force.
    During the Burning Bloater sequence in Episode 2, we brainstormed with Philip Engström, ILP VFX Supervisor, on how this creature would react to the flamethrower and how it would affect the ground as it burns. When the Bloater finally falls to the ground and dies, the extraordinary detail of the embers burning, fluid draining and melting the surrounding snow really sells that the CG creature was in the terrain. 

    Given the Bloater’s imposing size, how did you approach its integration into scenes with the actors? What techniques did you use to create such a realistic and menacing appearance?
    Fiona Campbell Westgate // For the Bloater, a stunt performer wearing a motion capture suit was filmed on set. This provided interaction with the actors and the environment. VFX enhanced the intensity of his movements, incorporating simulations to the CG Bloater’s skin and muscles that would reflect the weight and force as this terrifying creature moves. 

    Seattle in The Last of Us is a completely devastated city. Can you talk about how you recreated this destruction? What were the most difficult visual aspects to realize for this post-apocalyptic city?
    Fiona Campbell Westgate // We were meticulous in blending the CG destruction with the practical environment. The flora’s ability to overtake the environment had to be believable, and we adhered to the principle of form follows function. Due to the vastness of the CG devastation, it was crucial to avoid repetitive effects. Consequently, our vendors were tasked with creating bespoke designs that evoked a sense of awe and beauty.
    Was Seattle’s architecture a key element in how you designed the visual effects? How did you adapt the city’s real-life urban landscape to meet the needs of the story while maintaining a coherent aesthetic?
    Alex Wang // It’s always important to Craig and Neil that we remain true to the cities our characters are in. DNEG was one of our primary vendors for Boston in Season 1, so it was natural for them to return for Season 2, this time focusing on Seattle. DNEG’s VFX Supervisor, Stephen James, who played a crucial role in developing the visual language of Boston for Season 1, also returns for this season. Stephen and Melaina Mace led a team to Seattle to shoot plates and perform lidar scans of parts of the city. We identified the buildings unique to Seattle that would have existed in 2003, so we ensured these buildings were always included in our establishing shots.
    Overgrowth and destruction have significantly influenced the environments in The Last of Us. The environment functions almost as a character in both Season 1 and Season 2. In the last season, the building destruction in Boston was primarily caused by military bombings. During this season, destruction mainly arises from dilapidation. Living in the Pacific Northwest, I understand how damp it can get for most of the year. I imagined that, over 20 years, the integrity of the buildings would be compromised by natural forces. This abundant moisture creates an exceptionally lush and vibrant landscape for much of the year. Therefore, when designing Seattle, we ensured that the destruction and overgrowth appeared intentional and aesthetically distinct from those of Boston.
    Fiona Campbell Westgate // Led by Stephen James, DNEG VFX Supervisor, and Melaina Mace, DNEG DFX Supervisor, the team captured photography, drone footage and the Clear Angle team captured LiDAR data over a three-day period in Seattle. It was crucial to include recognizable Seattle landmarks that would resonate with people familiar with the game. 

    The devastated city almost becomes a character in itself this season. What aspects of the visual effects did you have to enhance to increase the immersion of the viewer into this hostile and deteriorated environment?
    Fiona Campbell Westgate // It is indeed a character. Craig wanted it to be deteriorated but to have moments where it’s also beautiful in its devastation. For instance, in the Music Store in Episode 4 where Ellie is playing guitar for Dina, the deteriorated interior provides a beautiful backdrop to this intimate moment. The Set Decorating team dressed a specific section of the set, while VFX extended the destruction and overgrowth to encompass the entire environment, immersing the viewer in strange yet familiar surroundings.
    Photograph by Liane Hentscher/HBO
    The sequence where Ellie navigates a boat through a violent storm is stunning. What were the key challenges in creating this scene, especially with water simulation and the storm’s effects?
    Alex Wang // In the concluding episode of Season 2, Ellie is deep in Seattle, searching for Abby. The episode draws us closer to the Aquarium, where this area of Seattle is heavily flooded. Naturally, this brings challenges with CG water. In the scene where Ellie encounters Isaac and the W.L.F soldiers by the dock, we had a complex shoot involving multiple locations, including a water tank and a boat gimbal. There were also several full CG shots. For Isaac’s riverine boat, which was in a stormy ocean, I felt it was essential that the boat and the actors were given the appropriate motion. Weta FX assisted with tech-vis for all the boat gimbal work. We began with different ocean wave sizes caused by the storm, and once the filmmakers selected one, the boat’s motion in the tech-vis fed the special FX gimbal.
    When Ellie gets into the Jon boat, I didn’t want it on the same gimbal because I felt it would be too mechanical. Ellie’s weight needed to affect the boat as she got in, and that wouldn’t have happened with a mechanical gimbal. So, we opted to have her boat in a water tank for this scene. Special FX had wave makers that provided the boat with the appropriate movement.
    Instead of guessing what the ocean sim for the riverine boat should be, the tech-vis data enabled DNEG to get a head start on the water simulations in post-production. Craig wanted this sequence to appear convincingly dark, much like it looks out on the ocean at night. This allowed us to create dramatic visuals, using lightning strikes at moments to reveal depth.
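As a rough illustration of how tech-vis wave data can drive a motion base, the sketch below converts a single deep-water swell into heave and pitch targets for a gimbal. The swell height, period, and the simple sinusoidal model are assumptions for illustration, not the production setup.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

# Assumed swell parameters for one chosen storm "look".
WAVE_HEIGHT_M = 2.5   # trough to crest
WAVE_PERIOD_S = 6.0

def gimbal_targets(t: float) -> dict:
    """Heave and pitch targets for a motion base at time t (seconds).

    Deep-water dispersion gives the wavenumber k = w^2 / g. Heave
    tracks the surface height; pitch tracks the surface slope, which
    is what makes the hull ride the swell rather than just bob.
    """
    w = 2 * math.pi / WAVE_PERIOD_S
    k = w * w / G
    amp = WAVE_HEIGHT_M / 2
    heave_m = amp * math.sin(w * t)
    pitch_deg = math.degrees(math.atan(amp * k * math.cos(w * t)))
    return {"heave_m": heave_m, "pitch_deg": pitch_deg}
```

Once the filmmakers pick a wave size, curves like these can be exported both to the special FX gimbal and to the water simulation, keeping the practical boat and the CG ocean in agreement.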
    Were there any memorable moments or scenes from the series that you found particularly rewarding or challenging to work on from a visual effects standpoint?
    Alex Wang // The Last of Us tells the story of our characters’ journey. If you look at how season 2 begins in Jackson, it differs significantly from how we conclude the season in Seattle. We seldom return to the exact location in each episode, meaning every episode presents a unique challenge. The scope of work this season has been incredibly rewarding. We burned a Bloater, and we also introduced spores this season!
    Photograph by Liane Hentscher/HBO
    Looking back on the project, what aspects of the visual effects are you most proud of?
    Alex Wang // The Jackson Battle was incredibly complex, involving a grueling and lengthy shoot in quite challenging conditions, along with over 600 VFX shots in episode 2. It was truly inspiring to witness the determination of every department and vendor to give their all and create something remarkable.
    Fiona Campbell Westgate // I am immensely proud of the exceptional work accomplished by all of our vendors. During the VFX reviews, I found myself clapping with delight when the final shots were displayed; it was exciting to see remarkable results of the artists’ efforts come to light. 
    How long have you worked on this show?
    Alex Wang // I’ve been on this season for nearly two years.
    Fiona Campbell Westgate // A little over one year; I joined the show in April 2024.
    What’s the VFX shots count?
    Alex Wang // We had just over 2,500 shots this Season.
    Fiona Campbell Westgate // In Season 2, there were a total of 2656 visual effects shots.
    What is your next project?
    Fiona Campbell Westgate // Stay tuned…
    A big thanks for your time.
    WANT TO KNOW MORE?
    Blackbird: Dedicated page about The Last of Us – Season 2 on Blackbird website.
    DNEG: Dedicated page about The Last of Us – Season 2 on DNEG website.
    Important Looking Pirates: Dedicated page about The Last of Us – Season 2 on Important Looking Pirates website.
    RISE: Dedicated page about The Last of Us – Season 2 on RISE website.
    Weta FX: Dedicated page about The Last of Us – Season 2 on Weta FX website.
    © Vincent Frei – The Art of VFX – 2025
When Tommy goes head-to-head with the Bloater, Craig was quite specific during the prep days about how the Bloater would bubble, melt, and burn as Tommy torches him with the flamethrower. Important Looking Pirates took on the “Burning Bloater” sequence, led by VFX Supervisor Philip Engstrom. They began with extensive R&D to ensure the Bloater’s skin would start to bubble and burn. ILP took the final Bloater asset from Weta FX and had to resculpt and texture the asset for the Bloater’s final burn state. Craig felt it was important for the Bloater to appear maimed at the end. The layers of FX were so complex that the R&D continued almost to the end of the delivery schedule. Fiona Campbell Westgate // This season the Bloater had to be bigger, more intimidating. The CG Asset was recreated to withstand the scrutiny of close ups and in daylight. Both Craig Mazin and Neil Druckmann worked closely with us during the process of the build. We referenced the game and applied elements of that version with ours. You’ll notice that his head is in the shape of crown, this is to convey he’s a powerful force.  During the Burning Bloater sequence in Episode 2, we brainstormed with Philip Engström, ILP VFX Supervisor, on how this creature would react to the flamethrower and how it would affect the ground as it burns. When the Bloater finally falls to the ground and dies, the extraordinary detail of the embers burning, fluid draining and melting the surrounding snow really sells that the CG creature was in the terrain.  Given the Bloater’s imposing size, how did you approach its integration into scenes with the actors? What techniques did you use to create such a realistic and menacing appearance? Fiona Campbell Westgate // For the Bloater, a stunt performer wearing a motion capture suit was filmed on set. This provided interaction with the actors and the environment. 
VFX enhanced the intensity of his movements, incorporating simulations to the CG Bloater’s skin and muscles that would reflect the weight and force as this terrifying creature moves.  Seattle in The Last of Us is a completely devastated city. Can you talk about how you recreated this destruction? What were the most difficult visual aspects to realize for this post-apocalyptic city? Fiona Campbell Westgate // We were meticulous in blending the CG destruction with the practical environment. The flora’s ability to overtake the environment had to be believable, and we adhered to the principle of form follows function. Due to the vastness of the CG devastation it was crucial to avoid repetitive effects. Consequently, our vendors were tasked with creating bespoke designs that evoked a sense of awe and beauty. Was Seattle’s architecture a key element in how you designed the visual effects? How did you adapt the city’s real-life urban landscape to meet the needs of the story while maintaining a coherent aesthetic? Alex Wang // It’s always important to Craig and Neil that we remain true to the cities our characters are in. DNEG was one of our primary vendors for Boston in Season 1, so it was natural for them to return for Season 2, this time focusing on Seattle. DNEG’s VFX Supervisor, Stephen James, who played a crucial role in developing the visual language of Boston for Season 1, also returns for this season. Stephen and Melaina Maceled a team to Seattle to shoot plates and perform lidar scans of parts of the city. We identified the buildings unique to Seattle that would have existed in 2003, so we ensured these buildings were always included in our establishing shots. Overgrowth and destruction have significantly influenced the environments in The Last of Us. The environment functions almost as a character in both Season 1 and Season 2. In the last season, the building destruction in Boston was primarily caused by military bombings. 
During this season, destruction mainly arises from dilapidation. Living in the Pacific Northwest, I understand how damp it can get for most of the year. I imagined that, over 20 years, the integrity of the buildings would be compromised by natural forces. This abundant moisture creates an exceptionally lush and vibrant landscape for much of the year. Therefore, when designing Seattle, we ensured that the destruction and overgrowth appeared intentional and aesthetically distinct from those of Boston. Fiona Campbell Westgate // Led by Stephen James, DNEG VFX Supervisor, and Melaina Mace, DNEG DFX Supervisor, the team captured photography, drone footage and the Clear Angle team captured LiDAR data over a three-day period in Seattle. It was crucial to include recognizable Seattle landmarks that would resonate with people familiar with the game.  The devastated city almost becomes a character in itself this season. What aspects of the visual effects did you have to enhance to increase the immersion of the viewer into this hostile and deteriorated environment? Fiona Campbell Westgate // It is indeed a character. Craig wanted it to be deteriorated but to have moments where it’s also beautiful in its devastation. For instance, in the Music Store in Episode 4 where Ellie is playing guitar for Dina, the deteriorated interior provides a beautiful backdrop to this intimate moment. The Set Decorating team dressed a specific section of the set, while VFX extended the destruction and overgrowth to encompass the entire environment, immersing the viewer in strange yet familiar surroundings. Photograph by Liane Hentscher/HBO The sequence where Ellie navigates a boat through a violent storm is stunning. What were the key challenges in creating this scene, especially with water simulation and the storm’s effects? Alex Wang // In the concluding episode of Season 2, Ellie is deep in Seattle, searching for Abby. 
The episode draws us closer to the Aquarium, where this area of Seattle is heavily flooded. Naturally, this brings challenges with CG water. In the scene where Ellie encounters Isaac and the W.L.F soldiers by the dock, we had a complex shoot involving multiple locations, including a water tank and a boat gimbal. There were also several full CG shots. For Isaac’s riverine boat, which was in a stormy ocean, I felt it was essential that the boat and the actors were given the appropriate motion. Weta FX assisted with tech-vis for all the boat gimbal work. We began with different ocean wave sizes caused by the storm, and once the filmmakers selected one, the boat’s motion in the tech-vis fed the special FX gimbal. When Ellie gets into the Jon boat, I didn’t want it on the same gimbal because I felt it would be too mechanical. Ellie’s weight needed to affect the boat as she got in, and that wouldn’t have happened with a mechanical gimbal. So, we opted to have her boat in a water tank for this scene. Special FX had wave makers that provided the boat with the appropriate movement. Instead of guessing what the ocean sim for the riverine boat should be, the tech- vis data enabled DNEG to get a head start on the water simulations in post-production. Craig wanted this sequence to appear convincingly dark, much like it looks out on the ocean at night. This allowed us to create dramatic visuals, using lightning strikes at moments to reveal depth. Were there any memorable moments or scenes from the series that you found particularly rewarding or challenging to work on from a visual effects standpoint? Alex Wang // The Last of Us tells the story of our characters’ journey. If you look at how season 2 begins in Jackson, it differs significantly from how we conclude the season in Seattle. We seldom return to the exact location in each episode, meaning every episode presents a unique challenge. The scope of work this season has been incredibly rewarding. 
We burned a Bloater, and we also introduced spores this season! Photograph by Liane Hentscher/HBO Looking back on the project, what aspects of the visual effects are you most proud of? Alex Wang // The Jackson Battle was incredibly complex, involving a grueling and lengthy shoot in quite challenging conditions, along with over 600 VFX shots in episode 2. It was truly inspiring to witness the determination of every department and vendor to give their all and create something remarkable. Fiona Campbell Westgate // I am immensely proud of the exceptional work accomplished by all of our vendors. During the VFX reviews, I found myself clapping with delight when the final shots were displayed; it was exciting to see remarkable results of the artists’ efforts come to light.  How long have you worked on this show? Alex Wang // I’ve been on this season for nearly two years. Fiona Campbell Westgate // A little over one year; I joined the show in April 2024. What’s the VFX shots count? Alex Wang // We had just over 2,500 shots this Season. Fiona Campbell Westgate // In Season 2, there were a total of 2656 visual effects shots. What is your next project? Fiona Campbell Westgate // Stay tuned… A big thanks for your time. WANT TO KNOW MORE?Blackbird: Dedicated page about The Last of Us – Season 2 website.DNEG: Dedicated page about The Last of Us – Season 2 on DNEG website.Important Looking Pirates: Dedicated page about The Last of Us – Season 2 website.RISE: Dedicated page about The Last of Us – Season 2 website.Weta FX: Dedicated page about The Last of Us – Season 2 website. © Vincent Frei – The Art of VFX – 2025 #last #season #alex #wang #production
Photograph by Liane Hentscher/HBO

How has your collaboration with Craig evolved between the first and second seasons? Were there any adjustments in the visual approach or narrative techniques you made this season?

Alex Wang // Since everything was new in Season 1, we dedicated a lot of time and effort to exploring the show's visual language, and we all learned a great deal about what worked and what didn't for the show. In my initial conversations with Craig about Season 2, it was clear that he wanted to expand the show's scope by building on what we established and learned in Season 1. He felt significantly more at ease fully committing to using VFX to help tell the story this season.

The first season involved multiple VFX studios to handle the complexity of the effects. How did you divide the work among different studios for the second season?

Alex Wang // Most of the vendors this season were also in Season 1, so we already had a shorthand. The VFX Producer, Fiona Campbell Westgate, and I work closely together to decide how to divide the work among our vendors. The type of work needs to be well-suited for the vendor and fit into our budget and schedule. We were extremely fortunate to have the vendors we did this season. I want to take this opportunity to thank Weta FX, DNEG, RISE, Distillery VFX, Storm Studios, Important Looking Pirates, Blackbird, Wylie Co., RVX, and VDK. We also had ILM for concept art and Digital Domain for previs.

Fiona Campbell Westgate // Alex Wang and I were very aware of the tight delivery schedule, which added to the challenge of distributing the workload. We planned the work based on each studio's capabilities and tried not to burden them with back-to-back episodes wherever possible.
Fortunately, there was a shorthand with vendors from Season One, who were well-acquainted with the process and the quality of work the show required.

The town of Jackson is a key location in The Last of Us. Could you explain how you approached creating and expanding this environment for the second season?

Alex Wang // Since Season 1, this show has created incredible sets. However, the Jackson town set build is by far the most impressive in terms of scope. They constructed an 822 ft x 400 ft set in Minaty Bay that resembled a real town! I had early discussions with Production Designer Don MacAulay and his team about where they should concentrate their efforts and where VFX would make the most sense to take over. They focused on developing the town's main street, where we believed most scenes would occur. There is a big reveal of Jackson in the first episode after Ellie comes out of the barn. Distillery VFX was responsible for the town's extension, which appears seamless because the team took great pride in researching and ensuring the architecture aligned with the set while staying true to the tone of Jackson, Wyoming.

Fiona Campbell Westgate // An impressive set was constructed in Minaty Bay, which served as the foundation for VFX to build upon. There is a beautiful establishing shot of Jackson in Episode 1, completed by Distillery, showing a safe and almost normal setting as Season Two starts. Across the episodes, Jackson set extensions were completed by our partners at RISE and Weta. Each had a different phase of Jackson to create, from almost idyllic to a town immersed in battle.

What challenges did you face filming Jackson on both real and virtual sets? Was there a particular fusion between visual effects and live-action shots to make it feel realistic?

Alex Wang // I always advocate for building exterior sets outdoors to take advantage of natural light.
However, the drawback is that we cannot control the weather and lighting when filming over several days across two units. In Episode 2, there's supposed to be a winter storm in Jackson, so maintaining consistency within the episode was essential. On sunny and rainy days, we used cranes to lift large 30x60 ft screens to block the sun or rain. Since it was impossible to shield the entire set, we prioritized protecting the actors. As you can imagine, there was extensive weather cleanup for the episode to ensure consistency within the sequences.

Fiona Campbell Westgate // We were fortunate that production built a large-scale Jackson set. It provided a base for the full CG Jackson aerial shots and CG set extensions. The weather conditions at Minaty Bay presented a challenge during the filming of the end of the Battle sequence in Episode 2: while there were periods of bright sunshine, rainfall occurred during filming. In addition to the obvious visual effects work, it became necessary to replace the ground cover.

Photograph by Liane Hentscher/HBO

The attack on Jackson by the horde of infected in Season 2 is a very intense moment. How did you approach the visual effects for this sequence? What techniques did you use to make the scale of the attack feel as impressive as it did?

Alex Wang // We knew this would be a very complex sequence to shoot, and for it to be successful, we needed to start planning with the HODs from the very beginning. We began previs during prep with Weta FX and the episode's director, Mark Mylod. The previs helped us understand Mark and the showrunner's vision. This then served as a blueprint for all departments to follow, and in many instances, we filmed the previs.

Fiona Campbell Westgate // The sheer size of the CG Infected horde sets the tone for the scale of the Battle. It's an intimidating moment when they are revealed through the blowing snow.
The addition of CG explosions and atmospheric effects added to the scale of the sequence.

Can you give us an insight into the technical challenges of capturing the infected horde? How much of the effect was done using CGI, and how much was achieved with practical effects?

Alex Wang // Starting with a detailed previs that Mark and Craig approved was essential for planning the horde. We understood that we would never have enough stunt performers to fill a horde, nor could they carry out some stunts that would be too dangerous. I reviewed the previs with Stunt Coordinator Marny Eng numerous times to decide the best placements for her team's stunt performers. We also collaborated with Barrie Gower from the prosthetics team to determine the most effective allocation of his team's efforts. Stunt performers positioned closest to the camera would receive the full prosthetic treatment, which can take hours.

Weta FX was responsible for the incredible CG Infected horde work in the Jackson Battle. They have been a creative partner with HBO's The Last of Us since Season 1, so they were brought on early for Season 2. I began discussions with Weta's VFX Supervisor, Nick Epstein, about how we could tackle these complex horde shots very early during the shoot. Typically, repetition in CG crowd scenes can be acceptable, such as armies with soldiers dressed in the same uniform or armour. However, for our Infected horde, Craig wanted to convey that the Infected didn't come off an assembly line or all shop at the same clothing store. Any repetition would feel artificial. These Infected were once civilians with families, or they were groups of raiders. We needed complex variations in height, body size, age, clothing, and hair. We built our base library of Infected, and then Nick and the Weta FX team developed a "mix and match" system, allowing the Infected to wear any costume and hair groom.
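In the spirit of the "mix and match" approach described above, here is a minimal, hypothetical sketch of how a crowd department might assign varied looks and motion to agents so that no two read as copies. All names (`BODIES`, `build_horde`, the cycle naming, the speed range) are illustrative assumptions, not Weta FX's actual tooling.

```python
import itertools
import random

# Hypothetical attribute libraries; a real pipeline would reference asset IDs.
BODIES = ["slim", "heavy", "tall", "short", "child", "elder"]
COSTUMES = ["raider_jacket", "civilian_coat", "flannel", "hoodie", "parka"]
HAIR = ["bald", "short", "long", "matted", "ponytail"]
# Stand-in for the ~600 unique mocap run cycles mentioned in the interview.
CYCLES = [f"run_cycle_{i:03d}" for i in range(600)]

def build_horde(n, seed=7):
    """Give each CG Infected a distinct body/costume/hair combination,
    plus its own motion cycle and running-speed multiplier."""
    rng = random.Random(seed)
    combos = list(itertools.product(BODIES, COSTUMES, HAIR))
    rng.shuffle(combos)  # draw unique look combinations while they last
    horde = []
    for i in range(n):
        body, costume, hair = combos[i % len(combos)]
        horde.append({
            "body": body,
            "costume": costume,
            "hair": hair,
            "cycle": rng.choice(CYCLES),      # varied motion breaks lockstep
            "speed": rng.uniform(0.8, 1.2),   # no two agents run at one pace
        })
    return horde

horde = build_horde(150)
```

The point of the sketch is the combinatorics: six bodies, five costumes, and five grooms already yield 150 unique look combinations before procedural texturing multiplies the variation further.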
A procedural texturing system was also developed for costumes, providing even greater variation.

The most crucial aspect of the Infected horde was their motion. We had numerous shots cutting back-to-back with practical Infected, as well as shots where our CG Infected ran right alongside a stunt horde. It was incredibly unforgiving! Weta FX's animation supervisor from Season 1, Dennis Yoo, returned for Season 2 to meet the challenge. Having been part of the first season, Dennis understood Craig and Neil's expectations. As with model repetition, motion repetition within a horde was relatively easy to perceive, especially if the Infected were all running toward the same target. It was essential to enhance their performances with nuances such as tripping and falling, getting back up, and trampling over each other. There also needed to be differences in the Infected's running speed. To ensure we had enough complexity within the horde, Dennis motion-captured almost 600 unique motion cycles. We had over a hundred shots in Episode 2 that required a CG Infected horde.

Fiona Campbell Westgate // Nick Epstein, Weta VFX Supervisor, and Dennis Yoo, Weta Animation Supervisor, were faced with adding hero, close-up horde that had to integrate with practical stunt performers. They achieved this through over 60 motion capture sessions, running the results through a deformation system they developed. Every detail was applied to allow for a seamless blend with our practical stunt performances. The Weta team also created a custom costume and hair system that gave individual looks to the CG Infected horde. Thanks to these efforts, we were able to avoid the repetitive look of a CG crowd.

The movement of the infected horde is crucial for the intensity of the scene. How did you manage the animation and simulation of the infected to ensure smooth and realistic interaction with the environment?
Fiona Campbell Westgate // We worked closely with the Stunt department to plan out positioning and where VFX would be adding the CG horde. Craig Mazin wanted the Infected horde to move in a way that humans cannot. The deformation system kept the body shape anatomically correct while allowing us to push the limits of how a human physically moves.

The Bloater makes a terrifying return this season. What were the key challenges in designing and animating this creature? How did you work on the Bloater's interaction with the environment and other characters?

Alex Wang // In Season 1, the Kansas City cul-de-sac sequence featured only a handful of Bloater shots. This season, however, nearly forty shots showcase the Bloater in broad daylight during the Battle of Jackson. We needed to redesign the Bloater asset to ensure it looked good in close-up shots from head to toe. Weta FX designed the Bloater for Season 1 and revamped the design for this season. Starting with the Bloater's silhouette, it had to appear large, intimidating, and menacing. We explored enlarging the cordyceps head shape to make it feel almost like a crown, enhancing the Bloater's impressive and strong presence.

During filming, a stunt double stood in for the Bloater, mainly for scale reference and composition. It also helped the Infected stunt performers understand the Bloater's spatial position, allowing them to avoid running through his space. Once we had an edit, Dennis mocapped the Bloater's performances with his team. It is always challenging to get the motion right for a creature that weighs 600 pounds. We don't want the mocap to be overly exaggerated, but it does break the character if the Bloater feels too "light." The brilliant animation team at Weta FX brought the Bloater character to life and nailed it!
When Tommy goes head-to-head with the Bloater, Craig was quite specific during prep about how the Bloater would bubble, melt, and burn as Tommy torches him with the flamethrower. Important Looking Pirates took on the "Burning Bloater" sequence, led by VFX Supervisor Philip Engström. They began with extensive R&D to ensure the Bloater's skin would start to bubble and burn. ILP took the final Bloater asset from Weta FX and had to resculpt and retexture it for the Bloater's final burn state. Craig felt it was important for the Bloater to appear maimed at the end. The layers of FX were so complex that the R&D continued almost to the end of the delivery schedule.

Fiona Campbell Westgate // This season the Bloater had to be bigger and more intimidating. The CG asset was recreated to withstand the scrutiny of close-ups and daylight. Both Craig Mazin and Neil Druckmann worked closely with us during the build. We referenced the game and blended elements of that version with ours. You'll notice that his head is in the shape of a crown; this conveys that he's a powerful force.

During the Burning Bloater sequence in Episode 2, we brainstormed with Philip Engström, ILP VFX Supervisor, on how this creature would react to the flamethrower and how it would affect the ground as it burns. When the Bloater finally falls to the ground and dies, the extraordinary detail of the embers burning, the fluid draining, and the surrounding snow melting really sells that the CG creature was in the terrain.

Given the Bloater's imposing size, how did you approach its integration into scenes with the actors? What techniques did you use to create such a realistic and menacing appearance?

Fiona Campbell Westgate // For the Bloater, a stunt performer wearing a motion capture suit was filmed on set. This provided interaction with the actors and the environment.
VFX enhanced the intensity of his movements, adding simulations to the CG Bloater's skin and muscles to reflect the weight and force of this terrifying creature as it moves.

Seattle in The Last of Us is a completely devastated city. Can you talk about how you recreated this destruction? What were the most difficult visual aspects to realize for this post-apocalyptic city?

Fiona Campbell Westgate // We were meticulous in blending the CG destruction with the practical environment. The flora's ability to overtake the environment had to be believable, and we adhered to the principle of form follows function. Due to the vastness of the CG devastation, it was crucial to avoid repetitive effects. Consequently, our vendors were tasked with creating bespoke designs that evoked a sense of awe and beauty.

Was Seattle's architecture a key element in how you designed the visual effects? How did you adapt the city's real-life urban landscape to meet the needs of the story while maintaining a coherent aesthetic?

Alex Wang // It's always important to Craig and Neil that we remain true to the cities our characters are in. DNEG was one of our primary vendors for Boston in Season 1, so it was natural for them to return for Season 2, this time focusing on Seattle. DNEG's VFX Supervisor, Stephen James, who played a crucial role in developing the visual language of Boston for Season 1, also returned for this season. Stephen and Melaina Mace (DFX Supervisor) led a team to Seattle to shoot plates and perform lidar scans of parts of the city. We identified the buildings unique to Seattle that would have existed in 2003 and ensured these buildings were always included in our establishing shots.

Overgrowth and destruction have significantly shaped the environments in The Last of Us; the environment functions almost as a character in both seasons. In the last season, the building destruction in Boston was primarily caused by military bombings.
This season, destruction mainly arises from dilapidation. Living in the Pacific Northwest, I understand how damp it can get for most of the year, and I imagined that, over 20 years, the integrity of the buildings would be compromised by natural forces. That abundant moisture also creates an exceptionally lush and vibrant landscape for much of the year. Therefore, when designing Seattle, we ensured that the destruction and overgrowth appeared intentional and aesthetically distinct from Boston's.

Fiona Campbell Westgate // Led by Stephen James, DNEG VFX Supervisor, and Melaina Mace, DNEG DFX Supervisor, the team captured photography and drone footage, while the Clear Angle team captured LiDAR data, over a three-day period in Seattle. It was crucial to include recognizable Seattle landmarks that would resonate with people familiar with the game.

The devastated city almost becomes a character in itself this season. What aspects of the visual effects did you have to enhance to increase the immersion of the viewer into this hostile and deteriorated environment?

Fiona Campbell Westgate // It is indeed a character. Craig wanted it to be deteriorated but to have moments where it's also beautiful in its devastation. For instance, in the music store in Episode 4, where Ellie plays guitar for Dina, the deteriorated interior provides a beautiful backdrop to this intimate moment. The Set Decorating team dressed a specific section of the set, while VFX extended the destruction and overgrowth to encompass the entire environment, immersing the viewer in strange yet familiar surroundings.

Photograph by Liane Hentscher/HBO

The sequence where Ellie navigates a boat through a violent storm is stunning. What were the key challenges in creating this scene, especially with water simulation and the storm's effects?

Alex Wang // In the concluding episode of Season 2, Ellie is deep in Seattle, searching for Abby.
The episode draws us closer to the Aquarium, where this area of Seattle is heavily flooded. Naturally, this brings challenges with CG water. In the scene where Ellie encounters Isaac and the W.L.F soldiers by the dock, we had a complex shoot involving multiple locations, including a water tank and a boat gimbal. There were also several full CG shots. For Isaac’s riverine boat, which was in a stormy ocean, I felt it was essential that the boat and the actors were given the appropriate motion. Weta FX assisted with tech-vis for all the boat gimbal work. We began with different ocean wave sizes caused by the storm, and once the filmmakers selected one, the boat’s motion in the tech-vis fed the special FX gimbal. When Ellie gets into the Jon boat, I didn’t want it on the same gimbal because I felt it would be too mechanical. Ellie’s weight needed to affect the boat as she got in, and that wouldn’t have happened with a mechanical gimbal. So, we opted to have her boat in a water tank for this scene. Special FX had wave makers that provided the boat with the appropriate movement. Instead of guessing what the ocean sim for the riverine boat should be, the tech- vis data enabled DNEG to get a head start on the water simulations in post-production. Craig wanted this sequence to appear convincingly dark, much like it looks out on the ocean at night. This allowed us to create dramatic visuals, using lightning strikes at moments to reveal depth. Were there any memorable moments or scenes from the series that you found particularly rewarding or challenging to work on from a visual effects standpoint? Alex Wang // The Last of Us tells the story of our characters’ journey. If you look at how season 2 begins in Jackson, it differs significantly from how we conclude the season in Seattle. We seldom return to the exact location in each episode, meaning every episode presents a unique challenge. The scope of work this season has been incredibly rewarding. 
We burned a Bloater, and we also introduced spores this season! Photograph by Liane Hentscher/HBO Looking back on the project, what aspects of the visual effects are you most proud of? Alex Wang // The Jackson Battle was incredibly complex, involving a grueling and lengthy shoot in quite challenging conditions, along with over 600 VFX shots in episode 2. It was truly inspiring to witness the determination of every department and vendor to give their all and create something remarkable. Fiona Campbell Westgate // I am immensely proud of the exceptional work accomplished by all of our vendors. During the VFX reviews, I found myself clapping with delight when the final shots were displayed; it was exciting to see remarkable results of the artists’ efforts come to light.  How long have you worked on this show? Alex Wang // I’ve been on this season for nearly two years. Fiona Campbell Westgate // A little over one year; I joined the show in April 2024. What’s the VFX shots count? Alex Wang // We had just over 2,500 shots this Season. Fiona Campbell Westgate // In Season 2, there were a total of 2656 visual effects shots. What is your next project? Fiona Campbell Westgate // Stay tuned… A big thanks for your time. WANT TO KNOW MORE?Blackbird: Dedicated page about The Last of Us – Season 2 website.DNEG: Dedicated page about The Last of Us – Season 2 on DNEG website.Important Looking Pirates: Dedicated page about The Last of Us – Season 2 website.RISE: Dedicated page about The Last of Us – Season 2 website.Weta FX: Dedicated page about The Last of Us – Season 2 website. © Vincent Frei – The Art of VFX – 2025
  • CD Projekt Red tried to redesign Geralt's face once, and it backfired horribly

    Geralt, the hero of The Witcher series of games, nearly had a considerably different face. He actually did, briefly, but the game's community disliked it so much CD Projekt Red panicked and changed it back.
    The problem? Anatomical correctness. The community didn't think Geralt was alien-looking or ugly enough.
    The year was 2010 and CD Projekt Red was ready to debut its brand new Witcher game, The Witcher 2: Assassins of Kings, to the world. A couple of leaked videos preceded the formal announcement but when a clutch of screenshots was eventually released, it debuted a different looking Geralt to the one people were used to from The Witcher 1.
    Whereas Geralt had previously had the proportions of a triangle, roughly, which angled to a point on his nose and didn't seem to involve a chin of any kind, he now had much easier-on-the-eye proportions and looked like an actual person. He was even, dare I say it, handsome. It simply wouldn't do.
    Some of this was to be expected. The transition from Witcher 1 to Witcher 2 included a transition for the game's engine, moving from BioWare's Aurora engine, which once powered Neverwinter Nights, to CD Projekt Red's internally made engine Redengine. A facial design that worked well in one engine wouldn't necessarily work in both.

    Geralt fights a baddie in The Witcher 1. | Image credit: CD Projekt Red

    "The problem was that The Witcher 1 was heavily stylised," CD Projekt Red art director Pawel Mielniczuk explained to me. "From an art point of view, it was a much simpler visual fidelity than was in The Witcher 2 and Witcher 3. It was based on this Aurora engine from Neverwinter Nights - low poly, you know - so the character looks great there but the face of Geralt in The Witcher 1 wasn't very anatomically correct. It was making a good impression.
    "When we got to The Witcher 2, we had a better engine - larger budgets for polygons, more artists to sculpt nice faces, and we actually got better at making characters, already being a studio that released one game. And Geralt's face just did not match the style of the rest of the characters," he said. "It was not realistic human proportions."
    The solution was clear: redesign Geralt's face. "Let's make Geralt from scratch - nobody will notice that," Mielniczuk said, and laughed at the memory. "So we made it at the very beginning of The Witcher 2 production and we released it with this first bunch of screenshots to see what the response was, and the response was horrible! Our community just smashed us on the forums - there were almost riots there."

    Geralt's redesigned face, unveiled in the debut screenshots released for The Witcher 2. | Image credit: CD Projekt Red

    Sadly I can't find those riots on those company forums now; 15 years of chatter has buried it. But Mielniczuk told me the comments there were to the effect of: "True Geralt: he's supposed to be ugly and inhuman!" CD Projekt Red backtracked as a result of the backlash, and it would take a further two years of tinkering, and testing and re-evaluating, to get Geralt's look right for the game. "And was a hybrid of The Witcher 1 Geralt and a real human," Mielniczuk said.
    By the time The Witcher 3 development came around, in around 2011-2012, the opportunity once again presented itself to tinker with Geralt's face, but this time the studio resisted. "With The Witcher 3, we actually used exactly the same model from Witcher 2, added more polygons, updated textures, but we did not touch it," Mielniczuk said.

    Geralt as pictured at the beginning of The Witcher 2. | Image credit: Eurogamer / CD Projekt Red

    That's not to say Mielniczuk didn't want to alter Geralt's face for the third game. He was the lead character artist on The Witcher 3. He hand-sculpted both Ciri and Yennefer's faces, and he could see glaring issues with Geralt's. "If you look at the profile of Geralt: he has this incredible profile but the tip of his nose is a completely straight line from his forehead, kind of Greek proportions, and it was not fitting his face, so we wanted to fix that. But we did not," he said. "We made a decision, 'Okay, that's Geralt, he's recognisable, people are loving our character. We pass. We cannot make this mistake once again.'"
    Which brings us around to The Witcher 4, which is now in full production and we know will include Geralt to some degree. The new game will also move the series to a new engine, Unreal Engine 5, so once again there's an opportunity for a Geralt-face redesign. Will CD Projekt Red take it?

    Even the box art changed quite considerably over the course of the game's development. | Image credit: CD Projekt Red

    "It's such a grounded character right now I would really not dare to touch it," Mielniczuk said. "And in general, it's a very successful character because his face is recognisable, probably also because of these features of inhuman proportions in the upper part of the body. So no, I wouldn't update anything, just textures, normal maps, adding more details on the face, make it realistic through the surfaces, but not through the anatomy and proportions."
    But there is one thing that might tempt Mielniczuk to update Geralt's face, or rather one person, and that's Henry Cavill, the former star of The Witcher Netflix TV show. Mielniczuk is a big fan of his. "Henry was just perfect," he said. Then he added, laughing: "If I would do something to the face, I would be easily convinced to scan Henry and put him in The Witcher 4!"
    I spoke to Pawel Mielniczuk as part of a series of interviews looking back on The Witcher 3, a decade on, through the eyes of the people who made it. You can find that full piece on Eurogamer now.
  • In Surreal Portraits, Rafael Silveira Tends to the Garden of Consciousness

    “Magnetic” (2025), oil on canvas, 23.62 × 23.62 inches. All images courtesy of the artist and DCG Contemporary, shared with permission

    In Surreal Portraits, Rafael Silveira Tends to the Garden of Consciousness
    May 29, 2025
    Art
    Kate Mothes

    With scenic vistas for faces, blossoms for eyes, or nothing but coral above the shoulders, Rafael Silveira’s surreal portraits summon aspects of human consciousness that span the spectrum of the wonderful and the weird. The Brazilian artist describes his work as “a profound dive into the human mind,” merging flowers, landscapes, and uncanny hybrid features into visages that channel humor with a slightly sinister undertone.
    Silveira’s forthcoming solo exhibition, Agricultura Cósmica at DCG Contemporary, traverses “the fertile terrain of the subconscious,” the gallery says. “With a nod to pop surrealism and the uncanny, his work imagines the mind as a garden where thoughts are seeds and images the wildflowers that sprout.”
    “PLEEESE” (2025), oil on canvas, 23.62 × 23.62 inches
    Silveira works predominantly in oil, using panel or canvas as a surface and occasionally surrounding his works with ornate, hand-carved wooden frames. The sculptural details of the frames, like an anatomical heart in “Eyeconic Couple” or an all-seeing eye topping “A Crocância do Tempo” — “the crunchiness of time” in Portuguese — read like talismans.
    Many of Silveira’s compositions begin with a traditional head-and-shoulders portrait composition as a starting point, but instead of skin we see a distant horizon, like in “Magnetic,” or a figure’s head supplanted by a stalk of coral or a column of fire. Other pieces omit the human outline altogether in amusing arrangements of vivid flowers, which suggest wide eyes and addled expressions. While human forms shed their emotional autonomy as they converge with their surroundings, the flora in works like “OMG” and “PLEEESE” are a profusion of awe and desire.
    Agricultura Cósmica opens in London on June 12 and continues through July 10. The show runs concurrently alongside an exhibition titled Plural by embroidery artist Flavia Itiberê. See more on the artist’s website and Instagram.
    “Eyeconic Couple” (2025), oil on panel and hand-carved frame, 15.75 × 35.43 inches
    “Inside Out” (2025), oil on canvas, 35.4 × 31.5 inches
    “A Crocância do Tempo” (2025), oil on panel and hand-carved frame, 35.4 × 31.5 inches
    “The Artifice of Eternity” (2025), oil on canvas, 23.62 × 31.5 inches
    “OMG” (2025), oil on canvas, 23.62 × 23.62 inches
    “Paixão Ardente” (2025), oil on panel and hand-carved frame, 35.4 × 31.5 inches
    “The Roots of Reality” (2025), oil on canvas, 35.4 × 31.5 inches
  • Octopus chair concept brings sustainable innovation and modern design together

    Most of the time for me, chairs are just for sitting. I look for the most comfortable one to park myself in. Or when I am desperate to sit because I’ve been standing for a long time, just about any chair would do. But lately we’ve seen chairs become much more than just a resting place. We’ve seen a lot of innovative designs that make them a conversation piece or a statement about one thing or another. And even in concept designs, there have been a lot of interesting chair designs, even if I sometimes think I probably wouldn’t sit on them.
    The Octopus armchair concept emerges as a testament to responsible creativity. This piece not only showcases modern aesthetics but also embodies a commitment to environmental consciousness. It addresses the pressing issue of plastic waste by transforming discarded materials into a functional and stylish object. Its complex form is cast from crushed plastic, resulting in a single, cohesive piece. This approach not only repurposes waste but also ensures that the chair can be reprocessed into new items if it becomes damaged or obsolete, promoting a circular lifecycle.
    Designer: Alex Rekhlitskyi

    Beyond its sustainable foundation, the Octopus armchair captivates with its design. The convex legs seamlessly transition into an anatomically inspired seat, featuring concentric rings that deepen towards the center. This design not only offers visual intrigue but also provides ergonomic comfort, creating an embracing effect that conforms to the human body. It gives off the look of a baby octopus frozen in time, hence the name. The renders show it off more as a sculpture piece rather than an actual chair that you can sit on. As to how comfortable it can be, we can only know once prototypes have been made and real people have tried to sit on it.

    By integrating sustainability with design, the Octopus armchair encourages consumers to reconsider their relationship with everyday objects, not simply as disposable commodities, but as items that can carry deeper purpose and responsibility. It invites us to question the origins, lifecycle, and afterlife of the products we use, nudging a shift from a throwaway culture toward one of conscious consumption.
    This armchair serves as a powerful reminder that functionality and environmental responsibility can not only coexist but enhance each other, elevating both form and purpose. Its design challenges the misconception that sustainable products must compromise on style or comfort, proving instead that eco-consciousness can be central to innovation.

    The post Octopus chair concept brings sustainable innovation and modern design together first appeared on Yanko Design.
  • Announcing the Higher Ed XR Innovation Grant recipients

    The demand for extended reality (XR) talent is increasing rapidly, opening countless new doors for the next generation of metaverse creators. To adequately prepare tomorrow’s real-time 3D workforce, educators and schools need to be teaching these desirable skill sets to their students today. In pursuit of this goal, Unity Social Impact and Meta Immersive Learning have partnered to increase access to AR/VR hardware, high-quality educational content, and other resources that will help educators create or enhance innovative XR programs.

    The Higher Ed XR Innovation Grant is one of the core components of this partnership, providing over $1 million in awards to higher-education institutions leveraging real-time 3D and immersive technology to make advances in teaching and learning, XR creation, and workforce development. Grantees were selected based on their proposals’ attention to inclusion, impact, viability, and innovation. Special consideration was given to institutions and programs that cater to or design innovative educational content for underserved learners.

    Today, we’re thrilled to introduce the eight recipients of the Higher Ed XR Innovation Grant. A team of over 60 judges from Unity and Meta selected the winners from among 276 submissions. “Now, more than ever, we have a responsibility to equip young people with the skills necessary for future jobs – providing them with learning that translates to earning,” says Jessica Lindl, vice president of social impact at Unity. “I’m thrilled with the winners of the Higher Ed XR Innovation Grant and am confident that these institutions will continue to provide equitable access to education and workforce opportunities.” Read on to learn how these forward-thinking projects are increasing access to quality real-time 3D education.

    Arizona State University’s Center for Narrative and Emerging Media (NEM) in Los Angeles will open as a best-in-class teaching and research facility, focused on diversifying who can create and distribute narratives using emerging media technologies in the areas of arts, culture, and nonfiction. NEM will train and support storytellers, artists, journalists, entrepreneurs, and engineers who will build the stories, technologies, and policies of the future. This fall, ASU launched their flagship MA Narrative and Emerging Media program, a collaborative effort between the Herberger Institute for Design and the Arts and the Walter Cronkite School of Journalism and Mass Communication centered around the development of a creative practice and critical understanding of emerging storytelling and immersive content creation. Funding from the Higher Ed XR Innovation Grant will support student production, virtual production, staff training, and research. Country: United States of America

    The vision of California Community Colleges is to ensure students from all backgrounds succeed in reaching their education and career goals, with emphasis on improving families’ incomes and communities’ workforces. To achieve this, California Community Colleges aim to provide educational programs that highlight inclusivity, diversity, and equity while minimizing logistical and financial barriers to success. Cañada College will partner with various employers and California Community College districts to enhance XR apprenticeship programs, K–14 curriculum development, and XR job training programs designed for dislocated workers, workforce board clients, and underemployed individuals. Funds from the Higher Ed XR Innovation Grant will aid Cañada College in designing and sharing workforce readiness models for county education offices, community colleges and universities, and workforce training entities in California and throughout the U.S. Country: United States of America

    The Clarkson University Psychology Department plans to use their grant funding to develop a novel instructional tool that leverages both VR for accurate neuroanatomical renderings and modern pedagogical principles (such as social interaction and embodiment) to build an innovative and engaging neuroscience learning experience. By using this tool to enhance their psychology program’s neuroscience instruction and open-sourcing the tool for use at other institutions, Clarkson University hopes to positively impact psychology students – especially those from underrepresented backgrounds. And, since student training is integrated throughout the project, the development process will involve students from multiple departments, providing them with opportunities to work in VR, engage in usability testing, and learn about neuroscience. Country: United States of America

    The LED (Digital Experience Lab) at the Federal University of Ceará (UFC) Department of Architecture, Urbanism and Design (DAUD) aims to be a place for students and professors to explore new digital technologies and develop innovative solutions for real-world problems. The LED plans to use their Higher Ed XR Innovation Grant funding to outfit eight digital studios with hardware for running prototyping experiments in virtual environments. They’ll also acquire peripherals for interacting with AR and MR experiences, including projectors for SAR (spatial AR), Kinect sensors, and motion trackers, with the goal of exploring ways that XR technology can improve design education and project solutions. Finally, their team will drive development of LEDed, a free platform for sharing educational content and experiences in XR within the department’s community and beyond. Country: Brazil

    NorQuest College is Alberta’s largest community college, serving more than 21,000 students annually. Housed within the Research Office at NorQuest College, Autism CanTech! (ACT!)’s vision is to remove barriers that hinder meaningful and sustainable employment within the digital economy for individuals on the Autism spectrum. Through job-specific training in technical skills, employability skills, career coaching, and Work Integrated Learning (WIL) opportunities, ACT! works to fill industry gaps. ACT! also offers participants additional support through new assistive technology which allows users, career coaches, and supervisors to manage tasks, schedule work-related activities, and live chat. Funds from the Higher Ed XR Innovation Grant will support the development of a road map and team to adapt educator resources and XR courses for a neurodiverse audience. Country: Canada

    Ethọ́s Lab is a Black-led, nonprofit innovation academy for teens based in Vancouver, British Columbia, and accessible from anywhere in the world. To build toward a more inclusive future, Ethọ́s Lab takes a holistic, community-based approach to teaching S.T.E.A.M., built on partnership and a long-term vision. The organization provides pathways to applied learning, mentorship, and access to emerging tech through weekly collaborative workshops, creative projects, and events. As participants, youth develop core skills for post-secondary admissions, future careers, and being the leaders of innovation. Centre for Digital Media (CDM) was established in 2007 through a ground-breaking partnership between the University of British Columbia, Simon Fraser University, British Columbia Institute of Technology, and Emily Carr University of Art + Design. Anchored by the flagship, multidisciplinary Master of Digital Media program, CDM is a mixed-use campus, home to Canada’s first Metastage studio as well as game studios and innovative startups in the healthcare and cloud-computing sectors. With the support of the Higher Ed XR Innovation Grant, Ethọ́s Lab and CDM aim to increase the representation of Black youth and girls in XR-based digital futures through development of an XR Media Lab program. The funding will enable the program to serve 300+ underrepresented youth and 190+ high school educators over five years. Country: Canada

    Universidad de los Andes was founded in 1948 as an autonomous and innovative institution pursuing pluralism, tolerance, and respect. It strives to raise consciousness about students’ social and civic responsibilities as well as their relationship to and stewardship of the environment. The XR Incubator Program (named “Vivero Virtual” in Spanish) is a two-year program focused on both workforce development and education innovation in XR. With Higher Ed XR Innovation Grant funding, Universidad de los Andes will launch three Massive Open Online Courses (MOOCs) in Spanish to promote learning throughout Ibero-America. Funds will also help implement a week-long XR Camp offering Colombian educators access to a variety of XR technologies, and an XR Mobile Lab that will allow those educators to show XR technologies to the public at their own institutions. Country: Colombia

    The University of Johannesburg (UJ) and the Swiss Distance University of Applied Sciences (FFHS) aspire to design an innovative, immersive tool that addresses challenges faced by teachers in underrepresented communities. With Higher Ed XR Innovation Grant funding, their multinational team will develop and test a VR prototype for pre-service teachers (student teachers working towards their teacher certification) in South Africa. The tool will allow future teachers to have authentic teaching experiences in a safe environment, aided by learning analytics that provide opportunities to reflect on their lesson delivery and prepare them for actual teaching. The University also intends the tool to help mitigate language barriers for students whose first language is not English. Broadly, the project will empower pre-service teachers to be agents in transforming science teaching, leveraging the potential of immersive technologies and preparing students from marginalized communities with 21st-century digital skills. Country: South Africa

    On behalf of Unity Social Impact and Meta Immersive Learning, congratulations to all of our grant recipients and thank you to everyone who applied for the Higher Ed XR Innovation Grant. Learn more about educator resources and tools for propelling real-time 3D in the classroom and Meta’s $150 million investment to transform the way we learn through Meta Immersive Learning.
  • When a Pavilion Becomes a Living Laboratory

    © Ugo Carmeni, 2025

    A pavilion in a Biennale serves as a platform for cultural expression, allowing a nation to articulate its architectural identity while responding to global challenges. These national exhibitions reflect how each country interprets the event's central theme through the lens of its own landscapes, histories, and future aspirations, reinforcing architecture's ability to act not only as a built discipline, but also as a catalyst for reflection, transformation, and dialogue. In this context, Montenegro's contribution resonates with particular force. Titled Terram Intelligere: INTERSTITIUM, the pavilion draws on the concept of a newly understood anatomical system of fluid-filled spaces running throughout the human body, facilitating connection and exchange. Once considered dense and inert, the interstitium is now revealed to be a network of dynamic interrelation, a metaphor that the curators use to reframe architecture as an active, living inquiry into natural, artificial, and collective intelligence, in tune with this edition's theme: Natural. Artificial. Collective.

    Curated by Prof. Dr. Miljana Zeković, with contributors Ivan Šuković, Dejan Todorović, and Emir Šehanović, the exhibition transforms the newly inaugurated Arte Nova space in Venice's Campo San Lorenzo into a dynamic laboratory. The project treats the interstitium not only as a biological metaphor, but as an architectural strategy: the pavilion becomes a mediating membrane, connecting biology, tradition, and speculative futures. As Zeković explains, "Architecture naturally occupies a space between disciplines — not only between art and science, but also engineering. Here, it becomes a form of mediation between species, materials, and temporalities."

    Floating polycarbonate forms, infused with soil-derived bacterial cultures, are suspended by cables and arranged in a carefully orchestrated constellation. These transparent volumes are not inert, but biologically active. Over the six months of the Biennale, the microorganisms within them will grow, mutate, and generate bio-pigments in response to environmental stimuli such as light and temperature. This expanded view of architecture revisits the suvomeđa, a traditional Montenegrin dry-stone boundary wall built without mortar. More than a marker of property, the međa embodies ecological coexistence and cultural memory. In the pavilion, its principles are reinterpreted through these structures, each evoking the porosity, modularity, and autonomy of this vernacular tradition. "The međa is present in the pavilion both as a metaphor and as a symbol," observes Zeković. "Stones traditionally assembled without binding material are now reimagined in a collective, organic form — each floating, yet interconnected."

    Rethinking Intelligence Through Soil and Slow Transformation

    Developed in collaboration with the Institute of Molecular Genetics and Genetic Engineering at the University of Belgrade, the project demonstrates how soil bacteria collected from Durmitor National Park, Skadar Lake, Bukumirsko Lake, and villages near Virpazar produce bio-pigments under UV exposure. These vibrant and resilient pigments suggest a future in which ecological coloring could replace synthetic and toxic dyes in the construction industry. "The bacteria developed fascinating spatial systems," says Zeković, "which could easily be envisioned, in some modified form, applied in construction." For Zeković, this convergence between science and design reflects a deeper ambition: "It is entirely realistic to expect that many industrial materials containing hazardous components will be replaced by natural, sustainable alternatives. Science, art, and architecture can help create a better, more stable, and sustainable world."

    The Montenegrin pavilion offers more than a visual encounter; it invites participation. As Zeković states: "Visitors can engage with the space in multiple ways. The first is as a passive observer, taking in the installation from afar. The second is for the curious — those who step closer, who examine the bacterial worlds, who enter the 'boundary' and explore this unexpected potential from multiple angles."

    Inside these floating forms, visitors discover not only living microbial ecosystems, but also narrative templates, abstract representations of Montenegrin terrains, and even living bacterial nanocellulose. It is a multisensory and educational journey, one that encourages slow observation and critical reflection. Rather than offering a definitive solution, Terram Intelligere: INTERSTITIUM invites us to rethink how we define intelligence, materiality, and boundaries. The project asks: what can we learn from soil, from tradition, from microorganisms? In doing so, Montenegro's contribution rejects immediate spectacle in favor of slow, cellular, and continuous transformation, proposing an architecture that adapts and grows. As the curator affirms, "The theme of intelligence and future hybrid intelligent systems demands far more than mere interpretation; it requires deep engagement on multiple levels."

    And she reminds us that the Biennale is more than a platform for display: "The Architecture Biennale is indeed a showcase of nations and entities, but — and this is particularly important to me — it is also a place of education. Visitors come primarily to see and to learn, and therefore, the innovations and insights presented through the exhibitions are of great significance."

    Just as the međa quietly delineates space while sustaining complex and invisible ecologies, the systems explored in the Montenegro Pavilion suggest that resilience does not begin with grand gestures, but with subtle, intelligent adaptation. Rooted in the soil, microorganisms model complex cooperation and environmental responsiveness, constructing spatial networks, generating living pigments, and embodying an architecture that grows from within: silently, incrementally, collectively. As the text presented in the pavilion reminds us, "Through this quiet evolution, the city of the future may rise upon a living, ever-evolving foundation – one shaped by continuous growth, shared intelligence, and the subtle emergence of hybrid systems rooted in nature."

    Commissioner: Mirjana Đurišić
    Curator: Dr Miljana Zeković
    Exhibitors: Ivan Šuković, Dejan Todorović, Emir Šehanović
    Professional collaborators and project partners: Institute of Molecular Genetics and Genetic Engineering, University of Belgrade – Dr Jasmina Nikodinović-Runić, Vukašin Janković, Dr Tatjana Ilić-Tomić, Dr Ivana Aleksić, Dr Dušan Milivojević
    Creative team: Tamara Marović, Maja Radonjić
    Producer: Jelena Božović
    Project technical director: Aleksandar Jevtović
    Technical production assistants: Miloš Jevtović, Branislav Dragojlović
    Lighting – technical implementation: Boris Butorac, Jovan Vanja Marjanović
    Sound design: Miloš Hadžić
    Publication design and visual identity of the exhibition: Igor Milanović
    Organizer: Ministry of Spatial Planning, Urbanism and State Property of Montenegro
    Source: www.archdaily.com
  • How 3D printing is personalizing health care

    New print jobs


    Prosthetics are becoming increasingly affordable and accessible thanks to 3D printers.

    Anne Schmitz and Daniel Freedman, The Conversation



    May 20, 2025 5:43 pm


    German Chancellor Olaf Scholz shakes hands with the prosthetic hand of a worker of the German med-tech company Ottobock. Credit: JOHN MACDOUGALL/AFP via Getty Images



    Three-dimensional printing is transforming medical care, letting the health care field shift from mass-produced solutions to customized treatments tailored to each patient’s needs. For instance, researchers are developing 3D-printed prosthetic hands specifically designed for children, made with lightweight materials and adaptable control systems.
    These continuing advancements in 3D-printed prosthetics demonstrate their increasing affordability and accessibility. Success stories like this one in personalized prosthetics highlight the benefits of 3D printing, in which a model of an object produced with computer-aided design software is transferred to a 3D printer and constructed layer by layer.
    We are a biomedical engineer and a chemist who work with 3D printing. We study how this rapidly evolving technology provides new options not just for prosthetics but for implants, surgical planning, drug manufacturing, and other health care needs. The ability of 3D printing to make precisely shaped objects in a wide range of materials has led to, for example, custom replacement joints and custom-dosage, multidrug pills.
    Better body parts
    Three-dimensional printing in health care started in the 1980s with scientists using technologies such as stereolithography to create prototypes layer by layer. Stereolithography uses a computer-controlled laser beam to solidify a liquid material into specific 3D shapes. The medical field quickly saw the potential of this technology to create implants and prosthetics designed specifically for each patient.
    One of the first applications was creating tissue scaffolds, which are structures that support cell growth. Researchers at Boston Children’s Hospital combined these scaffolds with patients’ own cells to build replacement bladders. The patients remained healthy for years after receiving their implants, demonstrating that 3D-printed structures could become durable body parts.

    As technology progressed, the focus shifted to bioprinting, which uses living cells to create working anatomical structures. In 2013, Organovo created the world’s first 3D-bioprinted liver tissue, opening up exciting possibilities for creating organs and tissues for transplantation. But while significant advances have been made in bioprinting, creating full, functional organs such as livers for transplantation remains experimental. Current research focuses on developing smaller, simpler tissues and refining bioprinting techniques to improve cell viability and functionality. These efforts aim to bridge the gap between laboratory success and clinical application, with the ultimate goal of providing viable organ replacements for patients in need.
    Three-dimensional printing already has revolutionized the creation of prosthetics. It allows prosthetics makers to produce affordable custom-made devices that fit the patient perfectly. They can tailor prosthetic hands and limbs to each individual and easily replace them as a child grows.
    Three-dimensionally printed implants, such as hip replacements and spine implants, offer a more precise fit, which can improve how well they integrate with the body. Traditional implants often come only in standard shapes and sizes.
    Some patients have received custom titanium facial implants after accidents. Others had portions of their skulls replaced with 3D-printed implants.
    Additionally, 3D printing is making significant strides in dentistry. Companies such as Invisalign use 3D printing to create custom-fit aligners for teeth straightening, demonstrating the ability to personalize dental care.
    Scientists are also exploring new materials for 3D printing, such as self-healing bioglass that might replace damaged cartilage. Moreover, researchers are developing 4D printing, which creates objects that can change shape over time, potentially leading to medical devices that can adapt to the body’s needs.

    For example, researchers are working on 3D-printed stents that can respond to changes in blood flow. These stents are designed to expand or contract as needed, reducing the risk of blockage and improving long-term patient outcomes.
    Simulating surgeries
    Three-dimensionally printed anatomical models often help surgeons understand complex cases and improve surgical outcomes. These models, created from medical images such as X-rays and CT scans, allow surgeons to practice procedures before operating.
    For instance, a 3D-printed model of a child’s heart enables surgeons to simulate complex surgeries. This approach can lead to shorter operating times, fewer complications, and lower costs.

    Personalized pharmaceuticals
    In the pharmaceutical industry, drugmakers can three-dimensionally print personalized drug dosages and delivery systems. The ability to precisely layer each component of a drug means that they can make medicines with the exact dose needed for each patient. The 3D-printed anti-epileptic drug Spritam was approved by the Food and Drug Administration in 2015 to deliver very high dosages of its active ingredient.
    Drug production systems that use 3D printing are finding homes outside pharmaceutical factories. The drugs potentially can be made and delivered by community pharmacies. Hospitals are starting to use 3D printing to make medicine on-site, allowing for personalized treatment plans based on factors such as the patient’s age and health.
    However, it’s important to note that regulations for 3D-printed drugs are still being developed. One concern is that postprinting processing may affect the stability of drug ingredients. It’s also important to establish clear guidelines and decide where 3D printing should take place – whether in pharmacies, hospitals or even at home. Additionally, pharmacists will need rigorous training in these new systems.

    Printing for the future
    Despite the extraordinarily rapid progress overall in 3D printing for health care, major challenges and opportunities remain. Among them is the need to develop better ways to ensure the quality and safety of 3D-printed medical products. Affordability and accessibility also remain significant concerns. Long-term safety concerns regarding implant materials, such as potential biocompatibility issues and the release of nanoparticles, require rigorous testing and validation.
    While 3D printing has the potential to reduce manufacturing costs, the initial investment in equipment and materials can be a barrier for many health care providers and patients, especially in underserved communities. Furthermore, the lack of standardized workflows and trained personnel can limit the widespread adoption of 3D printing in clinical settings, hindering access for those who could benefit most.
    On the bright side, artificial intelligence techniques that can effectively leverage vast amounts of highly detailed medical data are likely to prove critical in developing improved 3D-printed medical products. Specifically, AI algorithms can analyze patient-specific data to optimize the design and fabrication of 3D-printed implants and prosthetics. For instance, implant makers can use AI-driven image analysis to create highly accurate 3D models from CT scans and MRIs that they can use to design customized implants.
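    As a toy illustration of that image-analysis step, the sketch below thresholds a synthetic CT-like volume (a bright sphere of voxels built with NumPy) to produce the binary segmentation mask that a mesh generator would then turn into a printable model. The threshold, intensities, and voxel size are invented for the example; real pipelines work on DICOM data with calibrated Hounsfield units, and the AI-driven tools described above go well beyond simple thresholding.

```python
import numpy as np

def segment_volume(volume: np.ndarray, threshold: float) -> np.ndarray:
    """Binary mask of voxels above an intensity threshold (e.g., bone on CT)."""
    return volume > threshold

# Synthetic 64^3 "scan": a bright sphere of radius 20 voxels on a dark background.
grid = np.indices((64, 64, 64)) - 32
dist = np.sqrt((grid ** 2).sum(axis=0))
scan = np.where(dist < 20, 1000.0, 0.0)  # bone-like vs. air-like intensities

mask = segment_volume(scan, threshold=300.0)
voxel_mm3 = 0.5 ** 3  # assumed 0.5 mm isotropic voxels
print(round(float(mask.sum()) * voxel_mm3, 1), "mm^3 segmented")
```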
    Furthermore, machine learning algorithms can predict the long-term performance and potential failure points of 3D-printed prosthetics, allowing prosthetics designers to optimize for improved durability and patient safety.
    Three-dimensional printing continues to break boundaries, including the boundary of the body itself. Researchers at the California Institute of Technology have developed a technique that uses ultrasound to turn a liquid injected into the body into a gel in 3D shapes. The method could be used one day for delivering drugs or replacing tissue.
    Overall, the field is moving quickly toward personalized treatment plans that are closely adapted to each patient’s unique needs and preferences, made possible by the precision and flexibility of 3D printing.
    Anne Schmitz, Associate Professor of Engineering, University of Wisconsin-Stout and Daniel Freedman, Dean of the College of Science, Technology, Engineering, Mathematics & Management, University of Wisconsin-Stout. This article is republished from The Conversation under a Creative Commons license. Read the original article.

    Anne Schmitz and Daniel Freedman, The Conversation

    The Conversation is an independent source of news and views, sourced from the academic and research community. Our team of editors work with these experts to share their knowledge with the wider public. Our aim is to allow for better understanding of current affairs and complex issues, and hopefully improve the quality of public discourse on them.

    Originally published as “How 3D printing is personalizing health care” (Anne Schmitz and Daniel Freedman, The Conversation) on Ars Technica, May 20, 2025.
  • [INTERVIEW] 3D Printing at Boston Children’s Hospital: Engineering the Future of Pediatric Surgery

    Inside Boston Children’s Hospital, 3D printing and digital planning are “transforming” pediatric care. The 156-year-old institution uses Materialise’s Mimics software to turn two-dimensional patient scans into detailed 3D models, streamlining preoperative planning and enhancing surgical outcomes.   
    Dr. David Hoganson, a pediatric cardiac surgeon at the Massachusetts-based health center, called 3D technology a “total game-changer” for clinicians. Speaking at the Materialise 3D Printing in Hospitals Forum 2025, he outlined his role in leading the hospital’s Cardiovascular 3D Modelling and Simulation Program.
    Mimics has become part of routine care at Boston Children’s Benderson Family Heart Center. Since 2018, the Program’s engineers and clinicians have created over 1,600 patient-specific 3D models. Last year alone, the team made 483 models, which accounted for about 50% of its operating room surgical cases.
    During Materialise’s healthcare forum, I spoke with Dr. Hoganson about his unique path from biomedical engineering to clinical practice. The Temple University graduate outlined how 3D modeling is no longer a futuristic add-on but an essential tool transforming the precision and planning of modern surgery.
    He revealed the tangible benefits of Mimics modelling versus traditional medical imaging, emphasizing how intraoperative 3D planning can reduce heart surgery complications by up to 87%. Looking to the future of healthcare, Dr. Hoganson discussed the need for more seamless clinical integration, validation, and financial reimbursement to increase adoption.   
    Dr. David Hoganson speaking at the Materialise 3D Printing in Hospitals Forum 2025. Photo via Materialise.
    From the factory floor to the operating theater
    When Dr. Hoganson began his career, it wasn’t in an operating room but on the factory floor. He started as a biomedical engineer, developing cardiovascular medical devices for two years, before transitioning to medicine. 
    This pedigree has been instrumental in shaping Boston Children’s 3D Modeling and Simulation Program, which was co-founded by University of New Hampshire Mechanical Engineering graduate Dr. Peter Hammer. The team has grown to include 17 engineers and one clinical nurse. “It has been an engineering-focused effort from the beginning,” explained Dr. Hoganson. He emphasized that the team prioritizes using “advanced engineering analysis” to plan and conduct ultra-precise operations. 
    Dr. Hoganson believes this engineering focus challenges the structured nature of clinical medicine. “The mindset of medicine is much more focused on doing things the way we were taught,” he explains. In contrast, engineering embraces constant iteration, creating space for innovation and rethinking established practices.
    He argued that engineers are not “held back by the way medicine has always been done,” which makes them an invaluable asset in clinical settings. When engineers deeply understand clinical challenges and apply their analytical skills, they often uncover solutions that physicians may not have considered, he added. These range from optimized surgical workflows to entirely new approaches to preoperative planning. For Dr. Hoganson, the “secret sauce” lies in collaboration and ensuring “zero distance between the engineers and the problem.”
    Dr. David Hoganson speaking at the Materialise 3D Printing in Hospitals Forum 2025. Photo via Materialise.
    3D printing and digital planning enhance surgical outcomes 
    In pediatric cardiac surgery, speed matters. According to Dr. Hoganson, this is why digital 3D modeling takes priority in pre-operative planning and intraoperative guidance. Materialise’s Mimics software streamlines this process. Users can import CT and MRI data, which is automatically transformed into detailed, interactive 3D models. Surgeons can then run simulations and apply computational fluid dynamics to forecast the most effective treatment strategies.
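    The intuition behind those flow simulations can be sketched with the Hagen–Poiseuille law, a drastically simplified stand-in for the full computational fluid dynamics the team actually runs in Mimics; every number below is illustrative, not clinical data.

```python
import math

def poiseuille_pressure_drop(flow_m3_s, radius_m, length_m, viscosity=3.5e-3):
    """Pressure drop (Pa) for steady laminar flow in a straight tube.

    Hagen-Poiseuille: dP = 8 * mu * L * Q / (pi * r^4), with blood
    viscosity ~3.5 mPa*s. Real vessels are neither straight nor rigid.
    """
    return 8 * viscosity * length_m * flow_m3_s / (math.pi * radius_m ** 4)

# Toy comparison: halving a vessel's radius raises flow resistance 16-fold,
# which is why even a modest narrowing can dominate a surgical plan.
q = 1e-5  # ~0.6 L/min, illustrative flow rate
dp_healthy = poiseuille_pressure_drop(q, radius_m=5e-3, length_m=0.03)
dp_narrow = poiseuille_pressure_drop(q, radius_m=2.5e-3, length_m=0.03)
print(round(dp_narrow / dp_healthy, 1))  # -> 16.0
```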
    Boston Children’s 3D simulation lead described these capabilities as offering “tremendous benefits” beyond what traditional imaging alone can provide. Traditional scans are viewed as stacks of two-dimensional slices, whereas Mimics 3D models offer virtual segmentation, interior views, and precise spatial mapping. Dr. Hoganson called this a “difference maker” and “totally transformational” for surgeons.
    Dr. Hoganson’s team uses this technology to perform a range of complex cardiovascular repairs, such as reconstructing aortic and mitral valves, closing ventricular septal defects, and augmenting blood vessels, including pulmonary arteries and aortas. Materialise Mimics’ value is not limited to preoperative preparation. It also guides surgical procedures. During operations, clinicians can interact with the models using repurposed gaming controllers, allowing them to explore and isolate anatomical features in the operating theater. 
    One key breakthrough has been identifying and mapping the heart’s electrical system, which governs its rhythm. By integrating 3D modelling into intraoperative planning, surgeons have significantly reduced the risk of heart block, where electrical signals are delayed as they pass through the organ. With the help of Mimics software, incidence rates have fallen from 40% to as low as 5% in some cases.     
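    The two figures quoted in this piece are consistent: a fall in incidence from 40% to 5% is a relative reduction of (0.40 − 0.05) / 0.40 = 87.5%, which matches the “up to 87%” cited earlier. As a quick check:

```python
baseline, with_modeling = 0.40, 0.05
relative_reduction = (baseline - with_modeling) / baseline
print(f"{relative_reduction:.1%}")  # -> 87.5%
```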
    Given the advantages of digital modelling, surgeons might be tempted to sideline physical 3D printing altogether. However, Dr. Hoganson insists additive manufacturing remains vital to refining surgical workflows. His team conducts a “tremendous amount of 3D printing,” creating patient-specific anatomical models, mostly with a resin-based Formlabs system. These models allow clinicians to test and validate plans in the lab before donning their scrubs.
    Boston Children’s has sharpened its surgical edge by using materials that closely replicate the mechanical properties of target tissues. This allows the team to 3D print anatomical models tailored to each child’s size, age, and physical makeup.
    For instance, Dr. Hoganson’s team can fabricate neonatal-sized aortas and pulmonary arteries that replicate the texture and elasticity of an infant’s vessels. Developed over several years, this approach enables accurate simulation of complex procedures, such as patch enlargement of pulmonary arteries. The team conducts rigorous preclinical testing by combining anatomical precision with lifelike tissue mechanics. 
    Dr. Hoganson explained that in-depth testing is crucial for refining techniques, reducing surgical risk, and minimizing complications in pediatric patients. This, in turn, slashes healthcare costs as fewer children spend extended time in the ICU following procedures. 3D planning and simulation empower surgeons to “do things right the first time, so we can reduce those reinterventions and complications,” Dr. Hoganson added.        
    Dr. David Hoganson demonstrating cardiovascular 3D models at the Materialise 3D Printing in Hospitals Forum 2025. Photo by 3D Printing Industry.
    Overcoming challenges to adoption in hospitals   
    What key challenges are limiting clinical adoption of 3D technology? For Dr. Hoganson, cost remains a critical barrier. “Having the efforts reimbursed will be a very important piece of this,” he explained. “That enables teams to grow and have the manpower to do it,” when 3D planning is clinically necessary. In the US, medical reimbursement involves a long path to approval. But progress is being made. His team has started billing successfully for some aspects of the work, marking an “encouraging start” toward broader systemic change.
    Adoption also hinges on easier integration into existing workflows. Dr. Hoganson noted that if 3D technology adds effort and time to procedures, it won’t be chosen over existing methods. Therefore, “the more streamlined you can make the whole process for the physician, the more likely they are to adopt it.”
    In response to these demands, Boston Children’s 3D Modelling and Simulation Program has designed a system that feels familiar to surgeons. “It’s not just about providing the technical aspects of the 3D model,” added Dr. Hoganson. “It’s about integrating the whole process into the clinical workflow in a way that works for the clinician.” 
    His team works at the center of these efforts, ensuring “there’s almost no barrier of entry to find and use the model they need.” Dr. Hoganson claims to have simplified the process to the stage where it looks and feels like regular medical care, removing the mystique and misconceptions around 3D technology. “There’s nothing special about it anymore,” he added. “That’s been a huge step towards this technology being a part of routine medical care.”  
    Boston Children’s integration strategy is working. The team expects to use 3D models in around 60% of heart surgeries this year. However, making 3D technology a standard of care has not been easy. Dr. Hoganson said, “It has taken a very diligent effort to remove those barriers.”
    In the broader tech space, 3D printing has sometimes suffered from overpromising and underdelivering, a pattern Dr. David Hoganson is keen to avoid. “We’ve tried to be extremely transparent with what is and is not being delivered,” he added. That clarity is crucial for building trust. A 3D model alone, for instance, serves a vital but defined role: enhanced visualization and preoperative measurements. Hoganson emphasized that 3D printing is not a miracle cure, but another tool in a surgeon’s toolbox. 
    For Boston Children’s, the future of 3D printing in healthcare lies beyond static models. Dr. Hoganson believes additive manufacturing will be a basis for “taking the next step and impacting how surgery is conducted, and how precisely and perfectly it’s done the first time.”
    Over the next eighteen months, Dr. Hoganson’s team will double down on demonstrating how preoperative 3D modeling translates into better surgical procedures. This will include measuring outcomes from surgeries using 3D technology and assessing whether predictions have matched surgical results. He believes validating outcomes will be an “important step forward” in moving 3D modeling from supportive technology to an indispensable clinical standard.
    The number of patient-specific digital 3D models created annually at Boston Children’s Hospital’s Benderson Family Heart Center since 2018. Photo by 3D Printing Industry.
    Take the 3DPI Reader Survey – shape the future of AM reporting in under 5 minutes.
    Read all the 3D printing news from RAPID + TCT 2025
    Subscribe to the 3D Printing Industry newsletter to keep up with the latest 3D printing news. You can also follow us on LinkedIn and subscribe to the 3D Printing Industry YouTube channel to access more exclusive content. Featured image shows Dr. David Hoganson speaking at the Materialise 3D Printing in Hospitals Forum 2025. Photo via Materialise.
  • Mailbox: Switch 2 Innovations, Localisation Dreams, "That Big Playtest Thing" - Nintendo Life Letters

    Image: Nintendo Life

    Assuming the postie doesn't deprive us on launch day, we're less than three weeks away from having a brand new Nintendo console in our hands! This 24th edition of the Nintendo Life Mailbox will be the final spread with the original Switch as Nintendo's flagship console!
    Yes, we've been rifling through our inbox and publishing select contents in our monthly letters page for a whole two years now, and we'll be back next month with an entirely different console sitting on the desk. And, presumably, a whole pile of Switch 2-related correspondence to sort through.
    It's a busy time in Nintendo land, so let's crack on. Got something you want to get off your chest? We're ready and waiting to read about your game-related ponderings.
    Each month we’ll highlight a Star Letter, the writer of which will receive a month’s subscription to our ad-free Supporter scheme. Check out the submission guidelines at the bottom of this page.
    Let's sit back with a warm beverage and go through our dispatch box...
    Nintendo Life Mailbox - May 2025
    "Smells like...electronic...raspberries." — Image: Zion Grassl / Nintendo Life
    "for those who want it"

    What are your thoughts on the innovations of Switch 2 in relation to generations past? Nintendo's gimmicks have historically proven to be hit and miss, but I see a shift ever since the Switch launched in 2017. Sure, it still had the IR sensor, but hybrid gaming and gyro aiming were legitimate forward progressions, and Switch 2 seems to offer even more legitimate innovation with both gameshare and mouse mode. I foresee both of these features becoming permanent fixtures in future consoles, and much like gyro aiming, changing the way we engage with video games.
    I truly believe mouse mode is something gamers will never want to give up once they've experienced its benefits. Thoughts?
    JaxonH
    More and different ways to play are exciting, especially if they're built into the hardware. For me, Drag x Drive was one of the highlights of the Switch 2 event at the start of April. I'm all for mouse mode options, whether they're cute and complementary or integral to a game, but it's on Nintendo to make it indispensable. The tech is certainly cheap enough to integrate into any Joy-Con-style controller from now on.
    I'm not convinced it's going to fundamentally change how we approach gaming, though. Will mouse mode be comfortable for multiple hours? The Joy-Con work like mice, sure, but ergonomically, that's not their main function. I also remember thinking every pad would have a built-in pointer after the Wiimote. It's affordable, intuitive tech, so why not include that option? And yet we live in a world where the Xbox pad still doesn't have gyro. - Ed.
    "out of the running"

    I recently finished Ace Attorney Investigations 2 on Switch, and wow, what a ride! I’m so happy it finally got localized for the West, even if it did take 13 years.
    Got me thinking about other games that haven’t seen the light of day outside Japan. Yes, there are some obvious frontrunners like Mother 3 and Dragon Quest 10 that I’m sure we all would love to see get localized, but I’m talking more about lesser-known gems. As a big fan of adventure games, that new Tokimeki Memorial remaster caught my eye, but unfortunately I’m no Japanese speaker at this stage, so I guess that puts me out of the running for now. I’m also still waiting for Yo-kai Watch 4, as well as my pipe dream Starfy 1-4 localizations…

    How about you all? Any Japan-exclusives you’ve been eager to see come West?
    BandeeOfTheStars

    I wrote a little feature back in 2020 on the subject, and looking at it now, I'd probably take Marvelous above anything else at this stage. I've never gotten around to playing the fan patch. Team? - Ed.

    "I'd love to see Square's last Super Famicom game, Treasure of the Rudras, get a rerelease. I've heard so many good things about the creativity of this one, but at this point, given the complicated magic system, this might be an impossible dream." - Alana
    "I’ll go for Fire Emblem: New Mystery of the Emblem. We’ve missed out on a lot of Fire Emblem games in the West, but this 2010 DS remake not coming our way always stings the most. The series is big enough now that surely Intelligent Systems could get away with releasing a bundle, right?" - Jim

    Image: Zion Grassl / Nintendo Life

    Nintendo games that never officially left Japan

    Forever with you; never with us

    "special kudos"

    Is Nintendo Life planning to do full/short reviews of Switch 2 Edition games? It would be a great help in deciding which upgrades are worth it.

    Congratulations to all on the Switch 2 news coverage with special kudos to Alex, Felix and Zion! Love to watch their discussion videos.
    RenanKJ

    Thank you, Renan - I've passed on the message to the video chaps! One sent me a teary-eyed smiling emoji as a reply.
    Yes, we will be reviewing all the major NS2 Editions, with the upgrades and additions being the main focus. Whether they'll be full or mini will be a case-by-case thing. - Ed.
    "mildly disappointed"

    Hi,

    I am mildly disappointed in the GameChat feature for Switch 2 because I normally play games alone, with friends in person, or on the Nintendo Life Discord. Plus, it records you, so I'll probably never use it. But since they're bringing back Download Play, which is my favorite feature, they should bring back another DS feature: PictoChat.
    OswaldTheLuckyGamer
    It's a strange one - ever since writing a little thing about GameChat, the idea has been slowly growing on me, despite news that it's hogging system resources and, you know, it's just video chat. We'll see how it works, but I do have plans to conduct NL staff meetings over GameChat.

    Moderating anatomically exaggerated doodles is something Nintendo never wants to do again, so I don't see PictoChat returning. But StreetPass is the DS feature we really need, Oswald. StreetPass! - Ed.
    Image: Zion Grassl / Nintendo Life
    "that big playtest thing"

    With all the leaks and news of lawsuits coming out, I just thought about one thing: whatever happened to that big playtest thing everyone had to be quiet about or face the Ninty ninjas? I actually never saw footage of it, so I'm guessing it never truly leaked?
    What do you guys think about it now that we have the new system in our paws soon? What was it made for, did we already see that or is something still in the works?
    garfreek

    Testing it on Switch 1 suggests it will be a cross-platform thing, whatever it ends up being. There was nothing in it that suggested the power of Switch 2 was needed to realise its ultimate potential.
    It'll be coming. Whether anybody's really bothered is the bigger question. - Ed.

    Bonus Letters

    "Regarding a potential Switch 2 Lite, Nintendo has not officially announced such a version." - Trinny G

    This is true. Aside: If you're in the market for a brand-new Switch Lite, now's the time to be looking for a bargain! - Ed.

    "I think we’re all missing the true potential of the Switch 2 microphone: Odama 2" - Munchlax

    Pff. It's panpipe time. - Ed.
    Image: Nintendo Life

    That's all for this month! Thanks to everyone who wrote in, whether you were featured above or not.
    Got something you'd like to get off your chest? A burning question you need answered? A correction you can't contain? Follow the instructions below, then, and we look forward to rifling through your missives.
    Nintendo Life Mailbox submission advice and guidelines

    Letters, not essays, please - Bear in mind that your letter may appear on the site, and 1000 words ruminating on the Legend of Heroes series and asking Alana for her personal ranking isn't likely to make the cut. Short and sweet is the order of the day.
    Don't go crazy with multiple correspondences - Ideally, just the one letter a month, please!
    Don't be disheartened if your letter doesn't appear in the monthly article - We anticipate a substantial inbox, and we'll only be able to highlight a handful every month. So if your particular letter isn't chosen for the article, please don't get disheartened!

    How to send a Letter to the Nintendo Life Mailbox

    Head to Nintendo Life's Contact page and select the subject "Reader Letters" from the drop-down menu. Type your name, email, and beautifully crafted letter into the appropriate box, hit send, and boom — you're done!


    Gavin first wrote for Nintendo Life in 2018 before joining the site full-time the following year, rising through the ranks to become Editor. He can currently be found squashed beneath a Switch backlog the size of Normandy.

    Hold on there, you need to login to post a comment...
    #mailbox #switch #innovations #localisation #dreams
    Mailbox: Switch 2 Innovations, Localisation Dreams, "That Big Playtest Thing" - Nintendo Life Letters
    Image: Nintendo LifeAssuming the postie doesn't deprive us on launch day, we're less than three weeks away from having a brand new Nintendo console in our hands! This 24th edition of the Nintendo Life Mailbox will be the final spread with the original Switch as Nintendo's flagship console! Yes, we've been rifling through our inbox and publishing select contents in our monthly letters page for a whole two years now, and we'll be back next month with an entirely different console sitting on the desk. And, presumably, a whole pile of Switch 2-related correspondence to sort through. It's a busy time in Nintendo land, so let's crack on. Got something you want to get off your chest? We're ready and waiting to read about your game-related ponderings. Each month we’ll highlight a Star Letter, the writer of which will receive a month’s subscription to our ad-free Supporter scheme. Check out the submission guidelines at the bottom of this page. Let's sit back with a warm beverage and go through our dispatch box... Nintendo Life Mailbox - May 2025 "Smells like...electronic...raspberries." — Image: Zion Grassl / Nintendo Life "for those who want it"What are your thoughts on the innovations of Switch 2 in relation to generations past? Nintendo's gimmicks have historically proven to be hit and miss, but I see a shift ever since the Switch launched in 2017. Sure, it still had the IR sensor but hybrid gaming and gyro aimingwere legitimate forward progressions, and Switch 2 seems to offer even more legitimate innovation with both gameshare and mouse mode. I foresee both of these features becoming permanent fixtures in future consoles, and much like gyro aiming, changing the way we engage with video games. I truly believe mouse mode is something gamers will never want to give up once they've experienced it's benefits. Thoughts? JaxonH More and different ways to play are exciting, especially if they're built into the hardware. 
    For me, Drag x Drive was one of the highlights of the Switch 2 event at the start of April. I'm all for mouse mode options, whether they're cute and complementary or integral to a game, but it's on Nintendo to make it indispensable. The tech is certainly cheap enough to integrate into any Joy-Con-style controller from now on. I'm not convinced it's going to fundamentally change how we approach gaming, though. Will mouse mode be comfortable for multiple hours? They work like mice, sure, but ergonomically, that's not their main function. I also remember thinking every pad would have a built-in pointer after the Wiimote. It's affordable, intuitive tech, so why not include that option? And yet we live in a world where the Xbox pad still doesn't have gyro. - Ed.

    "out of the running"

    I recently finished Ace Attorney Investigations 2 on Switch, and wow, what a ride! I’m so happy it finally got localized for the West, even if it did take 13 years. Got me thinking about other games that haven’t seen the light of day outside Japan. Yes, there are some obvious frontrunners like Mother 3 and Dragon Quest 10 that I’m sure we all would love to see get localized, but I’m talking more about lesser-known gems. As a big fan of adventure games, that new Tokimeki Memorial remaster caught my eye, but unfortunately I’m no Japanese speaker at this stage, so I guess that puts me out of the running for now. I’m also still waiting for Yo-kai Watch 4, as well as my pipe dream Starfy 1-4 localizations… How about you all? Any Japan-exclusives you’ve been eager to see come West? - BandeeOfTheStars

    I wrote a little feature back in 2020 on the subject, and looking at it now, I'd probably take Marvelous above anything else at this stage. I've never gotten around to playing the fan patch. Team? - Ed.

    "I'd love to see Square's last Super Famicom game, Treasure of the Rudras, get a rerelease.
    I've heard so many good things about the creativity of this one, but at this point, given the complicated magic system, this might be an impossible dream." - Alana

    "I’ll go for Fire Emblem: New Mystery of the Emblem. We’ve missed out on a lot of Fire Emblem games in the West, but this 2010 DS remake not coming our way always stings the most. The series is big enough now that surely Intelligent Systems could get away with releasing a bundle, right?" - Jim

    Image: Zion Grassl / Nintendo Life — Nintendo games that never officially left Japan. Forever with you; never with us.

    "special kudos"

    Is Nintendo Life planning to do full/short reviews of Switch 2 Edition games? It would be great to decide which upgrades are worth it. Congratulations to all on the Switch 2 news coverage, with special kudos to Alex, Felix and Zion! Love to watch their discussion videos. - RenanKJ

    Thank you, Renan - I've passed on the message to the video chaps! One sent me a teary-eyed smiling emoji as a reply. Yes, we will be reviewing all the major NS2 Editions, with the upgrades and additions being the main focus (the original Switch reviews will still be there for reference). Whether they'll be full or mini will be a case-by-case thing. - Ed.

    "mildly disappointed"

    Hi, I am mildly disappointed in the game chat feature for Switch 2 because I normally play games alone, with friends in person, or on the Nintendo Life Discord; plus, it records you. So, I'll probably never use it. But, since they're bringing back Download Play, which is my favorite feature, they should bring back another DS feature: PictoChat. - OswaldTheLuckyGamer

    It's a strange one - ever since writing a little thing about GameChat, the idea has been slowly growing on me, despite news that it's hogging system resources and, you know, it's just video chat. We'll see how it works, but I do have plans to conduct NL staff meetings over GameChat. Moderating anatomically exaggerated doodles is something Nintendo never wants to do again, so I don't see PictoChat returning.
    But StreetPass is the (3)DS feature we really need, Oswald. StreetPass! - Ed.

    Image: Zion Grassl / Nintendo Life

    "that big playtest thing"

    With all the leaks and news of lawsuits coming out, I just thought about one thing: whatever happened to that big playtest thing everyone had to be quiet about or face the Ninty ninjas? I actually never saw footage of it, so I'm guessing it never truly leaked? What do you guys think about it now that we have the new system in our paws soon? What was it made for? Did we already see that, or is something still in the works? - garfreek

    Testing it on Switch 1 suggests it will be a cross-platform thing, whatever it ends up being. There was nothing in it that suggested the power of Switch 2 was needed to realise its ultimate potential. It'll be coming. Whether anybody's really bothered is the bigger question. - Ed.

    Bonus Letters

    "Regarding a potential Switch 2 Lite, Nintendo has not officially announced such a version." - Trinny G

    This is true. Aside: if you're in the market for a brand-new Switch Lite, now's the time to be looking for a bargain! - Ed.

    "I think we’re all missing the true potential of the Switch 2 microphone: Odama 2" - Munchlax

    Pff. It's panpipe time. - Ed.

    Image: Nintendo Life

    That's all for this month! Thanks to everyone who wrote in, whether you were featured above or not. Got something you'd like to get off your chest? A burning question you need answered? A correction you can't contain? Follow the instructions below, then, and we look forward to rifling through your missives.

    Nintendo Life Mailbox submission advice and guidelines

    Letters, not essays, please - Bear in mind that your letter may appear on the site, and 1000 words ruminating on the Legend of Heroes series and asking Alana for her personal ranking isn't likely to make the cut. Short and sweet is the order of the day. (If you're after a general guide, 100-200 words would be ample for most topics.)

    Don't go crazy with multiple correspondences - Ideally, just the one letter a month, please!
    Don't be disheartened if your letter doesn't appear in the monthly article - We anticipate a substantial inbox, and we'll only be able to highlight a handful every month. So if your particular letter isn't chosen for the article, please don't get disheartened!

    How to send a Letter to the Nintendo Life Mailbox

    Head to Nintendo Life's Contact page and select the subject "Reader Letters" from the drop-down menu. Type your name, email, and beautifully crafted letter into the appropriate box, hit send, and boom — you're done!

    Gavin first wrote for Nintendo Life in 2018 before joining the site full-time the following year, rising through the ranks to become Editor. He can currently be found squashed beneath a Switch backlog the size of Normandy.
  • How AM Elevates Healthcare: Insights from the Materialise 3D Printing in Hospitals Forum 2025

    The cobbled streets and centuries-old university halls of Leuven recently served as a picturesque backdrop for the Materialise 3D Printing in Hospitals Forum 2025. Belgium’s Flemish Brabant capital hosted the annual meeting, which has become a key gathering for the medical 3D printing community since its launch in 2017.
    This year, 140 international healthcare professionals convened for two days of talks, workshops, and lively discussion on how Materialise’s software enhances patient care. The Forum’s opening day, hosted at Leuven’s historic Irish College, featured 16 presentations by 18 healthcare clinicians and medical 3D printing experts. 
    While often described as the future of medicine, personalized healthcare has already become routine in many clinical settings. Speakers emphasized that 3D printing is no longer merely a “cool” innovation, but an essential tool that improves patient outcomes. “Personalized treatment is not just a vision for the future,” said Koen Peters, Executive Vice President Medical at Materialise. “It’s a reality we’re building together every day.”
    During the forum, practitioners and clinical engineers demonstrated the critical role of Materialise’s software in medical workflows. Presentations highlighted value across a wide range of procedures, from brain tumour removal and organ transplantation to the separation of conjoined twins and maxillofacial implant surgeries. Several use cases demonstrated how 3D technology can cut surgery times by up to a factor of four, enhance patient recovery, and reduce hospital costs by almost £6,000 per case.
    140 visitors attended the Materialise 3D Printing in Hospitals Forum 2025. Photo via Materialise.
    Digital simulation and 3D printing slash operating times 
    Headquartered a few miles outside Leuven’s medieval center, Materialise is a global leader in medical 3D printing and digital planning. Its Mimics software suite automatically converts CT and MRI scans into detailed 3D models. Clinicians use these tools to prepare for procedures, analyse anatomy, and create patient-specific models that enhance surgical planning.
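    The scan-to-model step described above can be illustrated generically: threshold a CT volume to isolate bone, then find the boundary voxels that would seed a surface mesh. The sketch below is a minimal plain-NumPy illustration under assumed values (the threshold, array shapes, and function names are all invented here); it is not the Mimics pipeline, which automates segmentation far more robustly.

```python
import numpy as np

def segment_bone(volume_hu, threshold=300):
    """Crude bone segmentation: keep voxels above a Hounsfield-unit threshold."""
    return volume_hu > threshold

def surface_voxels(mask):
    """Voxels on the segmentation boundary (at least one 6-neighbour outside)."""
    padded = np.pad(mask, 1)  # pad with False so edges count as boundary
    core = padded[1:-1, 1:-1, 1:-1]
    # A voxel is interior only if all six face-neighbours are also in the mask.
    interior = (padded[:-2, 1:-1, 1:-1] & padded[2:, 1:-1, 1:-1] &
                padded[1:-1, :-2, 1:-1] & padded[1:-1, 2:, 1:-1] &
                padded[1:-1, 1:-1, :-2] & padded[1:-1, 1:-1, 2:])
    return core & ~interior

# Synthetic demo volume: a 10x10x10 "bone" block inside soft tissue.
vol = np.zeros((20, 20, 20), dtype=np.float32)
vol[5:15, 5:15, 5:15] = 1000.0
mask = segment_bone(vol)
shell = surface_voxels(mask)
print(int(mask.sum()), int(shell.sum()))  # 1000 voxels, 488 on the surface
```

In practice the shell would feed a meshing step such as marching cubes to produce the printable surface; the point here is only that a patient-specific model starts as a labelled voxel mask derived from the scan.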
    So far, Materialise software has supported more than 500,000 patients and analysed over 6 million medical scans. One case that generated notable interest among the Forum’s attendees was that of Lisa Ferrie and Jiten Parmar from Leeds General Infirmary. The pair worked alongside Asim Sheikh, a Consultant Skullbase and Neurovascular Neurosurgeon, to conduct the UK’s first “coach door osteotomy” on Ruvimbo Kaviya, a 40-year-old nurse from Leeds. 
    This novel keyhole surgery successfully removed a brain tumor from Kaviya’s cavernous sinus, a hard-to-reach area behind the eyes. Most surgeries of this kind require large incisions and the removal of substantial skull sections, resulting in extended recovery time and the risk of postoperative complications. Such an approach would have presented serious risks for removing Kaviya’s tumor, which “was in a complex area surrounded by a lot of nerves,” explained Parmar, a Consultant in Maxillofacial Surgery.   
    Instead, the Leeds-based team used a minimally invasive technique that requires only a 1.5 cm incision near the side of Kaviya’s eyelid. A small section of skull bone was then shifted sideways and backward, much like a coach door sliding open, to create an access point for tumor removal. Following the procedure, Kaviya recovered in a matter of days and was left with only a 6 mm scar at the incision point. 
    Materialise software played a vital role in facilitating this novel procedure. Ferrie is a Biomedical Engineer and 3D Planning Service Lead at Leeds Teaching Hospitals NHS Trust. She used Mimics to convert medical scans into digital 3D models of Kaviya’s skull. This allowed her team to conduct “virtual surgical planning” and practice the procedure in three dimensions, “to see if it’s going to work as we expect.” 
    Ferrie also fabricated life-sized, PolyJet 3D printed anatomical models of Kaviya’s skull for more hands-on surgical preparation. Sheikh and Parmar used these models in the hospital’s cadaver lab to rehearse the procedure until they were confident of a successful outcome. This 3D printing-enabled approach has since been repeated for additional cases, unlocking a new standard of care for patients with previously inoperable brain tumors. 
    The impact of 3D planning is striking. Average operating times fell from 8-12 hours to just 2-3 hours, and average patient discharge times dropped from 7-10 days to 2-3 days. These efficiencies translated into cost savings of £1,780 to £5,758 per case, while additional surgical capacity generated an average of £11,226 in income per operating list.
    Jiten Parmar and Lisa Ferrie presenting at the Materialise 3D Printing in Hospitals Forum 2025. Photo via Materialise.
    Dr. Davide Curione also discussed the value of virtual planning and 3D printing for surgical procedures. Based at Bambino Gesù Pediatric Hospital in Rome, the radiologist’s team conducts 3D modeling, visualization, simulation, and 3D printing. 
    One case involved thoraco-omphalopagus twins joined at the chest and abdomen. Curione’s team 3D printed a multi-color anatomical model of the twins’ anatomy, which he called “the first of its kind for complexity in Italy.” Fabricated in transparent resin, the model offered a detailed view of the twins’ internal anatomy, including the rib cage, lungs, and cardiovascular system.
    Attention then turned to the liver. The team built a digital reconstruction to simulate the optimal resection planes for the general separation and the hepatic splitting procedure. This was followed by a second multi-colour 3D printed model highlighting the organ’s vascularisation. These resources improved surgical planning, cutting operating time by 30%, and enabled a successful separation, with no major complications reported two years post-operation.
    Dr. Davide Curione’s workflow for creating a 3D printed model of thoraco-omphalopagus twins using Mimics. Image via Frontiers in Physiology.
    VR-enabled surgery enhances organ transplants  
    Materialise’s Mimics software can also be used in extended reality, allowing clinicians to interact more intuitively with 3D anatomical models and medical images. By using off-the-shelf virtual reality and augmented reality headsets, healthcare professionals can more closely examine complex structures in an immersive environment.
    Dr. David Sibřina is a Principal Researcher and Developer for the VRLab team at Prague’s Institute for Clinical and Experimental Medicine. He leads efforts to accelerate the clinical adoption of VR and AR in organ transplantation, surgical planning, and surgical guidance. 
    The former Forbes 30 Under 30 honouree explained that since 2016, IKEM’s 3D printing lab has focused on producing anatomical models to support liver and kidney donor programmes. His lab also fabricates 3D printed anatomical models of ventricles and aneurysms for clinical use. 
    However, Sibřina’s team recently became overwhelmed by high demand for physical models, with surgeons requesting additional 3D model processing options. This led Sibřina to create the IKEM VRLab, offering XR capabilities to help surgeons plan and conduct complex transplantation surgeries and resection procedures.     
    When turning to XR, Sibřina’s lab opted against adopting a ready-made software solution, instead developing its own from scratch. “The problem with some of the commercial solutions is capability and integration,” he explained. “The devices are incredibly difficult and expensive to integrate within medical systems, particularly in public hospitals.” He also pointed to user interface shortcomings and the lack of alignment with established medical protocols. 
    According to Sibřina, IKEM VRLab’s offering is a versatile and scalable VR system that is simple to use and customizable to different surgical disciplines. He described it as “Zoom for 3D planning,” enabling live virtual collaboration between medical professionals. It leverages joint CT and MRI acquisition models, developed with IKEM’s medical physicists and radiologists. Data from patient scans is converted into interactive digital reconstructions that can be leveraged for analysis and surgical planning. 
    IKEM VRLab also offers a virtual “Fitting Room,” which allows surgeons to assess whether a donor’s organ size matches the recipient’s body. A digital model is created for every deceased donor and live recipient’s body, enabling surgeons to perform the size allocation assessments. 
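    The idea behind such a size-allocation check can be sketched as a comparison of the donor organ model against the recipient's cavity model. This is a toy illustration only: the margins, field names, and pass/fail logic below are invented for the example, not IKEM VRLab's actual criteria, which draw on biometrics, CT axis measurements, and BMI ratios.

```python
from dataclasses import dataclass

@dataclass
class OrganModel:
    volume_ml: float    # organ (or cavity) volume from the 3D reconstruction
    cc_axis_mm: float   # craniocaudal axis measured on the model

def fits(donor: OrganModel, cavity: OrganModel,
         volume_margin: float = 0.15, axis_margin: float = 0.10) -> bool:
    """Toy rule: the donor organ fits if it does not exceed the recipient
    cavity by more than the given margins in volume and axis length."""
    return (donor.volume_ml <= cavity.volume_ml * (1 + volume_margin)
            and donor.cc_axis_mm <= cavity.cc_axis_mm * (1 + axis_margin))

donor = OrganModel(volume_ml=1500, cc_axis_mm=180)
cavity = OrganModel(volume_ml=1400, cc_axis_mm=175)
print(fits(donor, cavity))  # True: within the assumed 15% / 10% margins
```

The value of a virtual fitting room is that both models come from actual patient scans, so a comparison like this can rescue matches that cruder standalone biometrics would reject.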
    Sibřina explained that this capability significantly reduces the number of recipients who would otherwise fail to be matched with a suitable donor. For example, 262 deceased liver donors have been processed for Fitting Room size allocations by IKEM VRLab. In 27 instances, the VR Fitting Room prevented potential recipients from being skipped in the waiting list based on standard biometrics, CT axis measurements, and BMI ratios.                         
    Overall, 941 patient-specific visualizations have been performed using Sibřina’s technology. 285 were for liver recipients, 311 for liver donors, and 299 for liver resection. Living liver donors account for 59 cases, and split/reduced donors for 21.
    A forum attendee using Materialise’s Mimics software in augmented reality. Photo via Materialise.
    Personalized healthcare: 3D printing implants and surgical guides 
    Beyond surgical planning and 3D visualisation, Materialise Mimics software supports the design and production of patient-specific implants and surgical guides. The company conducts healthcare contract manufacturing at its Leuven HQ and medical 3D printing facility in Plymouth, Michigan. 
    Hospitals can design patient-specific medical devices in-house or collaborate with Materialise’s clinical engineers to develop custom components. Materialise then 3D prints these devices and ships them for clinical use. The Belgian company, headed by CEO Brigitte de Vet-Veithen, produces around 280,000 custom medical instruments each year, with 160,000 destined for the US market. These include personalised titanium cranio-maxillofacial implants for facial reconstruction and colour-coded surgical guides.
    Poole Hospital’s 3D specialists, Sian Campbell and Poppy Taylor-Crawford, shared how their team has adopted Materialise software to support complex CMF surgeries. Since acquiring the platform in 2022, they have developed digital workflows for planning and 3D printing patient-specific implants and surgical guides in 14 cases, particularly for facial reconstruction. 
    Campbell and Taylor-Crawford begin their workflow by importing patient CT and MRI data into Materialise’s Mimics Enlight CMF software. Automated tools handle initial segmentation, tumour resection planning, and the creation of cutting planes. For more complex cases involving fibula or scapula grafts, the team adapts these workflows to ensure precise alignment and fit of the bone graft within the defect.
    Next, the surgical plan and anatomical data are transferred to Materialise 3-matic, where the team designs patient-specific resection guides, reconstruction plates, and implants. These designs are refined through close collaboration with surgeons, incorporating feedback to optimise geometry and fit. Virtual fit checks verify guide accuracy, while further analysis ensures compatibility with surgical instruments and operating constraints. Once validated, the guides and implants are 3D printed for surgery.
    According to Campbell and Taylor-Crawford, these custom devices enable more accurate resections and implant placements. This improves surgical alignment and reduces theatre time by minimising intraoperative adjustments.
    An example of the cranio-maxillofacial implants and surgical guides 3D printed by Materialise. Photo by 3D Printing Industry.
    Custom 3D printed implants are also fabricated at the Rizzoli Orthopaedic Institute in Bologna, Italy. Originally established as a motion analysis lab, the institute has expanded its expertise into surgical planning, biomechanical analysis, and now, personalized 3D printed implant design.
    Dr. Alberto Leardini, Director of the Movement Analysis Laboratory, described his team’s patient-specific implant workflow. They combine CT and MRI scans to identify bone defects and tumour locations. Clinical engineers then use this data to build digital models and plan resections. They also design cutting guides and custom implants tailored to each patient’s anatomy.
    These designs are refined in collaboration with surgeons before being outsourced to manufacturing partners for production. Importantly, this workflow internalizes design and planning phases. By hosting engineering and clinical teams together on-site, they aim to streamline decision-making and reduce lead times. Once the digital design is finalised, only the additive manufacturing step is outsourced, ensuring “zero distance” collaboration between teams. 
    Dr. Leardini emphasised that this approach improves clinical outcomes and promises economic benefits. While custom implants require more imaging and upfront planning, they reduce time in the operating theatre, shorten hospital stays, and minimise patient transfers. 
    After a full day of presentations inside the Irish College’s eighteenth-century chapel, the consensus was clear. 3D technology is not a niche capability reserved for high-end procedures, but a valuable tool enhancing everyday care for thousands of patients globally. From faster surgeries to cost savings and personalized treatments, hospitals are increasingly embedding 3D technology into routine care. Materialise’s software sits at the heart of this shift, enabling clinicians to deliver safer, smarter, and more efficient healthcare. 
    Featured image shows 3D printed anatomical models at Materialise HQ in Leuven. Photo by 3D Printing Industry.
    #how #elevates #healthcare #insights #materialise
    How AM Elevates Healthcare: Insights from the Materialise 3D Printing in Hospitals Forum 2025
    The cobbled streets and centuries-old university halls of Leuven recently served as a picturesque backdrop for the Materialise 3D Printing in Hospitals Forum 2025. Belgium’s Flemish Brabant capital hosted the annual meeting, which has become a key gathering for the medical 3D printing community since its launch in 2017. This year, 140 international healthcare professionals convened for two days of talks, workshops, and lively discussion on how Materialise’s software enhances patient care. The Forum’s opening day, hosted at Leuven’s historic Irish College, featured 16 presentations by 18 healthcare clinicians and medical 3D printing experts.  While often described as the future of medicine, personalized healthcare has already become routine in many clinical settings. Speakers emphasized that 3D printing is no longer merely a “cool” innovation, but an essential tool that improves patient outcomes. “Personalized treatment is not just a vision for the future,” said Koen Peters, Executive Vice President Medical at Materialise. “It’s a reality we’re building together every day.” During the forum, practitioners and clinical engineers demonstrated the critical role of Materialise’s software in medical workflows. Presentations highlighted value across a wide range of procedures, from brain tumour removal and organ transplantation to the separation of conjoined twins and maxillofacial implant surgeries. Several use cases demonstrated how 3D technology can reduce surgery times by up to four times, enhance patient recovery, and cut hospital costs by almost £6,000 per case.      140 visitors attended the Materialise 3D Printing in Hospitals Forum 2025. Photo via Materialise. Digital simulation and 3D printing slash operating times  Headquartered a few miles outside Leuven’s medieval center, Materialise is a global leader in medical 3D printing and digital planning. Its Mimics software suite automatically converts CT and MRI scans into detailed 3D models. 
Clinicians use these tools to prepare for procedures, analyse anatomy, and create patient-specific models that enhance surgical planning. So far, Materialise software has supported more than 500,000 patients and analysed over 6 million medical scans. One case that generated notable interest among the Forum’s attendees was that of Lisa Ferrie and Jiten Parmar from Leeds General Infirmary. The pair worked alongside Asim Sheikh, a Consultant Skullbase and Neurovascular Neurosurgeon, to conduct the UK’s first “coach door osteotomy” on Ruvimbo Kaviya, a 40-year-old nurse from Leeds. This novel keyhole surgery successfully removed a brain tumor from Kaviya’s cavernous sinus, a hard-to-reach area behind the eyes. Most surgeries of this kind require large incisions and the removal of substantial skull sections, resulting in extended recovery times and a risk of postoperative complications. Such an approach would have presented serious risks for removing Kaviya’s tumor, which “was in a complex area surrounded by a lot of nerves,” explained Parmar, a Consultant in Maxillofacial Surgery. Instead, the Leeds-based team used a minimally invasive technique that required only a 1.5 cm incision near the side of Kaviya’s eyelid. A small section of skull bone was then shifted sideways and backward, much like a coach door sliding open, to create an access point for tumor removal. Following the procedure, Kaviya recovered in a matter of days and was left with only a 6 mm scar at the incision point. Materialise software played a vital role in facilitating this novel procedure. Ferrie is a Biomedical Engineer and 3D Planning Service Lead at Leeds Teaching Hospitals NHS Trust. She used Mimics to convert medical scans into digital 3D models of Kaviya’s skull.
This allowed her team to conduct “virtual surgical planning” and practice the procedure in three dimensions, “to see if it’s going to work as we expect.” Ferrie also fabricated life-sized, PolyJet 3D printed anatomical models of Kaviya’s skull for more hands-on surgical preparation. Sheikh and Parmar used these models in the hospital’s cadaver lab to rehearse the procedure until they were confident of a successful outcome. This 3D printing-enabled approach has since been repeated for additional cases, unlocking a new standard of care for patients with previously inoperable brain tumors. The impact of 3D planning is striking. Average operating times fell from 8-12 hours to just 2-3 hours, and average patient discharge times dropped from 7-10 days to 2-3 days. These efficiencies translated into cost savings of £1,780 to £5,758 per case, while additional surgical capacity generated an average of £11,226 in income per operating list.

Jiten Parmar (right) and Lisa Ferrie (left) presenting at the Materialise 3D Printing in Hospitals Forum 2025. Photo via Materialise.

Dr. Davide Curione also discussed the value of virtual planning and 3D printing for surgical procedures. Based at Bambino Gesù Pediatric Hospital in Rome, the radiologist’s team conducts 3D modeling, visualization, simulation, and 3D printing. One case involved thoraco-omphalopagus twins joined at the chest and abdomen. Curione’s team 3D printed a multi-colour anatomical model of the twins’ anatomy, which he called “the first of its kind for complexity in Italy.” Fabricated in transparent resin, the model offered a detailed view of the twins’ internal anatomy, including the rib cage, lungs, and cardiovascular system. Attention then turned to the liver. The team built a digital reconstruction to simulate the optimal resection planes for the general separation and the hepatic splitting procedure. This was followed by a second multi-colour 3D printed model highlighting the organ’s vascularisation.
These resources improved surgical planning, cutting operating time by 30%, and enabled a successful separation, with no major complications reported two years post-operation.

Dr. Davide Curione’s workflow for creating a 3D printed model of thoraco-omphalopagus twins using Mimics. Image via Frontiers in Physiology.

VR-enabled surgery enhances organ transplants

Materialise’s Mimics software can also be used in extended reality (XR), allowing clinicians to interact more intuitively with 3D anatomical models and medical images. By using off-the-shelf virtual reality (VR) and augmented reality (AR) headsets, healthcare professionals can examine complex structures more closely in an immersive environment. Dr. David Sibřina is a Principal Researcher and Developer for the VRLab team at Prague’s Institute for Clinical and Experimental Medicine (IKEM). He leads efforts to accelerate the clinical adoption of VR and AR in organ transplantation, surgical planning, and surgical guidance. The former Forbes 30 Under 30 honouree explained that since 2016, IKEM’s 3D printing lab has focused on producing anatomical models to support liver and kidney donor programmes. His lab also fabricates 3D printed anatomical models of ventricles and aneurysms for clinical use. However, Sibřina’s team recently became overwhelmed by high demand for physical models, with surgeons requesting additional 3D model processing options. This led Sibřina to create the IKEM VRLab, offering XR capabilities to help surgeons plan and conduct complex transplantation surgeries and resection procedures. When turning to XR, Sibřina’s lab opted against adopting a ready-made software solution, instead developing its own from scratch. “The problem with some of the commercial solutions is capability and integration,” he explained.
“The devices are incredibly difficult and expensive to integrate within medical systems, particularly in public hospitals.” He also pointed to user interface shortcomings and the lack of alignment with established medical protocols. According to Sibřina, IKEM VRLab’s offering is a versatile and scalable VR system that is simple to use and customizable to different surgical disciplines. He described it as “Zoom for 3D planning,” enabling live virtual collaboration between medical professionals. It leverages joint CT and MRI acquisition models, developed with IKEM’s medical physicists and radiologists. Data from patient scans is converted into interactive digital reconstructions that can be used for analysis and surgical planning. IKEM VRLab also offers a virtual “Fitting Room,” which allows surgeons to assess whether a donor’s organ size matches the recipient’s body. A digital model is created for every deceased donor and live recipient, enabling surgeons to perform size allocation assessments. Sibřina explained that this capability significantly reduces the number of recipients who would otherwise fail to be matched with a suitable donor. For example, 262 deceased liver donors have been processed for Fitting Room size allocations by IKEM VRLab. In 27 instances, the VR Fitting Room prevented potential recipients from being skipped in the waiting list based on standard biometrics, CT axis measurements, and BMI ratios. Overall, 941 patient-specific visualizations have been performed using Sibřina’s technology. 285 (28%) were for liver recipients, 311 (31%) for liver donors, and 299 (23%) for liver resection. Living liver donors account for 59 (6%) cases, and split/reduced donors for 21 (2%).

A forum attendee using Materialise’s Mimics software in augmented reality (AR). Photo via Materialise.
Personalized healthcare: 3D printing implants and surgical guides

Beyond surgical planning and 3D visualisation, Materialise Mimics software supports the design and production of patient-specific implants and surgical guides. The company conducts healthcare contract manufacturing at its Leuven HQ and its medical 3D printing facility in Plymouth, Michigan. Hospitals can design patient-specific medical devices in-house or collaborate with Materialise’s clinical engineers to develop custom components. Materialise then 3D prints these devices and ships them for clinical use. The Belgian company, headed by CEO Brigitte de Vet-Veithen, produces around 280,000 custom medical instruments each year, with 160,000 destined for the US market. These include personalised titanium cranio-maxillofacial (CMF) implants for facial reconstruction and colour-coded surgical guides. Poole Hospital’s 3D specialists, Sian Campbell and Poppy Taylor-Crawford, shared how their team has adopted Materialise software to support complex CMF surgeries. Since acquiring the platform in 2022, they have developed digital workflows for planning and 3D printing patient-specific implants and surgical guides in 14 cases, particularly for facial reconstruction. Campbell and Taylor-Crawford begin their workflow by importing patient CT and MRI data into Materialise’s Mimics Enlight CMF software. Automated tools handle initial segmentation, tumour resection planning, and the creation of cutting planes. For more complex cases involving fibula or scapula grafts, the team adapts these workflows to ensure precise alignment and fit of the bone graft within the defect. Next, the surgical plan and anatomical data are transferred to Materialise 3-matic, where the team designs patient-specific resection guides, reconstruction plates, and implants. These designs are refined through close collaboration with surgeons, incorporating feedback to optimise geometry and fit.
Virtual fit checks verify guide accuracy, while further analysis ensures compatibility with surgical instruments and operating constraints. Once validated, the guides and implants are 3D printed for surgery. According to Campbell and Taylor-Crawford, these custom devices enable more accurate resections and implant placements. This improves surgical alignment and reduces theatre time by minimising intraoperative adjustments.

An example of the cranio-maxillofacial implants and surgical guides 3D printed by Materialise. Photo by 3D Printing Industry.

Custom 3D printed implants are also fabricated at the Rizzoli Orthopaedic Institute in Bologna, Italy. Originally established as a motion analysis lab, the institute has expanded its expertise into surgical planning, biomechanical analysis, and now personalized 3D printed implant design. Dr. Alberto Leardini, Director of the Movement Analysis Laboratory, described his team’s patient-specific implant workflow. They combine CT and MRI scans to identify bone defects and tumour locations. Clinical engineers then use this data to build digital models and plan resections. They also design cutting guides and custom implants tailored to each patient’s anatomy. These designs are refined in collaboration with surgeons before being outsourced to manufacturing partners for production. Importantly, this workflow internalizes the design and planning phases. By hosting engineering and clinical teams together on-site, the institute aims to streamline decision-making and reduce lead times. Once the digital design is finalised, only the additive manufacturing step is outsourced, ensuring “zero distance” collaboration between teams. Dr. Leardini emphasised that this approach improves clinical outcomes and promises economic benefits. While custom implants require more imaging and upfront planning, they reduce time in the operating theatre, shorten hospital stays, and minimise patient transfers.
After a full day of presentations inside the Irish College’s eighteenth-century chapel, the consensus was clear: 3D technology is not a niche capability reserved for high-end procedures, but a valuable tool enhancing everyday care for thousands of patients globally. From faster surgeries to cost savings and personalized treatments, hospitals are increasingly embedding 3D technology into routine care. Materialise’s software sits at the heart of this shift, enabling clinicians to deliver safer, smarter, and more efficient healthcare.

Take the 3DPI Reader Survey – shape the future of AM reporting in under 5 minutes. Read all the 3D printing news from RAPID + TCT 2025. Subscribe to the 3D Printing Industry newsletter to keep up with the latest 3D printing news. You can also follow us on LinkedIn, and subscribe to the 3D Printing Industry YouTube channel to access more exclusive content.

Featured image shows 3D printed anatomical models at Materialise HQ in Leuven. Photo by 3D Printing Industry.