fxphd.com is the leader in pro online training for vfx, motion graphics, and production. Turn to fxguide for vfx news and fxguidetv & audio podcasts.
Recent updates
fxpodcast: the making of the immersive Apple Vision Pro film Bono: Stories of Surrender
In this episode of the fxpodcast, we go behind the scenes with The-Artery, the New York-based creative studio that brought this ambitious vision to life. We speak with Founder and CCO Vico Sharabani, along with Elad Offer, the project’s Creative Director, about what it took to craft this unprecedented experience. From conceptual direction to VFX and design, The-Artery was responsible for the full production pipeline of the AVP edition.
Bono’s memoir Surrender: 40 Songs, One Story has taken on new life—this time as a groundbreaking immersive cinematic experience tailored specifically for the Apple Vision Pro. Titled Bono: Stories of Surrender, the project transforms his personal journey of love, loss, and legacy into a first-of-its-kind Apple Immersive Video.
The-Artery Founder Vico Sharabani during post-production.
This is far more than a stereo conversion of a traditional film. Designed natively for the Apple Vision Pro, Bono: Stories of Surrender places viewers directly on stage with Bono, surrounding them in a deeply intimate audiovisual journey. Shot and mastered at a staggering 14K by 7K resolution, in 180-degree stereoscopic video at 90 frames per second, the format pushes the limits of current storytelling, running at data rates nearly 50 times higher than conventional content. The immersive trailer itself diverges significantly from its traditional counterpart, using novel cinematic language, spatial cues, and temporal transitions unique to Apple’s new medium.
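To put the data-rate claim in rough perspective, here is a back-of-the-envelope pixel-throughput comparison. It is an illustrative sketch only: it compares the 14K by 7K, 90 fps immersive master against an assumed conventional 4K (3840 x 2160), 24 fps master, and it ignores bit depth, compression, and how the two eye views are packed, none of which are specified in the source.

```python
# Rough pixel-throughput comparison (illustrative assumptions, not production figures).
immersive_pixels_per_sec = 14_000 * 7_000 * 90    # 14K x 7K immersive master at 90 fps
conventional_pixels_per_sec = 3840 * 2160 * 24    # assumed conventional 4K master at 24 fps

ratio = immersive_pixels_per_sec / conventional_pixels_per_sec
print(f"Immersive vs conventional pixel throughput: ~{ratio:.0f}x")  # prints roughly 44x
```

Under those assumptions the ratio comes out around 44x, broadly in line with the "nearly 50 times" figure quoted above.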
This marks the first feature-length film available in Apple Immersive Video, and a powerful statement on Bono’s and U2’s continued embrace of innovation. Watch the video or listen to the audio podcast as we unpack the creative and technical challenges of building a film for a platform that didn’t exist just a year ago, and what it means for the future of immersive storytelling.
VFXShow 296: Mission: Impossible – The Final Reckoning
Ethan Hunt and the IMF team race against time to find a rogue artificial intelligence that can destroy mankind.
AI, IMF & VFX: A Mission Worth Rendering
In the latest episode of The VFXShow podcast, hosts Matt Wallin, Jason Diamond, and Mike Seymour reunite to dissect the spectacle, story, and seamless visual effects of Mission: Impossible – The Final Reckoning.
As the eighth entry in the franchise, this chapter serves as a high-stakes, high-altitude crescendo to Tom Cruise’s nearly 30-year run as Ethan Hunt, the relentless agent of the Impossible Mission Force.
Cruise Control: When Practical Meets Pixel
While the narrative revolves around the existential threat of a rogue AI known as The Entity, the real heart of the film lies in its bold commitment to visceral, real-world action. The VFX team discusses how Cruise’s ongoing devotion to doing his own death-defying stunts, from leaping between bi-planes to diving into the wreckage of a sunken submarine, paradoxically increases the importance of invisible VFX. From seamless digital stitching to background replacements and subtle physics enhancements, the effects work had to serve the story without ever betraying the sense of raw, in-camera danger.
Matt, Jason, and Mike explore how VFX in this film plays a critical supporting role, cleaning up stunts, compositing dangerous sequences, and selling the illusion of globe-spanning chaos.
Whether it’s simulating the collapse of a Cold War-era submarine, managing intricate water dynamics in Ethan’s deep-sea dive, or integrating AI-driven visualisations of nuclear catastrophe, the film leans heavily on sophisticated post work to make Cruise’s practical stunts feel even more grounded and believable.
The team also reflects on the thematic evolution of the franchise. While the plot may twist through layers of espionage, betrayal, and digital apocalypse, including face-offs with Gabriel, doomsday cults, and geopolitical brinkmanship, it is not the team’s favourite MI film. And yet, they note, even as the story veers into sci-fi territory with sentient algorithms and bunker-bound AI traps, the VFX never overshadows the tactile performance at the film’s centre.
Falling, Flying, Faking It Beautifully
For fans of the franchise, visual effects, or just adrenaline-fueled cinema, this episode offers a thoughtful cinematic critique on how modern VFX artistry and old-school stuntwork can coexist to save a film that has lost its driving narrative direction.
This week in our lineup is:
Matt Wallin * @mattwallin www.mattwallin.com
Follow Matt on Mastodon: @
Jason Diamond @jasondiamond www.thediamondbros.com
Mike Seymour @mikeseymour www.fxguide.com + @mikeseymour
Special thanks to Matt Wallin for the editing & production of the show with help from Jim Shen.
fxpodcast: Landman’s special effects and explosions with Garry Elmendorf
Garry Elmendorf isn’t just a special effects supervisor, he’s a master of controlled chaos. With over 50 years in the business, from Logan’s Run in the ’70s to the high-octane worlds of Yellowstone, 1883, 1923, and Landman, Elmendorf has shaped the visual DNA of Taylor Sheridan’s TV empire with a mix of old-school craft and jaw-dropping spectacle. In the latest fxpodcast, Garry joins us to break down the physical effects work behind some of the most explosive moments in Landman.
As regular listeners know, we occasionally interview people working in SFX rather than VFX. Garry’s work is not the kind that’s built in post; his approach is grounded in real-world physics, practical fabrication, and deeply collaborative on-set discipline. Take the aircraft crash in Landman’s premiere: there was no CGI beyond comp cleanup. It was shot practically, with a Frankenstein plane built from scrap, rigged with trip triggers and detonated in real time.
Or the massive oil rig explosion, which involved custom pump jacks, 2,000 gallons of burning diesel and gasoline, propane cannons, and tightly timed pyro rigs. The scale is cinematic. Safety, Garry insists, is always his first concern, but what keeps him up at night is timing. One mistimed trigger, one failed ignition, and the shot is ruined.
In our conversation, Garry shares incredible behind-the-scenes insights into how these sequences are devised, tested, and executed, whether it’s launching a van skyward via an air cannon or walking Billy Bob Thornton within 40 feet of a roaring fireball. There’s a tactile intensity to his work, and a trust among his crew that only comes from decades of working under pressure. From assembling a crashable aircraft out of mismatched parts to rigging oil rig explosions with precise control over flame size, duration, and safety, his work is rooted in mechanical problem-solving and coordination across departments.
In Landman, whether coordinating multiple fuel types to achieve specific smoke density or calculating safe clearances for actors and crew around high-temperature pyrotechnics, Elmendorf’s contribution reflects a commitment to realism and repeatability on set. The result is a series where the physicality of explosions, crashes, and fire-driven action carries weight, both in terms of production logistics and visual impact.
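Purely to illustrate the kind of clearance arithmetic mentioned above, and not Elmendorf's actual method, here is a minimal sketch using an idealised point-source radiation model. Every number in it is an invented placeholder; real SFX standoff distances are set by testing, regulation, and the pyrotechnician's judgment.

```python
import math

def safe_distance_m(heat_release_kw: float, radiative_fraction: float, max_flux_kw_m2: float) -> float:
    """Distance at which radiant flux from an idealised point source falls to max_flux_kw_m2."""
    return math.sqrt(radiative_fraction * heat_release_kw / (4.0 * math.pi * max_flux_kw_m2))

# Made-up example: a 50 MW fireball radiating 30% of its energy, limiting crew exposure to 2.5 kW/m^2.
print(round(safe_distance_m(50_000, 0.30, 2.5), 1), "m")  # ~21.9 m, i.e. roughly 70 feet
```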
Listen to the full interview on the fxpodcast.
The Wheel of Time postviz reel from Proof
For Season 3 of Amazon’s The Wheel of Time, Proof Inc. reimagined post-visualization, developing an innovative “Sketchvis” pipeline that blurred the boundaries between previs, postvis, and final VFX. Under Supervisor Steve Harrison, Proof created over 35 minutes of intricate, stylized visualizations across all eight episodes, establishing an expressive visual foundation for the series’ complex magical elements known as “channeling.”
Proof’s Sketchvis combined 2D artistry with sophisticated 3D execution using Maya and Nuke, complemented by vibrant glows and intricate distortion effects. Each spell’s distinct energy was carefully choreographed, whether corkscrewing beams of power or serpentine streams of water, closely aligning with the narrative’s elemental logic and dramatically influencing the show’s pacing and visual storytelling.
Working closely in daily collaboration with Production and VFX Supervisor Andy Scrase, the Proof team took on design challenges typically reserved for final VFX vendors like Framestore and DNEG. This proactive approach allowed Proof to define not only the aesthetic but also the motion logic of key magical sequences, creating a precise roadmap that remarkably mirrors what audiences will experience in the final episodes.
For Proof, traditionally known for character animation and environmental previs, this venture into nuanced effect design and movement choreography represented both a creative challenge and a significant expansion of their artistic repertoire, adding to the visual texture of The Wheel of Time and pushing post-visualization into compelling new creative territory. A core team of six artists contributed to all eight episodes. Proof’s ability to step beyond previs and postvis into effect design and movement development made them a key partner, enhancing in-camera performances and helping shape the visual language of the series.
fxpodcast: Chefs of data – Etoile and machine learning
On this episode of the fxpodcast, we have an exclusive behind-the-scenes look at the visual effects work for Amazon’s Étoile, featuring an in-depth discussion with David Gaddie from Afterparty VFX. The team undertook the challenging face replacement work for the show’s ballet dancers, employing cutting-edge AI technology to seamlessly blend lead actors’ faces onto the performances of professional dance doubles.
Étoile, the latest series from Amy Sherman-Palladino, renowned creator of The Marvelous Mrs. Maisel and Gilmore Girls, centers on elite ballet dancers. To realize this vision, the show’s VFX supervisor, Lesley Robson Foster, engaged Afterparty VFX to research and develop innovative AI-driven solutions capable of handling the demanding visual complexities inherent in ballet sequences.
This task was notably difficult, as traditional deepfake tools are typically optimized for straightforward, frontal shots. Ballet, however, involves rapid spinning, flips, significant motion blur, and hair frequently obscuring faces, creating immense technical hurdles. Additionally, director Amy Sherman-Palladino preferred long, uninterrupted takes, some lasting nearly a full minute, eliminating conventional editing methods and cheats used to mask transitions between the actors and their dance doubles. Crucially, the final shots needed to authentically reflect the actors’ performances, rather than simply showcasing their doubles.
In this episode, we explore in-depth how David Gaddie and the Afterparty VFX team developed proprietary solutions tailored specifically to these unique challenges. Their process combined computer-generated imagery, advanced AI, meticulous data segmentation, extensive manual refinements, and significant artistic skill to achieve convincing, performance-driven visual effects.
fxpodcast: Union VFX’s work on Black Mirror Season 7 – USS Callister
In this episode of the fxpodcast, we speak with David Schneider, senior DFX supervisor, and Jane Hayen, 2D supervisor from Union VFX, to discuss their extensive visual effects work on the USS Callister episode for Black Mirror season 7. From spacecraft interiors to stylised teleportation and frenetic dogfights in deep space, the duo outlined how their team brought The USS Callister back with upgraded tech, intricate referencing to the original episode, and a keen eye for the audience’s expectations.
In addition to the audio podcast, we’ve been creating YouTube videos of many of our fxpodcast episodes and this one is available in video form as well.
Returning to the world of Black Mirror’s iconic “USS Callister,” the latest Season 7 installment pushes visual storytelling into new territory. With over 215 shots, 50 of which are fully CGI, Union VFX stepped in to bring the pixel-perfect dystopia to life. The episode features some of the studio’s most ambitious work to date, blending nostalgic retro-futurism with cutting-edge CG in a cinematic-scale production that feels more like a feature film than episodic television.
Referencing the Original While Evolving the Visual Language
One of the standout aspects of Union VFX’s approach was the conscious effort to bridge the look and feel of the original Season 4 Callister episode while modernising its visual language. “We were always referencing back,” said Hayen. “Even small things like lens flares had to match. The original had a slightly vintage aesthetic; this one leaned more into a stylised video game feel.”
With Framestore having crafted the original ship assets, Union VFX inherited and upgraded these elements. “We reused the Callister’s CG model and textures from Framestore,” Schneider explained. “But a big chunk of our work was new, especially around the ‘Heart of Infinity,’ which didn’t exist before and needed to evolve into something iconic.”
Building the Heart of Infinity and Space Combat
At the narrative core of the episode is the Heart of Infinity, a massive, ominous space structure concealing a digital ghost. Union VFX collaborated closely with director Toby Haynes and Black Mirror creator Charlie Brooker on its concept. The final asset, a gyroscopic megastructure secretly built from computer parts, underwent several iterations.
“You only really notice it’s made of CPUs and circuit boards when you get close,” said Schneider. “We had three levels of detail, from wide establishing shots down to intense dogfighting sequences, where you see ships weaving through CPU rings.” Previsualization played a key role, but the sequences remained fluid. “Charlie and Toby are very iterative,” Schneider noted. “Some previs translated straight to screen. Other scenes evolved considerably, especially once the action choreography changed.”
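The "three levels of detail" Schneider describes is the standard level-of-detail (LOD) approach of swapping asset complexity with camera distance. As a hedged illustration only (the thresholds and asset names below are invented, not Union VFX's actual setup), the selection logic amounts to something like this:

```python
# Hypothetical LOD selection by camera distance; thresholds and asset names are invented.
LODS = [
    (50.0, "heart_of_infinity_hi"),            # close range / dogfights: full CPU and circuit-board detail
    (500.0, "heart_of_infinity_mid"),          # mid shots: simplified boards, baked detail maps
    (float("inf"), "heart_of_infinity_lo"),    # wide establishing shots: silhouette-level proxy
]

def pick_lod(camera_distance: float) -> str:
    """Return the first LOD whose distance threshold covers the camera distance."""
    for max_distance, asset in LODS:
        if camera_distance <= max_distance:
            return asset
    return LODS[-1][1]

print(pick_lod(12.0))    # heart_of_infinity_hi
print(pick_lod(2500.0))  # heart_of_infinity_lo
```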
LED Walls, Cockpits, and Interactive Light
Set lighting was another carefully managed component, especially for cockpit shots. “The bridge had a large LED wall behind the viewport,” Hayen said. Union VFX contributed pre-rendered loops of space backdrops, hyperspace tunnels, and planets, offering real-time interactivity for both lighting and actor eyelines. “Those LED plates ended up as final pixels in many shots,” Schneider added. “But where designs weren’t finalized, we reverted to green screens and post-comp.” To sell realism, cockpit sets were built practically with motion rigs and rotating light arms. “You really see the lighting interact with the cast’s faces,” said Hayen. “That kind of contact lighting sells it far better than trying to fake everything in post.”
Controlling the LED wall with an iPad
Stylized Teleportation and Video Game Visuals
The entire show leans heavily into video game aesthetics, not just in narrative but also in design. “The teleporting effect was intentionally layered,” said Schneider. “We started with chunky voxel blocks building up, refining into wireframe, then finally resolving into the real actor’s plate.” That low-to-high-res visual progression mimics game asset loading and nods to gaming history, a detail the team enjoyed threading through. In space battles, color-coded neon strips on ships offered a “Tron-meets-retro” visual shorthand. “You always knew which character was flying which ship,” said Hayen. “It really helped storytelling.”
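The layered teleport progression Schneider describes (voxels, then wireframe, then the live-action plate) is essentially a staged dissolve. The sketch below is a hypothetical illustration of how per-layer opacities could be scheduled over normalised time; in production this would be keyframed by a compositor in Nuke rather than computed, and the stage boundaries here are invented.

```python
def teleport_weights(t: float) -> dict[str, float]:
    """Per-layer opacities for normalised time t in [0, 1] across a three-stage dissolve."""
    t = min(max(t, 0.0), 1.0)
    if t < 0.4:                        # stage 1: chunky voxel blocks build up
        return {"voxel": 1.0, "wireframe": 0.0, "plate": 0.0}
    if t < 0.7:                        # stage 2: voxels cross-fade into wireframe
        k = (t - 0.4) / 0.3
        return {"voxel": 1.0 - k, "wireframe": k, "plate": 0.0}
    k = (t - 0.7) / 0.3                # stage 3: wireframe resolves into the real actor's plate
    return {"voxel": 0.0, "wireframe": 1.0 - k, "plate": k}

for t in (0.2, 0.55, 0.9):
    print(t, teleport_weights(t))
```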
Tools of the Trade
Union’s 3D pipeline ran on Arnold, with compositing handled in Nuke. “We leaned heavily on Optical Flares for all the over-the-top lensing,” said Hayen. “It’s a compositor’s dream.” While The Foundry’s CopyCat machine learning tool is gaining traction for rotoscoping and cleanup, it was used minimally here. “We’ve used it on other shows,” said Schneider. “But USS Callister needed more bespoke solutions.”
High Expectations, Met With Precision
Following the acclaimed first Black Mirror episode, Union VFX was keenly aware of the scrutiny this sequel would attract. “Framestore’s work was brilliant,” said Schneider. “We had to meet, if not exceed, the visual standard they set—especially with tech evolving in the years since.” Fortunately, the combination of intricate design, strong creative collaboration, and technical precision delivered an episode that both honours its predecessor and pushes the story forward. “It was a real joy to work on,” said Hayen. “And yes, a bit of a dream job too.”
VFXShow 295: Thunderbolts*
Thunderbolts* is the 36th film in the Marvel Cinematic Universe. The film was directed by Jake Schreier and stars an ensemble cast featuring Florence Pugh, Sebastian Stan, Wyatt Russell, Olga Kurylenko, Lewis Pullman, Geraldine Viswanathan, Chris Bauer, Wendell Pierce, David Harbour, Hannah John-Kamen, and Julia Louis-Dreyfus. In the film, a group of antiheroes are caught in a deadly trap and forced to work together on a dangerous mission.
Don’t forget to subscribe to both the VFXShow and the fxpodcast to get both of our most popular podcasts.
This week in our lineup is:
Matt Wallin * @mattwallin www.mattwallin.com
Follow Matt on Mastodon: @
Jason Diamond @jasondiamond www.thediamondbros.com
Mike Seymour @mikeseymour www.fxguide.com + @mikeseymour
Jason referenced his Tribeca film: How Dark My Love
Special thanks to Matt Wallin for the editing & production of the show with help from Jim Shen.
Politics meets pixels: business implications of a possible 100% film tariff
In this fxpodcast episode, we speak with Joseph Bell, founder of HDRI Intelligence, during FMX 2025 in Stuttgart. Bell, a veteran of companies like ILM and The Mill, brings his experience in production and business intelligence to unpack one of the most controversial topics currently shaking the film and VFX industry: the proposed 100% tariff on films made outside the United States.
U.S. President Donald Trump’s Truth Social post proposing the tariff caught the industry off guard, and while many question the policy’s feasibility, the resulting uncertainty is already having a chilling effect on global production planning. As you can hear in this episode of the fxpodcast, Bell stresses that although implementation may be unclear, major studios will likely slow or pause projects while assessing potential financial and legal risks. The impact of that delay will ripple through international production hubs, including Australia, Canada, and the United Kingdom.
We tried to move our discussion beyond speculation to hard data. Bell explains that just a handful of U.S.-based companies control the majority of global content spend, including Disney, Netflix, Warner Bros Discovery, Amazon, Apple, and Paramount (and possibly Sony). This concentration means that even small policy shifts can have massive consequences for global employment and investment in visual effects and animation.
Bell also touches on the complexity of applying tariffs to digital services rather than physical goods. Unlike cars or steel, a digital film is not easily defined by geography. Our conversation explores the legal and logistical challenges of taxing streamed content, outsourced VFX, and international co-productions in an era where bits cross borders instantly and post-production teams are often distributed across multiple continents.
Trump’s plan, it seems, was a reaction to Jon Voight’s ‘Plan To Save Hollywood’, which, according to Deadline, Voight presented to the President after talks with unions, government officials, and executives (Sylvester Stallone, Jon Voight, and Mel Gibson are special ambassadors for the President). The plan outlines a blueprint that includes:
- A 10–20% federal tax credit stacked on top of existing state incentives.
- A cultural content test to determine American authenticity.
- A requirement that 75% of production and post occurs within the U.S., using U.S. labour.
- A return to the shuttered Financial Interest and Syndication Rule (studios can’t own their shows; many streamers now require 100% ownership).
- Tax treaties, starting with countries such as the UK, allowing studios to claim incentives in both countries without double dipping.
- Expanded application to all content: theatrical, TV, streaming, and even digital platforms like YouTube, Facebook, and X.
- A 120% tariff on foreign tax incentives (if an overseas tax incentive pays a US production, e.g. a $10M offset would mean a $12M tariff).
Bell believes that such incentives may mirror systems used in Canada and the UK, but he warns that poorly designed conditions can result in unintended consequences and inefficiencies. Whether the tariff is ultimately enacted or not, its effects are already being felt, and the VFX industry will need to be informed, adaptable, and collaborative in the face of these new and growing geopolitical issues.
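As a quick illustration of the 120% tariff arithmetic quoted in the list above, and only that (the proposal, as reported, does not define any mechanism beyond the quoted example), the calculation reduces to:

```python
def proposed_tariff_usd(foreign_incentive_usd: float, rate: float = 1.20) -> float:
    """120% tariff applied to the value of a foreign tax incentive, per the quoted proposal."""
    return foreign_incentive_usd * rate

print(proposed_tariff_usd(10_000_000))  # 12000000.0, i.e. a $12M tariff on a $10M overseas offset
```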
Many small VFX studios are struggling, but it is worth noting that Bell believes the largest companies in the VFX space (those employing over one thousand people) are actually maintaining or even growing their headcount, with the exception of MPC and its related companies. This, Bell argues, contradicts narratives that AI and automation are currently rapidly replacing artists. Instead, the data shows that successful studios are those capable of delivering large-scale, consistent, high-volume output. Bell also highlights that these companies are globally distributed, reinforcing the idea that modern filmmaking is inherently international.
Joseph Bell will be presenting his FMX 2025 talk on Friday at 11.15am, and listeners can access the full talk through the FMX digital pass. The talk provides a broader overview of the visual effects atlas project, international production trends, and employment patterns in a post-strike, post-COVID landscape.
-
WWW.FXGUIDE.COM
VFXShow 294: Black Mirror + the (Mad) Life of the Sci-Tech Oscars
This week we are diving deep into the mind-bending brilliance of Black Mirror Season 7 — a season that doesn't just reflect our tech fears but amplifies them. From walk-in pictures to outer space, it's a VFX showcase with something to say. Plus: we're taking you behind the velvet rope at the 2025 Sci-Tech Oscars, where innovation meets recognition. What tech stole the spotlight? Jason Diamond tells all. Jason was on the selection committee this year.
We discuss Black Mirror Season 7, focusing on three episodes in particular.
Episode 1: Common People. When a medical emergency leaves schoolteacher Amanda fighting for her life, desperate husband Mike signs her up for Rivermind, a high-tech system that will keep her alive – but at a cost.
Episode 6: Eulogy. An innovative system that enables users to literally step into photographic memories of the past leads a lonely man to re-examine a heartbreaking period in his life.
Episode 7: USS Callister: Into Infinity. Robert Daly is dead, but the crew of the USS Callister – led by Captain Nanette Cole – find that their problems are just beginning.
The Sci-Tech 2025 Awards: where dystopian fiction meets real-world tech — and both get graded by the Academy. And the special mention that Jason made of "mad scientist" Neeme Vaino for the development of Fireskin360 Naked Burn Gel (3:25 in).
Don't forget to subscribe to both the VFXShow and the fxpodcast to get both of our most popular podcasts.
This week in our lineup:
Matt Wallin * @mattwallin www.mattwallin.com. Follow Matt on Mastodon: @[email protected]
Jason Diamond @jasondiamond www.thediamondbros.com
Mike Seymour @mikeseymour www.fxguide.com + @mikeseymour
Special thanks to Matt Wallin for the editing & production of the show with help from Jim Shen.
-
WWW.FXGUIDE.COM
Daredevil: Born Again: The Art and Craft of Critical VFX Collaboration
In Marvel Television's Daredevil: Born Again, Matt Murdock, a blind lawyer with heightened abilities, is fighting for justice through his bustling law firm, while former mob boss Wilson Fisk pursues his own political endeavors as the new Mayor of New York. When their past identities begin to emerge, both men find themselves on an inevitable collision course.
(L-R) Matt Murdock/Daredevil (Charlie Cox) and Director Michael Cuesta on the set
The showrunner and executive producer is Dario Scardapane, who was writer and executive producer of The Punisher. Matt Murdock is again played by Charlie Cox, and Vincent D'Onofrio returns as Daredevil's longtime nemesis Wilson Fisk, aka Kingpin.
(L-R) Arty Froushan, Director Justin Benson, Vincent D'Onofrio and Director Aaron Moorhead on the set. Photo by Giovanni Rufino. © 2025 MARVEL.
To bring the visceral and graphic tone of the action sequences to life in Daredevil: Born Again, the filmmakers turned to a powerful VFX team and legendary stunt coordinator Philip Silvera. Charlie Cox reunited with Silvera, as he had served as stunt and fight coordinator on the original series. "The beautiful thing about this show is it's rich in its characters, and every time we design one of the stunt and fight sequences, it has to be relative to the characters in the moment and how they would react," says Silvera. "Once we understand the stakes and the emotions the characters are carrying into a scene, that's where we begin to design our sequences."
Charlie Cox has worked with Silvera for many years. "He's one of the greatest of all time," says Cox. "I never get bored watching him create these iconic scenes that the fans are going to go crazy about. He never loses sight of how it has to be so much more than just punches and kicks when you're telling a story through a stunt sequence. Every beat has to be specific to those characters and why they are in physical conflict."
To complete the fight sequences, the physical stunt team worked with the VFX team to bring the complex scenes to life. fxguide spoke to the VFX team: Fahed "Freddy" Alhabib, VFX Producer, and Gong Myung Lee, the Production Visual Effects Supervisor.
FXGUIDE: Congrats on the show, I am sure a second season will be even more impressive.
VFX Team: Hi Mike, thanks so much for having us. I've been a fan of your show, so it's truly an honor to contribute.
(L-R) Frank Castle/The Punisher (Jon Bernthal) and Matt Murdock/Daredevil (Charlie Cox) in DAREDEVIL: BORN AGAIN.
FXGUIDE: Given the visually dark nature and cinematography of the original series, were you finding yourself balancing letting VFX fall into the blacks? I assume the plate photography was shot quite dark, versus being graded down. For example, the complex fight scenes in Episode 6 with Muse, etc. – which I assume had some digital enhancement and rig removal? Similarly, the rooftop fight in Episode 1?
VFX Team: As grounded as Daredevil: Born Again is, the photography wasn't pushed into extreme darkness on set. Our DP, Hillary Spera, crafted the lighting with intentional atmosphere and shaped depth — a distinct look compared to the original Netflix series. The goal wasn't to underexpose, but to create rich, dimensional plates that gave us flexibility later.
Shot on Alexa 35 cameras with custom-tweaked anamorphic lenses, the footage captured a wide dynamic range, excellent shadow detail, and strong tonal separation — which gave us room to adjust in DI as needed, and provided VFX with greater latitude for integration across varying lighting conditions. From a VFX standpoint, every element had to match the original photography before DI — ensuring that no matter how far the final image was pushed in the grade, it would blend invisibly. A big part of that meant capturing technical data early: lens grids, flare and bokeh libraries, LiDAR scans, photogrammetry, HDRI captures, clean plates, and element photography. For heavier VFX shots, we layered in CG atmosphere, 2D haze, and light diffusion to help the work sit naturally into the blacks and hold up through the final grade.
(L-R) Wilson Bethel and Charlie Cox on the set
On a show like this, success in VFX comes when you can't tell where the practical ends and the digital begins. That philosophy was critical for Daredevil: Born Again. The VFX mandate was simple but strict: grounded, analog, invisible. Daredevil isn't a superhero with flashy powers — he bleeds, he suffers — and our visual effects had to support that reality without pulling the audience out. Whether it was digital blood, CG weapons, digidouble enhancements, or environment extensions, everything had to be physically plausible and embedded into the action. Gore was pushed to a 10, but it always had to stay realistic.
We worked closely with our stunt team, led by Action Director and Supervising Stunt Coordinator Philip Silvera, and with our SFX team, led by SFX Coordinator Roy Savoy. As much as possible, we built the foundation practically. Even when SFX elements weren't in the final take, we referenced them for physical accuracy. All digidouble animation was driven by motion capture from the stunt team, and all environment extensions were grounded in real-world photography and LiDAR scans.
The Muse sequence in Episode 6 — the fight between Muse and Daredevil, intercut with Fisk and Adam — is a great example. It's one of my favorite action sequences of the season. The Muse fight was extremely practical at its core, but heavily supplemented with seamless VFX: rig removals, CG weapon takeovers and extensions, blood hits, debris, impact dust, atmosphere additions, and even some full-CG shots — like Daredevil running through the tunnel. Motion blur, lens flares, depth of field, and diffusion were critical tools we used to take the edge off the CG and blend the line between real and digital.
Similarly, the rooftop fight in Episode 1, which was part of a larger "oner" action sequence starting with the crash into Josie's Bar, was one of our most technically challenging sequences. We used CG digidoubles strategically for multiple camera stitches: from Bullseye (BE) coming up the stairs, to Daredevil's jump from the shed onto the rooftop, to the rooftop fight itself, which led directly into the close-up practical performances of Bullseye and Daredevil fighting at the rooftop ledge.
Philip Silvera and Charlie Cox on the set. © 2025 MARVEL.
We shot across three different locations — the bar, the stairway, and a bluescreen rooftop stage — and stitching those together required a blend of 2.5D projections and 3D techniques, including full CG hallway and stairwell builds driven by mocap and keyframe animation.
On the rooftop set, we carefully matched our CG extensions to the practical lighting on stage and added additional CG rooftop lights to justify in-camera lens flares. The final tilt down to the street involved replacing the entire building facade with CG and using a 2.5D projection of Foggy's plate to complete the move.
FXGUIDE: Can you discuss the environment work in and around the exteriors of Hell's Kitchen, please? How much was set extensions, and how much was just sign removal and similar Digital Matte Painting (DMP)?
VFX Team: All the Hell's Kitchen rooftop environments in Episode 1 and Matt's apartment rooftop in Episode 3 were full CG set extensions. We shot those scenes on stage, using a practical rooftop build against chroma blue backings. For these builds, we scouted real rooftops in Hell's Kitchen and Williamsburg with our creatives, including our Production Designer and DP. Once we identified the desired locations, we captured tiled photography, panoramas, HDRI at different times of day, and LiDAR scans of hero buildings to feature in the foreground and midground. When it came to layout, we took some liberties to dress the scenes for camera while staying true to the character of New York — adding signature skylights, water towers, antennas, and other rooftop details. Rise, Folks, and Ghost VFX all did a fantastic job enriching the environment builds, even adding dimensional interiors behind windows and people to keep the world feeling alive.
Matt Murdock/Daredevil (Charlie Cox)
For the broader NYC environments shot practically, it was a close collaboration with our Art Department. Where we couldn't dress practically, VFX stepped in to remove signage, graffiti, or other undesired elements. When Muse's murals were part of the storytelling, the smaller installations were practical builds, while the larger, skyline-dominating pieces were created digitally.
We also had a handful of drone overhead shots. In Episode 1, for example, we added VFX fog and atmosphere to the skyline to match the sequence's tone and climate. For the blackout scenes in Episode 9, we took a hybrid approach: capturing daytime and nighttime passes for each drone shot in the same location so we could composite the action of lights shutting off — and later turning back on — in 2.5D comp.
(L-R) Daredevil/Matt Murdock (Charlie Cox) and Devlin (Cillian O'Sullivan)
FXGUIDE: Was the complex combat choreography done in camera or was there much digi-double work? It has been said that the way Matt Murdock fights in this show is very different from that of Netflix's Daredevil. Can you comment?
VFX Team: The goal for all combat choreography on Daredevil: Born Again was to capture as much as possible in-camera. Wherever something couldn't be achieved practically — whether for logistics, safety, scheduling, or editorial changes — VFX stepped in to support. The choreography was fully stuntviz'd in camera by our Stunt Team. This allowed us to map out what could be accomplished practically, what would require rig or pad removals, and where digital takeovers or separate bluescreen elements would be needed in post. Most of the fights you see are fully practical at their core, with VFX used to enhance and extend — adding CG weapons, blood hits, wounds, or environment interactions.
We worked closely with SFX to ground the impacts as much as possible: SFX created practical breakaway effects for bone breaks under clothing, which we then enhanced digitally when needed, and blood squibs and paks were used practically, with VFX sometimes augmenting or extending the effects and cleaning up visible rigs. Whenever digidouble work was needed, we shot motion capture specifically for the action — from simple standing, walks, and runs to intricate baton combat — to ground everything in real character movement.
Charlie Cox
There were a few sequences where digidoubles played a bigger role. In Episode 1, during Daredevil's rooftop run leading into the "oner" sequence, we used a fair amount of digidouble work to help stitch complex action together. In Episode 2, during Hector's train fight, we used a digidouble for the man getting hit by the train. In Episode 6, for the Muse versus Daredevil fight, there's a moment where Daredevil sprints down the tunnel to punch Muse — that was a digidouble takeover. And in Episode 7, during the fight in Heather's office, we used digidouble proxies to enhance the blood interaction and costume fixes during the close-quarters fight.
As for how Matt Murdock's fighting style differs in Born Again compared to the Netflix series — I can't speak to any specific remarks, but there's definitely an intentional creative arc this season. Early in the story, Matt is suppressing the Daredevil side of himself, trying to maintain control and protect the people around him. As the events escalate, and especially by Episode 6, you see a shift — the power, violence, and emotional weight behind the fighting all start to intensify alongside the character's internal struggle. The level of gore, destruction, and sheer force amplifies with it. And of course, having Philip Silvera back to choreograph the action — someone who helped define the fight language of the original series — brought continuity to the DNA of the character, even as the tone and emotional drive evolved.
Matt Murdock/Daredevil (Charlie Cox)
FXGUIDE: The show is presented in scope format (2.39:1) — but I assume it was shot 16:9, allowing for reframing and additional tracking information?
VFX Team: This was a fully anamorphic show, shot Open Gate on the Alexa 35 and framed for 2.39 extraction. With a 2:1 lens squeeze, we allowed for roughly a 6% safe area — utilizing 93.75% of the sensor's full height — to enable minor stabilization and repositioning in post. In most cases, we were committed to what we captured in camera.
While the majority of the show was photographed this way, we used a different setup for our "grande sensory" moments — switching to a three-camera rig to create a distinct visual language around Daredevil's heightened senses. These sequences were still shot Open Gate on the Alexa 35, but used a central spherical zoom lens flanked by two spherical primes. The creative idea was that when Daredevil began to sense something, we would dolly in while simultaneously zooming out — creating a sense of heightened perception. Directors Justin Benson and Aaron Moorhead also wanted to explore shifting aspect ratios to represent how Matt's sensory awareness expanded and contracted. Shooting spherical while still framing for 2.39 gave us more vertical headroom and flexibility in post. At the widest focal length, we stitched in the flanking cameras to visually expand his perception — showing that he's absorbing the world around him.
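As a brief aside on the dolly-in-while-zooming-out idea described above: the classic dolly-zoom relationship holds the subject's framed size constant by scaling focal length with subject distance. The sketch below is illustrative only; the lens and distance values are our assumptions, not production numbers.

```python
# Classic dolly-zoom relationship (illustrative; not production figures):
# to keep a subject the same size in frame, focal length scales with camera-to-subject distance.
def matching_focal_length(f_start_mm: float, d_start_m: float, d_new_m: float) -> float:
    """Focal length that preserves the subject's framed size at a new camera distance."""
    return f_start_mm * (d_new_m / d_start_m)

# e.g. starting on a 50mm lens at 4m from the subject and dollying in to 2m:
print(matching_focal_length(50.0, 4.0, 2.0))  # -> 25.0mm, i.e. zoom wider as you push in
```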
Once he locked onto a specific sound, we would "unwrap" back to the central zoom and continue a slow zoom into the subject. Simultaneously, the aspect ratio would subtly widen vertically; and as his focus narrowed, it would compress back in — a transition completed in VFX. The result was both cinematic and grounded — subtle, tactile, and designed to play as though it were captured entirely in camera.
(L-R) Director Jeffrey Nachmanoff and Cinematographer Pedro Gomez Millan on the set
FXGUIDE: When did you first get involved with the show? Was Season 2 always in your planning, and if so did that change any of your VFX planning or approaches?
VFX Team: I started on the show in November 2022, right at the beginning of pre-production. Originally, it was planned as one long season, but partway through the process, it was decided to proceed with nine episodes for Season 1. During post, we got the news that there would be a separate Season 2, and it was exciting to know we'd get the chance to keep building on what we had started. Now, we're already midway through shooting Season 2, and it's been incredibly exciting to push the world even further — evolving the storytelling, expanding the scope, and building on the visual language we established in Season 1.
Matt Murdock/Daredevil (Charlie Cox) in Marvel Television's DAREDEVIL: BORN AGAIN
FXGUIDE: What was the hardest aspect of the series and how did you address it?
VFX Team: As much as I love the grounded nature of the show, there were definitely moments I found myself wishing we had a seven-headed dragon — not for the reasons you might think, but because it would have made it easier to justify gathering more data and element plates. On a grounded show, it's sometimes harder to get the time and resources VFX needs, because the work is supposed to be invisible — but the technical demands are still just as real.
Looking back, one of the biggest challenges was simply the production conditions we were working in — dealing with COVID, industry strikes, and the production disruptions that came with them. It impacted schedules, crews, and overall planning. TV moves much faster than film, and the action scenes moved even faster — often leaving very little time for traditional VFX planning. When that happens — when you can't get enough scanning, lighting references, or coverage — integration becomes trickier and you're relying on less data, or parsing through too many layers married in camera. That always makes the work harder in post.
Despite all the production challenges, we stayed disciplined about capturing as much as we could — every LiDAR scan, cyberscan, clean plate, lighting reference, and practical element we gathered during Season 1 made a huge difference. It gave us the foundation we needed to integrate VFX properly and maintain the level of quality we were aiming for. The biggest thing we learned is that when VFX is involved early — collaborating closely with Stunts, SFX, Art Department, and Camera — that's when we do our best work. Season 1 taught us a lot, and heading into Season 2, that collaboration and shorthand between departments has gotten even stronger. Already, you can feel the difference — the collaboration is tighter, the planning is sharper, and we're excited to take the VFX even further in Season 2.
(L-R) Matt Murdock/Daredevil (Charlie Cox) and Frank Castle/The Punisher (Jon Bernthal)
FXGUIDE: Can you confirm the VFX houses?
VFX Team: There were a lot of VFX houses involved across Daredevil: Born Again, and it was a true team effort at every level.
The Third Floor – Sophia Yu, Alyssa Knittel
RISE Visual Effects Studios – Stuart Bullen, Alex Twigg, Roy Hoes, Lara Lom
Folks VFX – Phil Prates, Tanya Haddad
Ghost VFX – Jessica Norman, Maria Giron, Julie Jepsen
Phosphene – Aaron Raff, Vivian Connolly, Steven Weigle, Matt Griffin
Powerhouse VFX – Ed Mendez, Dan Bornstein, Adrienne McNeary
Soho VFX – Berj Bannayan, Keith Sellers, Kelly McCarthy
Anibrain – Ajay Patel, Gagan Nigam, Dnyandeep Gautam Pundkar, Neeraj Singh, Sumit Mukherjee, Pradeepkumar Vadisherla, Divesh Vijay Tupat, Jesh Krishna Murthy, Saurabh Dalmiya, Marc de Sousa
BASE FX – Jared Sandrew, Shad Davis, Sun Xiaodan
SDFX Studios – Alex Guri, Mark Simone
Cantina Creative – Aaron Eaton, Donna Cullen
Scanable – Travis Reinke, Pasquale Greco
Lola VFX – Edson Williams, Jack Dorst, Will Anderson
Dark Red Studios – Olney Atwell
In-house VFX Team – Mat Ellin
It was an incredible collaboration across so many talented artists and teams, and the final result really reflects the strength, creativity, and dedication everyone brought to the table.
FXGUIDE: Thanks so much.
Matt Murdock/Daredevil (Charlie Cox) in Marvel Television's DAREDEVIL: BORN AGAIN, exclusively on Disney+. Photo courtesy of Marvel Television. © 2025 MARVEL.
With Daredevil: Born Again, Marvel has not only redefined the visual and emotional language of the character, but also elevated the standards for grounded, gritty, character-driven VFX on television. The seamless blend of in-camera stunts, atmospheric cinematography, and surgically integrated digital effects creates a world that feels as harsh and unforgiving as Hell's Kitchen itself. Every punch, every shadow, every rooftop leap carries weight — not just physically, but narratively. As Season 2 unfolds, we expect the VFX team to double down on what made Season 1 so impactful: invisible craft in service of visceral storytelling.
-
WWW.FXGUIDE.COM
VFX Dying to Work with Mickey 17
Mickey 17 is a black comedy film written, produced, and directed by Bong Joon Ho, based on the 2022 novel Mickey7 by Edward Ashton. The film stars Robert Pattinson in the title role, alongside Naomi Ackie, Steven Yeun, Toni Collette, and Mark Ruffalo. Set in the year 2054, the plot follows a man who joins a space colony as an "Expendable", a disposable worker who gets cloned every time he dies. The lead VFX house was DNEG, which worked alongside Framestore to bring the film to 'life'.
In Mickey 17, the fusion of dark comedy and science fiction is brought to life through the meticulous efforts of DNEG's visual effects team. VFX Supervisor Chris McLaughlin and Animation Director Robyn Luckham played pivotal roles in translating the film's complex narrative and unique aesthetic into a visual spectacle, as you can hear in this week's fxpodcast.
Chris McLaughlin, overseeing the visual effects, emphasized the collaborative nature of the project. The team faced the challenge of creating the icy planet, ensuring that the environment felt both otherworldly and tangible. The integration of live-action elements with CGI required precise coordination, particularly in scenes involving the "Expendable" clones, where Robert Pattinson's character undergoes multiple regenerations. McLaughlin noted that achieving seamless transitions between practical effects and digital enhancements was crucial in maintaining the film's immersive quality.
Robyn Luckham, leading the animation department, focused on the characterization of the film's unique creatures, notably the "Creepers." Drawing inspiration from Bong's vision, the animation team developed creatures that were both unsettling and endearing. Luckham highlighted the importance of motion capture technology in capturing nuanced performances, allowing the creatures to exhibit a range of emotions that resonated with audiences. The animation team's attention to detail ensured that each creature's movement contributed to the film's narrative depth.
The collaboration between McLaughlin and Luckham exemplifies the synergy required in modern filmmaking, where visual effects and animation converge to support storytelling. Their combined efforts resulted in a film that not only showcases technical prowess but also enhances the thematic elements central to Bong Joon Ho's storytelling. Mickey 17 offers audiences a richly textured and darn funny film.
-
WWW.FXGUIDE.COM
Behind the Plasticine: Crafting Wallace & Gromit: Vengeance Most Fowl
There is a certain alchemy at work in any Aardman production — that particular blend of humour, heart, and handcrafted charm — but Vengeance Most Fowl, the latest Wallace & Gromit adventure, reveals just how seamlessly that magic is now augmented by digital visual effects.
At the heart of this latest outing is Wallace's latest well-meaning but misguided invention: a "smart" garden gnome that quickly becomes rather more sentient than anyone might have hoped. For Gromit — ever the long-suffering but loyal canine companion — this is confirmation of his worst fears: Wallace's increasing dependence on technology is, once again, courting disaster.
Joining the production as VFX Supervisor was Howard Jones, whose work with Aardman stretches back to Pirates! and Shaun the Sheep. For Vengeance Most Fowl, Jones describes a production philosophy that remains fiercely loyal to Aardman's roots in stop-frame animation — the puppets, the sets, the hand-made detail — but is also unafraid to deploy digital tools in service of the story, as you can hear in this week's fxpodcast.
As Jones explains to fxguide, every effort was made to shoot practically wherever possible. VFX was never there to replace puppets, but to support them: rig removal, set clean-up, subtle environmental enhancements. And yet, as anyone who has seen the film's spectacular climax knows, sometimes the demands of scale, spectacle, and sheer comic absurdity require a little more than can be done in clay.
The final canal chase, set high atop a vertiginous aqueduct, is a case in point. While much of the world was built practically — including a vast 1/10th scale valley model — the aqueduct itself, and particularly its towering legs when the camera was looking down from the boats, were digital creations. The challenge wasn't just technical; it was aesthetic. The visual effects had to blend invisibly with the miniature sets, preserving the illusion of scale while avoiding the trap of making everything feel like a toy.
The water, always a tricky element in miniature filmmaking, was handled digitally using Houdini simulations. But here again, restraint was key. The water effects needed to be believable, but not so realistic that they shattered the handcrafted feel of the world. It's a fine line for the VFX team: too perfect, and the VFX would betray the aesthetic; too crude, and the scene would collapse under its own ambition. Throughout, Jones and his team walked this line with remarkable precision. The result is a sequence that feels epic without ever losing its signature Aardman warmth. Even the explosion that caps the sequence — a gloriously oversized fireball born of EmberGen and Houdini simulations — serves not as a moment of gritty realism, but as a perfectly judged punchline to Wallace's latest misadventure.
While the canal chase is the obvious showpiece, the film is packed with quieter, equally intricate work: in-camera focus pulls on miniature sets, digital fog designed to both obscure and reveal, computer screen inserts that carry their own embedded jokes, and dozens of subtle VFX tweaks invisible to all but the keenest viewer. Perhaps the best measure of success is that none of it feels like visual effects. In Vengeance Most Fowl, Aardman's commitment to story, character, and craft always comes first. The technology is there, but it's in service to something older: a tradition of storytelling that is as playful as it is precise.
Photographer: Richard Davies
As Jones himself puts it, "the joy of working on a film like this lies not just in the technical challenges, but in being part of a production where the playfulness on screen mirrors the spirit behind the camera". And in Wallace & Gromit: Vengeance Most Fowl, that spirit is alive and well — as eccentric, inventive, and unmistakably Aardman as ever.
-
WWW.FXGUIDE.COM
NAB 25: a leaner show but with an AI innovation focus
The NAB Show in Las Vegas was long the epicentre of breakthroughs in broadcast, media, and production technology. Once sprawling across the entire Las Vegas Convention Center and hosting over 100,000 attendees, the show now presents a more focused and intense experience; this year, the conference is expecting just over 60,000 visitors, but with attendees from 160 countries, and filling only part of the South Hall. But while the scale may have shifted, the ambition certainly hasn't.
Known for historic firsts like the debut of HDTV and the original unveiling of the RED Digital Cinema Camera, NAB remains the place "Where Content Comes to Life." In 2025, the spotlight turns squarely toward innovation in AI, automation, and next-generation production tools. From real-time visual effects to AI-driven editorial workflows, the show reflects an industry rapidly evolving under the weight and potential of machine learning and intelligent automation. Here are some of the most interesting new technologies and tools that are reshaping post-production.
Blackmagic Design
In Las Vegas, Blackmagic Design has unveiled DaVinci Resolve 20, a significant update packed with over 100 new features, many of which are powered by AI. While the NAB show may be smaller this year, Blackmagic's booth is showcasing some key transformative tools for post, and AI is clearly taking the lead. This release brings a host of intelligent features designed to streamline workflows and enhance creative control.
DaVinci Resolve 20
Among the headline tools is AI IntelliScript, which can automatically generate timelines from a text script, effectively turning written dialogue into structured edits. AI Animated Subtitles adds a dynamic layer to captioning by animating words in sync with spoken audio, while AI Multicam SmartSwitch uses speaker detection to automate camera switching in multicam edits. Audio post also gets an upgrade with the AI Audio Assistant, which analyzes entire timelines and builds a professional mix without manual intervention.
All of these features represent a deepening integration of AI into the DaVinci editing and finishing process as a practical assistant across real-world workflows. Beyond AI, DaVinci Resolve 20 introduces a new keyframe editor, voiceover recording palettes, expanded compositing capabilities in Fusion, and enhanced grading tools like Chroma Warp.
DaVinci Resolve 20 is available now as a public beta from the Blackmagic Design website. We expect a lot of buzz around the booth this week, especially from those watching how AI is integrating into the editorial and finishing landscape.
With the Blackmagic URSA Cine Immersive, the world's first cinema camera designed to shoot for Apple Immersive Video and Apple Vision Pro (AVP), DaVinci Resolve Studio 20 also introduces powerful new features for Apple Immersive Video. Filmmakers can now edit, colour grade, mix Spatial Audio, and deliver Apple Immersive Video captured using the new Blackmagic URSA Cine Immersive camera.
Adobe
At NAB 2025, Adobe has just unveiled powerful new updates to Premiere Pro and After Effects, with a strong emphasis on AI-driven features that enhance creative workflows and make editing smarter and faster.
In Premiere Pro 25.2, Adobe introduces Firefly-powered Generative Extend, a tool that allows editors to seamlessly add extra 4K frames to a clip, which is perfect for fixing cuts that start too early or end too soon.
Generative Extend works in the background as you continue editing, and it is powered by Adobe's Firefly AI. It also, importantly, includes Content Credentials, providing transparency about when and where AI was used.
https://www.fxguide.com/wp-content/uploads/2025/04/Generative-Extend-Sizzle-VideoS.mp4
Also debuting is Media Intelligence, an AI-based feature that simplifies finding the right shot by automatically recognizing content, including objects, environments, camera angles, and dialogue. Editors can now search hours of footage using natural language queries, asking for anything from close-ups of hands working in a kitchen to mentions of herbs, and Premiere will surface relevant clips, transcripts, and metadata.
As this AI analysis happens locally, there's no need for an internet connection, and Adobe emphasizes that no user content is ever used to train its models. Other workflow upgrades in Premiere include dynamic waveforms, improved colour management with drag-and-drop ease, coloured sequence labels, and expanded GPU acceleration: solid quality-of-life features requested by working professionals.
In After Effects version 25.2 there is a new high-performance playback engine for faster previews, fresh 3D motion design tools, and support for HDR monitoring, pushing the boundaries of motion graphics and VFX workflows. Together, these updates mark a push from Adobe toward AI-assisted editing that doesn't replace the artist, but rather assists them, automating the repetitive, surfacing what's useful, and letting creators focus more on storytelling, according to Adobe.
Foundry: Nuke Stage Virtual Production Product
Foundry has announced Nuke Stage, a new standalone application purpose-built for virtual production and in-camera VFX (ICVFX). Designed to streamline workflows from pre-production through final pixels, Nuke Stage gives VFX artists end-to-end control over imagery and colour in a unified pipeline. The tool enables real-time playback of photoreal environments on LED walls, live compositing, and layout, using industry standards like OpenUSD, OpenEXR, and OpenColorIO.
Nuke Stage
Nuke Stage has a familiar node-based interface consistent with Nuke, and the new tool is built to deliver efficiency and flexibility for productions of all sizes. Developed in collaboration with the VFX and virtual production community, Nuke Stage fills a gap in on-set workflows by integrating VFX compositing tools into real-time production. "Our hope is that Nuke Stage brings the expertise of VFX artists even closer to creative decision-making," said Christy Anzelmo, Chief Product Officer at Foundry. Industry professional Dan Hall of 80six called it "a handshake between VFX and virtual production," while Framestore's Connor Ling noted that familiarity with Nuke will build confidence in what's seen on the LED wall. Garden Studios' Sam Kemp praised the ability to tweak 2D assets live, calling it something that's been missing from virtual production for a long time.
Key features of Nuke Stage include real-time photoreal playback, live compositing, comprehensive colour support, and hardware-agnostic operation; with no specialized media servers, Nuke Stage runs on standard hardware. The tool supports 2D, 2.5D, and 3D content playback and allows teams to work in a linear, HDR-ready colour space using a consistent toolset from prep to post.
As part of the Nuke ecosystem, Nuke Stage is designed to offer a seamless, efficient pipeline from the first frame captured on set through to the final delivery.
NVIDIA
NVIDIA is showcasing Real-Time AI and Intelligent Media Workflows at NAB. NVIDIA is focused on the Blackwell platform, which serves as the foundation of NVIDIA Media2. This is a collection of NVIDIA technologies, including NVIDIA NIM microservices and NVIDIA AI Blueprints for live video analysis, accelerated computing platforms and generative AI software.
NVIDIA Holoscan for Media is also being shown, an advanced real-time AI platform designed for live media workflows and applications, alongside the NVIDIA AI Blueprint for video search and summarization. These tools make it easy to build and customize video analytics AI agents.
Cinnafilm upconversion tool
Additionally, Cinnafilm and NVIDIA have partnered on a new AI-powered HD-to-UHD upconversion tool, which will be commercially available later this year inside Cinnafilm's flagship conversion platform, PixelStrings, and it is launching at the 2025 NAB Show this week. Combining Cinnafilm's GPU-accelerated engine with a custom version of NVIDIA RTX AI Image Processing, the system delivers ultra-high-definition results in a single pass with what Cinnafilm says is 25-30% greater detail than its current tools. Designed for speed, scalability, and top-tier visual quality, the new solution sets a fresh benchmark for media upscaling.
DeepDub
Deepdub has unveiled Deepdub Live, a real-time multilingual dubbing solution designed for live sports, esports, and breaking news coverage. Deepdub is audio only, but a significant step forward for broadcasters. Powered by the company's proprietary Emotive Text-to-Speech (eTTS) engine, Deepdub Live delivers expressive, emotionally nuanced voiceovers that the company claims are just like the native-language production. The eTTS system dynamically adjusts vocal tone, intensity, and energy to match the emotional cadence of live events, whether it's the urgency of breaking news or the excitement of a sports final. Broadcasters can choose to use AI-cloned voices of original speakers or select from Deepdub's licensed voice bank, all cleared for broadcast and streaming. Built for enterprise deployment, the API-driven platform supports more than 100 languages and dialects, with ultra-low latency and frame-accurate synchronisation to ensure seamless, high-quality multilingual experiences in real-time.
-
WWW.FXGUIDE.COM
Michael Cioni and the New Landscape for Post & VFX
At the recent Hollywood Professional Association (HPA) Tech Retreat, Michael Cioni delivered a bold and provocative talk about the existential challenges facing Hollywood in the age of the creator economy. Framing his insights through the lens of cinematography and camera innovation, he made the case that traditional Hollywood can no longer rely on legacy systems, prestige, or scale to maintain its dominance. Platforms like YouTube and TikTok, paired with cheap, accessible production tools, have leveled the playing field. Now, creators anywhere, with little more than a smartphone and an idea, are commanding global audiences that rival major studios. But while that talk focused on the front end of production (cameras, workflows, and on-set innovation), there's another half of the story that's just as critical to the industry's future: post-production and VFX.
In this follow-up conversation, Cioni turns his attention to what happens after the camera stops rolling. With his deep roots in post, from founding the pioneering post house Light Iron to leading innovation at Panavision, Frame.io, and Adobe, he brings a unique perspective to how technology is reshaping the back end of filmmaking. From cloud workflows and real-time collaboration to AI-powered editing and metadata-driven pipelines, Cioni argues that Hollywood's survival depends not just on changes to capture, but on radically rethinking how we finish and deliver content.
In this fxpodcast, Michael brings both clarity and urgency to the conversation, drawing on decades of experience at the intersection of creativity and technology. In this conversation he discusses why post-production may be the key frontier where Hollywood either adapts or fades. He also discusses one new key solution for distributed workflows: Strada. This new technology provides an alternative to cloud storage in a way that is both fast and genuinely flexible. Strada Agents will be on display at the 2025 NAB Show in Las Vegas, where the Strada team will demonstrate this technology breakthrough at Booth SL4807 in the South Hall.
-
WWW.FXGUIDE.COM
ILM, Droids, Miniatures and StageCraft: Oscar Winner John Knoll discusses Skeleton Crew
When John Knoll first heard about Star Wars: Skeleton Crew in the fall of 2021, it was through a familiar channel: ILM's Rob Bredow, asking if he'd be interested in heading up visual effects on one of Lucasfilm's new streaming projects. For Knoll, the prospect was immediately appealing, not just because of the story's tone or its place within what ILM affectionately calls the "Favre-verse" (the Jon Favreau/Dave Filoni era of Star Wars TV), but also because production was based at Manhattan Beach Studios, a welcome proximity bonus given how many countries projects could now shoot in.
What sealed the deal, though, were the scripts. John Knoll was taken with their humour and the spirit of the project (with early Spielberg overtones). His early meetings with director Jon Watts made it clear this was a collaboration built on trust and shared creative instinct. And that trust was crucial. Knoll prefers to show VFX work early: early enough to course-correct meaningfully, but not so early that a client might panic at a rough version. With Watts, he found a director open to that kind of iterative process. That kind of openness meant fewer surprises later and, more importantly, visuals that aligned closely with the story's intent.
Note the use of practicals (but with a viewing gauze on the truck)
Final shot
The puzzle of working with kids (and their doubles)
One of the unique challenges of Skeleton Crew was the cast itself: the show centers on a group of kids navigating a Star Wars-scale adventure. That meant tight work schedules, limited hours on set, and a heavy reliance on planning. Knoll and the team strategically structured shoot days, starting with adult doubles for wide shots and saving close-ups with the child actors for their limited hours.
The use of doubles, who were carefully matched in proportions and costuming, proved more effective than expected. In some cases, edits progressed far into post before anyone even realized a particular shot featured a double, not the child actor. "There are more shots than you'd guess," Knoll noted. "They were pretty darn good doubles."
Blending miniatures and digital: the art behind the illusion
Skeleton Crew continues ILM's modern embrace of miniature photography, something that's become a staple on Star Wars shows like The Mandalorian and Ahsoka. But for Knoll, this isn't about nostalgia or marketing optics; it's about results.
He recalls how The Mandalorian kicked off the return to miniatures with a bit of uncertainty. An early Razor Crest flyby received mixed feedback from Favreau; something about it just wasn't selling. What began as a suggestion to build a reference model for lighting escalated into a full miniature shoot, complete with motion control. The resulting footage wasn't just usable, it was inspiring. ILM ended up reworking their CG Razor Crest to match the practical version, and the result was indistinguishable. That pattern repeated on Ahsoka, and again with Skeleton Crew.
For Knoll, the process has become a proven workflow: shoot a beautiful miniature, use it to inform the CG model, and evolve the digital version until it's virtually indistinguishable from the real thing.
Whether it's subtle scuff marks, directional scratches on metal panels, or the play of stylized bounce cards sculpting highlights in space, the final CG shots benefit immeasurably from miniature-informed lighting and texture references.
A puppet with personality: SM-33
When it came to SM-33, the show's quirky droid character, Knoll leaned on a tried-and-true method: full-body puppetry with puppeteers removed in post, a technique that traces back to his work on The Phantom Menace. Drawing inspiration from Japanese Bunraku puppetry, the approach allowed for expressive, in-camera performances.
https://www.fxguide.com/wp-content/uploads/2025/04/Skeleton-Crew-Creatures-Droids.mp4
Legacy Effects built a beautifully detailed puppet, and the performance imbued it with a distinct, slightly wobbly charm. For the shots where the puppet couldn't do what was needed (running, fighting, complex movement), CG took over. But by carefully matching the puppet's physical quirks in animation, the handoff between real and digital remained seamless. In many shots, viewers wouldn't know the switch had occurred at all. Knoll recalls a shot of SM-33 where he starts as a puppet, turns and walks away, using a seamless handoff from puppet to fully CG animation partway through the shot.
Invisible VFX: from faceplates to falling children
Not all of the effects in Skeleton Crew are big establishing shots or complex visual effects space sequences. Some of the most difficult were the ones viewers aren't meant to notice.
Take KB's visor, for instance: a piece of costume design that needed to shift positions on her face. The mechanical motion couldn't be practically achieved, so ILM animated it digitally. But that motion had secondary consequences, like changing her hairstyle as the visor moved, requiring CG hair replacement. Then there were the safety concerns: the visor's mesh obscured vision in dark scenes, so it was sometimes omitted entirely and replaced digitally. It was clearly unsafe to have child actors walking around complex dark sets with restricted vision.
Similarly, scenes involving stunts or hazardous terrain often relied on digital doubles. For example, when the kids are ejected from garbage tubes onto a beach, that fall was entirely CG; there was no safe or practical way to shoot it live, Knoll commented. In many cases, ILM used a combination of adult stunt doubles and CG, depending on the shot and risk level.
The volume and the value of real sunlight
Skeleton Crew also made extensive use of ILM's StageCraft volume. But Knoll is quick to point out: it's not a blanket solution. For him, it's a tool best used when it makes creative and logistical sense. The real benefit comes in longer scenes, where the ability to amortize the upfront content build over many pages of dialogue outweighs the cost of per-shot VFX.
And yet, despite the advances in LED volumes, Knoll remains a staunch advocate for real sunlight when it comes to outdoor scenes. "Bad daytime exteriors on a soundstage," he says, "are the bane of my existence." He comes armed with a library of examples, good and bad, to help DPs and production designers understand the visual trade-offs. With the right planning, he says, everyone can align early to avoid the common pitfalls.
Controlled chaos: simulated destruction and stylized space
Of course, Skeleton Crew still has its share of large-scale spectacle.
From tug-of-war sequences between ships and garbage munchers, to debris-littered space battles, Knoll's team tackled destruction scenes by starting with animation to define story intent, and then layering simulations to support and enhance the motion.
And in space, visual stylization remains key. The lighting philosophy draws from the original trilogy: a willingness to place mysterious bounce cards in the void for the sake of sculpted forms and cinematic readability. "You're not trying to replicate NASA footage," Knoll explains. "You're trying to make something beautiful and clear."
A playbook for better VFX
Knoll keeps a mental list of top production pitfalls, and he runs through them with every new team. From improperly lit interiors with mismatched window comps, to awkward soundstage exteriors, to vehicle shots with static lighting, he's seen the same mistakes repeat across the industry. Knoll has compiled a personal playbook of common VFX pitfalls, which he shares with production teams early on, and which he shared with fxguide. His Top 5 Pitfalls for VFX are:
1. Daytime exteriors on a soundstage.
2. Daytime interiors with blown-out window blue screens.
3. Static lighting on moving vehicles.
4. Poorly designed shots with no visual solution.
5. Imbalanced composites that simply need regrading.
Most of what goes wrong on VFX shows, he says, falls into one of those buckets. But his goal isn't just to avoid problems; it's to empower productions to get it right from the start, and that ethos runs through every frame of Skeleton Crew. It's a show where high-end visual effects serve a heartfelt, youthful adventure, and where the seamless blend of puppetry, miniatures, CG, and practical lighting creates a galaxy that feels every bit as lived-in and magical as the Star Wars of our childhoods that inspired so many of us to enter visual effects.
-
WWW.FXGUIDE.COM
The Making of Netflix's Senna: Scanline
The successful Netflix miniseries Senna explores the extraordinary life of Brazilian racing legend Ayrton Senna da Silva, who captured the Formula One World Drivers' Championship three times. Starring Gabriel Leone, Kaya Scodelario, Matt Mella, Patrick Kennedy, Arnaud Viard, Steven Mackintosh, Camila Márdila, and Marco Ricca, the show brings Senna's captivating story to life.
Visual effects for the series were crafted by Scanline VFX, with teams working collaboratively across their facilities in Vancouver, Seoul, Montreal, and Los Angeles. Production VFX Supervisors Craig Wentworth and Marcelo Siqueira oversaw the ambitious project, delivering the powerful visuals that vividly recreate Senna's iconic racing moments.
Amid a challenging time for the visual effects industry and significant shifts within Scanline itself, the quality of work on Senna highlights the studio's dedication and continued excellence. This miniseries showcases the kind of technical artistry and storytelling prowess that Scanline has been known for since its founding in 1989 by Stephan Trojansky in Munich.
For decades, Scanline VFX has led the industry, particularly in complex fluid simulations, notably through the groundbreaking development of their proprietary software, Flowline. Their expertise in water simulations propelled them onto the global stage as the go-to vendor for large-scale water sims, culminating in a Scientific and Technical Achievement Academy Award for Flowline in 2008.
Today, with Formula One enjoying unprecedented popularity, driven by the success of Netflix's Drive to Survive and the excitement surrounding the start of the 2025 season from Melbourne to Shanghai, the release of the Senna series has proven popular, and it serves as a compelling tribute to Ayrton Senna's legacy and a testament to the artistry and innovation of the show's visual effects artists.
-
WWW.FXGUIDE.COM
fxpodcast: We talk to Oscar-Winning Legend Rob Legato about Stability AI & the Future of VFX
Rob Legato, the legendary VFX supervisor behind groundbreaking films like Titanic, Hugo, and The Jungle Book, has officially joined Stability AI as Chief Pipeline Architect. Known for his pioneering work in virtual cinematography and immersive storytelling, Legato's move marks a significant step in the evolving relationship between artificial intelligence and visual effects. At Stability AI, as he explains in this week's fxpodcast, he reunites with filmmaker and friend James Cameron, a key board member of the company, to explore new frontiers in generative AI-driven film production pipelines.
Throughout his career, Legato has consistently pushed the boundaries of filmmaking technology, from developing the virtual cinematography pipeline for Avatar to revolutionizing digital production workflows in The Lion King and Hugo. As AI-powered tools like Stability AI's Stable Diffusion gain traction in creative industries, Legato sees a unique opportunity to leverage these advancements to enhance storytelling rather than replace traditional artistry. In an exclusive conversation with fxguide, he shares his perspective on AI's role in filmmaking, its ethical implications, and how it can empower artists to achieve unprecedented creative storytelling.
With Stability AI's leadership team, including CEO Prem Akkaraju, former Weta Digital head; CTO Hanno Basse, ex-20th Century Fox CTO; and executive chair Sean Parker (Napster), Legato is joining a company at the forefront of AI innovation. As the VFX industry grapples with rapid technological shifts, his expertise could help shape a new, AI-augmented workflow that balances automation with the artistry that defines a new age in VFX and cinema in general. In this interview, Legato discusses his ambitions at Stability AI, his take on the future of VFX, and why he believes the fusion of AI and filmmaking is an evolution, not a replacement, of creative storytelling.
Note: John Montgomery is on holidays this week and will be back on the show next week.
-
WWW.FXGUIDE.COM
New Apple Mac Studio is Great For AI (but not the AI you're thinking of)
If you check social media right now, you'll find a lot of discussion surrounding Apple Intelligence. At the same time, Apple has launched a new lineup of iPads, MacBook Air models, and the latest Mac Studios. While many YouTubers and influencers have expressed disappointment with Apple's approach to AI, particularly the implementation of their Apple Intelligence strategy, the company has simultaneously released one of the most powerful machines ever: the M3 Ultra Mac Studio, which is brilliant for AI.
We've been using the new M3 Ultra since its release, and for artists and TDs doing AI work in M&E, this machine is exceptionally attractive. Our M3 Ultra is far from inexpensive, but with 512GB of RAM and 12TB of storage, it is, without question, a beast when it comes to localized AI, machine learning, LLM applications, and image processing. We've specifically been putting it through its paces with Topaz Video AI 6, which runs natively on the M3 Ultra Mac Studio.
360p to 4K in Topaz Video AI 6.
The question that naturally follows is: why do you need this much power on a desktop? Topaz AI 6 provides a perfect answer. Unlike many AI applications, this software is not cloud-based, though Topaz does have an early cloud prototype called the Starlight Project. For some people in production, cloud computing is a viable option, but there are compelling reasons to keep processing local. Three primary factors make local AI processing beneficial: security, speed of upload, and speed of processing. While cloud computing is an alternative for those without high-performance machines, our intense work over the past week left no doubt that in a professional environment, the M3 Ultra is an outstanding option.
Security. Security concerns around AI in VFX are significant, as many clients fear that AI or machine learning tools could absorb proprietary data into cloud-based models, thereby compromising intellectual property. While this concern is often unfounded (most AI models, including ChatGPT, do not learn from user inputs once they are built), the client perception remains. As some great colleagues of mine recently posted, AI systems like ChatGPT are pre-trained, meaning they don't continuously learn from user interactions; instead, they operate within the confines of their initial training data. Nonetheless, some clients are adamant about avoiding AI tools altogether. With local processing, security is far easier to guarantee, giving professionals a level of control that cloud solutions struggle to match.
Speed of upload. While many users have access to fast internet, uploading large files, particularly in a remote workflow, comes at a cost. This challenge is especially pronounced for VFX professionals working with massive DPX or EXR files. In our situation, where we are focused on upscaling footage, the issue was not a case of uploading a few live-action plate elements and processing them over hours, but rather dealing with vast amounts of material being repurposed. This significantly exacerbates the upload problem, reinforcing the advantages of local processing.
Speed of processing. This is where the M3 Ultra truly excels. With its killer M3 Ultra chip, fast RAM and high-speed storage, it processes massive datasets at an incredible rate. Rather than relying solely on benchmarks, we focus on real-world scenarios.
We worked on a range of production assets for this story, some decades old and stored only in 360p resolution, others more recent, at 2K res. Our primary test involved upscaling to 4K, as it provided a meaningful balance between quality and computational demands (Topaz can scale to 8K). One specific scenario involved integrating old green screen footage with newly shot 4K material. Additionally, we experimented with extreme upscaling, pushing very low-resolution footage (360p) to 4K, to evaluate failure points, and we also compared the fully released Topaz AI 6 with Topaz's prototype cloud-based Project Starlight AI.
Rather than discuss the specifications and data rates of the M3 Ultra Mac Studio, we explored what we could do with it; after all, it was Apple who sold the original iPod as "1,000 songs in your pocket", not "iPod with 5GB hard drive".
Standalone up-resing:
Before diving into the green screen examples below, we first examined some traditional upscaling work performed using Topaz AI 6. In each case, the upscaling was applied to a completed shot, with clear before-and-after comparisons to showcase the improvements. Regardless of the hardware running Topaz AI 6, the results are impressive, demonstrating just how far machine learning has advanced as a powerful post-production tool.
The combination of Topaz AI 6 and the M3 Ultra particularly excels when handling multiple shots. While cloud-based solutions may be viable for a single, isolated shot, the speed and efficiency of a high-performance local Mac solution are as impressive as the visual enhancements themselves.
Topaz AI 6 is far from a one-click tool; it provides the flexibility to experiment with various AI models and fine-tune parameters such as motion blur, color space, and focus. The M3 Ultra tackles these complex AI-driven tasks with exceptional speed, enabling users to iterate and refine their settings efficiently. The ability to quickly generate and manipulate 5-second previews is crucial for making real-world adjustments, and in our experience, the M3 Ultra is the fastest system we have used to handle this insanely computationally intensive process.
Still from a clip: 1280x720 (left) to 3840x2160 / 4K (right). High-frequency details in the background and depth-of-field defocus are both handled extremely well.
Clearly there are important issues such as provenance, copyright and artist rights in any machine learning pipeline. However, up-resing such as this is not about replacing jobs, nor dramatically removing the creative craft of VFX. It is, however, all about providing incredible results that just a few years ago would have been impossible. Some of these processed shots are simply unbelievably good.
The software also does SDR-to-HDR inverse tone mapping. It converts Rec 709 / BT.1886 (standard display gamma response) to BT.2020, the wide colour gamut standard designed for ultra-high-definition (UHD) 4K and 8K video.
There are some artefacts, especially on very large resolution changes, but in addition to up-resing, the same Topaz software can be used for slow-mo, noise reduction and grain removal. We bought our copy for only $299, and you can sometimes get discounts, such as during Black Friday/Cyber Monday.
(L) 360p up-res to 4K (R). Originally shot high frame rate on a Red Epic.
Original shot, slow motion: (L) 960x540 to 3840x2160 / 4K (R).
Nuke: GreenScreen

For all the greenscreen comparisons we used a standard Nuke comp setup to pull the key and examine the matte channels. Naturally, a skilled Nuke compositor could improve on the key or build on what we have done here, but the matte images below are presented as indicative rather than best-in-class compositing. On the whole, they are sensible key/matte outputs (more on the Nuke setup below).

Left: compressed H.264 low-res footage vs. right: 4K up-res. Click to expand.

Greenscreen: 360p to 4K - A Nearly Impossible Upscale

We selected two sources, one being an archival MP4 clip in 360p, originally shot years ago in Las Vegas for NAB. Longtime fxguide readers may remember our infamous fxguide hangover video (proof that what happens in Vegas doesn't always stay in Vegas). The footage had every issue imaginable: heavy compression, low resolution, and only 8-bit depth. There is no way that any reasonable imagery should be possible from such a huge leap in resolution. However, in real-world productions, sometimes archival footage is all that's available.

We ran the clip through both Topaz AI 6 and the Starlight cloud tool. The processing times varied drastically between the two pipelines. The new diffusion-model Starlight solution, though remarkable with its new AI-based approach, required over 50 minutes (50m 45s) to upscale just 10 seconds of footage to 2K. Meanwhile, on the Mac Studio, the local 4K upscale with Topaz AI 6 (despite using a different algorithm) ran vastly faster, at around 10 frames per second: under 30 seconds for the same clip, and at the higher 4K rather than 2K resolution.

In viewing the mattes below, the results are remarkably sharp and, most importantly, temporally stable. The stability of the algorithm's outputs across successive frames is vital to ensuring that edges do not exhibit unwanted fluctuations or boiling in a composite. While the inferred hair detail is impressive, certain elements, such as the comb in the hairstylist's hands, can sometimes be lost as the program attempts to resolve motion-blurred objects.

Source: original material comped in Nuke; the matte below is provided as reference. Note the source was heavily compressed 960x540 H.264. Special thanks to fxguidetv's Angie and Chris, who appear in this clip.

https://www.fxguide.com/wp-content/uploads/2025/03/Angie_960x540_comp.mp4
https://www.fxguide.com/wp-content/uploads/2025/03/Angie-960x540-matte.mp4

Starlight (cloud based): original up-resed to HD and comped in Nuke; the matte below is provided as reference.

https://www.fxguide.com/wp-content/uploads/2025/03/Angie_1920x1080_comp.mp4
https://www.fxguide.com/wp-content/uploads/2025/03/Angie-1920x1080-matte.mp4

4K Topaz AI 6: original up-resed to 4K, again comped in Nuke.

https://www.fxguide.com/wp-content/uploads/2025/03/Angie_3840x2160_comp.mp4
https://www.fxguide.com/wp-content/uploads/2025/03/Angie-3840x2160-matte.mp4

Project Starlight leverages an AI diffusion model for video to recreate expected, missing details during upscaling, resulting in high-resolution videos that are rich with detail. The feature also operates without artifacts such as incorrect object motion, a temporal error that can occur when AI doesn't create realistic output.

Interestingly, in our work, the visual results favored Starlight in most cases. Since this process relies on AI-driven image generation, some hallucinations occurred, including an instance where Topaz AI 6 mistakenly rendered part of a person's hand as an additional face.
On the other hand, Topaz AI 6 was stronger in handling scene transitions: our green screen footage, extracted from an edit, showed some flickering artifacts in the Starlight version, particularly at the beginning and end of the clip.

1:1 scale of the low-res, compressed source against the up-resed 4K, which is an incredible jump. Note the trouble resolving the logo and the face hallucination in the hand. Click for large version.

Greenscreen: Production 2K to 4K

For a more controlled and sensible production example, we used professionally shot green screen footage originally captured in RAW .R3D on an EPIC camera. However, we only had access to the exported, transcoded 2K files rather than the original RAW .R3D data. This scenario mirrors real-world challenges where VFX artists must integrate older footage with modern 4K backgrounds. Given the limitations, upscaling the foreground is the only viable solution. Topaz Video AI 6 supports a range of file formats, including .png, .tif, .jpg, .dpx, and .exr. However, it processes EXR files by converting them to temporary 16-bit .tiff images, meaning that 32-bit EXRs do not retain full color information. For our test, we loaded the green screen shot as a sequence of DPX files and upscaled it to 4K.

As seen below, the program excels at processing faces and people in general. Hair appears sharper and more consistent, and the results avoid traditional upscaling artifacts such as ringing and excessive edge sharpening, which are common in basic kernel-based sharpening algorithms.

One of the challenges of upscaling is preserving skin texture, which can often become flattened or blurred. However, in this case, pore details are enhanced in a natural and believable way, making the results highly usable. While Nuke offers internal scaling options for any footage, Topaz AI 6 appears to deliver a superior result.

Source (2K):

https://www.fxguide.com/wp-content/uploads/2025/03/Original2KComp.mp4
https://www.fxguide.com/wp-content/uploads/2025/03/Original2KMatte.mp4

4K comp:

https://www.fxguide.com/wp-content/uploads/2025/03/UpRes4KComp.mp4
https://www.fxguide.com/wp-content/uploads/2025/03/UpRes4KMatte.mp4

Nuke (using Cattery AI)

The key for this main greenscreen shot used MODNet, a Cattery node, to roughly segment the character out of the green screen. By shrinking and expanding that matte, we created an inner core matte overlaid on top of the outer soft-edge (hair) matte, which was then fed into ViMatte, another Cattery node, to create the overall matte. The matte was then normalised to values between 0 and 1 and copied onto the despilled plate.

The background is a simple CG element over a BG plate. The CG element has a depth map generated from another Cattery node called DepthAnything; the depth map was used for defocusing the CG element. Grain was also added to the CG element.

The FG was then graded and multiplied, and comped over the BG. The BG was also graded through the FG matte to bring some more hair detail back. Both the 2K comp and the 4K comp used the same setup, with some small tweaks. A rough scripted sketch of this graph is shown below.

Our Nuke comp setup
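For readers who want to reproduce the general shape of this comp, here is a heavily simplified Nuke Python sketch of the graph described above. It is a sketch only: the Cattery node class names (MODNet, ViMatte, DepthAnything), knob names, file paths, and input ordering are assumptions that will vary with how the models were packaged as .cat files in your pipeline, and the real comp contains grades, grain, and tweaks not shown here.

# Minimal Nuke Python sketch of the Cattery-based key described above.
# The ML node class names and most knob values are assumptions.
import nuke

plate = nuke.nodes.Read(file="greenscreen_2k.####.dpx", first=1, last=240)

# Rough segmentation of the talent with MODNet (Cattery), used twice:
# once shrunk as a hard inner core, once expanded as a soft outer (hair) edge.
seg  = nuke.nodes.MODNet()               # assumed class name for the .cat node
seg.setInput(0, plate)
core = nuke.nodes.FilterErode(size=5)    # shrink to a solid core (erode direction may need flipping)
core.setInput(0, seg)
edge = nuke.nodes.FilterErode(size=-5)   # expand for the soft hair edge
edge.setInput(0, seg)
hint = nuke.nodes.Merge2(operation="max")  # core matte overlaid on the soft edge matte
hint.setInput(0, edge)
hint.setInput(1, core)

# ViMatte (Cattery) produces the overall matte from the plate plus the core/edge hint.
vimatte = nuke.nodes.ViMatte()           # assumed class name and input layout
vimatte.setInput(0, plate)
vimatte.setInput(1, hint)
norm = nuke.nodes.Grade(black_clamp=True, white_clamp=True)  # keep matte values in 0..1
norm.setInput(0, vimatte)

# Copy the matte into the alpha of the despilled plate, then premultiply the FG.
despill = nuke.nodes.HueCorrect()        # stand-in for the actual despill treatment
despill.setInput(0, plate)
copy = nuke.nodes.Copy(from0="rgba.alpha", to0="rgba.alpha")
copy.setInput(0, despill)                # input order assumed: 0 = B/target, 1 = A/matte source
copy.setInput(1, norm)
fg = nuke.nodes.Premult()
fg.setInput(0, copy)

# Background: CG element defocused with a DepthAnything (Cattery) depth map, over a BG plate.
bg_plate = nuke.nodes.Read(file="bg_plate_4k.####.exr")
cg       = nuke.nodes.Read(file="cg_element.####.exr")
depth    = nuke.nodes.DepthAnything()    # assumed class name
depth.setInput(0, cg)
# (the depth output would feed a ZDefocus on the CG, plus grain; omitted for brevity)

comp = nuke.nodes.Merge2(operation="over")
comp.setInput(0, bg_plate)
comp.setInput(1, fg)

The same script was used for both the 2K and 4K versions in our tests; only the Read paths and a few knob values change between them.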
M3 Ultra

The M3 Ultra is built for extreme performance, handling the most demanding workflows that leverage high CPU and GPU core counts alongside unified memory. If you are just surfing the web, this is not for you. The M3 Ultra is built by Apple for video editors, 3D and VFX artists, and AI researchers.

We have a Mac Studio with an M3 Ultra: a 32-core CPU, an 80-core GPU, 512GB of unified memory with up to 819GB/s of bandwidth, and 12TB of ultra-fast internal storage, enough for over 12 hours of 8K ProRes footage. On paper, the M3 Ultra delivers 50% more performance than any previous Ultra chip, with 8 efficiency cores pushing CPU speeds 80% faster than the M1 Ultra, which we've used extensively on our prior four AI-driven film projects.

Ultimately, the M3 Ultra Mac Studio is an undeniable powerhouse for professionals working with AI, VFX, and machine learning applications. While cloud-based AI services like Project Starlight have their place, the performance, security, and reliability of local processing make a compelling argument for investing in high-end hardware. While unconfirmed, it is hoped that Starlight will one day also be available as a downloadable implementation able to run locally on high-end hardware. It is currently free to test on the cloud.
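To give a sense of why 512GB of unified memory matters for the local LLM work mentioned earlier, here is a rough, purely illustrative calculation of model weight footprints at different precisions. It ignores the KV cache, activations, and framework overhead, so real requirements are higher, but the orders of magnitude are what count.

# Rough weight-only memory footprint for running an LLM locally.
# Illustrative only: ignores KV cache, activations, and runtime overhead.
def weights_gb(params_billion, bytes_per_param):
    return params_billion * 1e9 * bytes_per_param / 1024**3

for params in (8, 70, 405):
    for label, bpp in (("fp16", 2.0), ("int8", 1.0), ("4-bit", 0.5)):
        print(f"{params:>4}B @ {label}: {weights_gb(params, bpp):7.1f} GB")

On those rough numbers, a 70B-parameter model fits comfortably at fp16, and even a ~400B-parameter model fits at 8-bit, which is exactly the kind of headroom that makes fully local inference practical on this class of machine.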
-
WWW.FXGUIDE.COM

NeRFs, Gaussian Splatting, and the Future of VFX

Exploring the Potential of Neural Rendering in Visual Effects

Neural Radiance Fields (NeRFs) and Gaussian Splatting are starting to change the way visual effects artists capture, render, and integrate photorealistic scenes, especially environments. In this fxpodcast, Sam Hodge, a machine learning software engineer currently at the Australian Institute for Machine Learning (AIML), University of Adelaide, discusses these emerging techniques and their growing role in virtual production, previsualization, and postvisualization.

From Photogrammetry to Neural Radiance Fields

NeRFs offer a major evolution from traditional photogrammetry and LiDAR scanning. While photogrammetry reconstructs static 3D models with baked-in lighting, NeRFs allow for view-dependent rendering, meaning elements like specular highlights and transparency shift naturally based on the camera's perspective. This enables virtual cameras to move through a scene with realistic light transport, a key benefit for VFX workflows.

The scene Sam discusses, training in NeRF Studio.

NeRFs in Production: Strengths and Challenges

NeRFs capture complex materials and lighting conditions with an unprecedented level of realism. However, rendering them remains computationally intensive, with each frame requiring billions of calculations per pixel. This challenge makes real-time applications difficult. To address this limitation, Gaussian Splatting (GS) has emerged as a faster alternative, trading some accuracy for major speed gains.

Neither NeRFs nor GS is quite ready for high-end tentpole, final-pixel VFX pipelines, but with enormous amounts of research being published on both, it is not unreasonable to expect this to change soon. They certainly do have a role in the wider range of AI and ML tools that are appearing, and for projects aiming for non-theatrical exhibition.

Gaussian Splatting: A Faster Alternative

Gaussian Splatting reconstructs 3D environments faster than NeRFs while maintaining photorealistic quality. Instead of relying on polygonal meshes, it uses elliptical points that blend together to create smooth, volumetric visuals. While it does not achieve the same level of fine detail as NeRFs, it can run in real time, making it ideal for virtual production, game engines, and immersive AR/VR experiences.

Dynamic Scenes and 4D Gaussian Splatting

NeRFs traditionally struggle with moving objects, but 4D Gaussian Splatting processes time-based data, allowing for the volumetric reconstruction of dynamic performances. This makes it possible to capture live-action events, such as a tennis match or a stage performance, and view them from infinite angles. As a result, 4D Gaussian Splatting opens new possibilities for immersive media, volumetric filmmaking, and interactive experiences.

Integrating Neural Rendering into VFX Pipelines

VFX professionals continue exploring ways to integrate these technologies into existing pipelines. NeRFs and Gaussian Splatting do not inherently produce polygonal meshes: in the case of NeRFs, the scene lives inside a neural network; for GS, it can be thought of as a point cloud. While not immediately providing polygon solutions, they do provide valuable opportunities for:

Previsualization and postvisualization, creating photorealistic environments and testing camera movements.
Virtual production, providing interactive, high-fidelity backdrops on LED volumes.
Location scouting, enabling filmmakers to explore digital representations of real-world locations.
Performance capture and digital human references, offering new ways to train AI-driven facial and motion models.
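As background to the rendering discussion above, the core formulations are compact. These are the standard equations from the original NeRF and 3D Gaussian Splatting papers, stated here for reference rather than as anything specific to the tools in the podcast: NeRF renders a pixel by integrating color along a camera ray weighted by accumulated transmittance, while Gaussian Splatting composites depth-sorted, projected Gaussians with ordinary front-to-back alpha blending.

\[
C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma(\mathbf{r}(t))\,\mathbf{c}(\mathbf{r}(t),\mathbf{d})\,dt,
\qquad
T(t) = \exp\!\Big(-\int_{t_n}^{t}\sigma(\mathbf{r}(s))\,ds\Big)
\]

\[
C = \sum_{i=1}^{N} \mathbf{c}_i\,\alpha_i \prod_{j=1}^{i-1}\bigl(1-\alpha_j\bigr)
\]

The integral is why vanilla NeRF is expensive: it has to be estimated with many network evaluations per ray. The splatting sum, by contrast, is plain alpha compositing of sorted, projected primitives, which is why GS can reach real-time rates.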
A New Era for Machine Learning in VFX

This week's fxpodcast discussion also highlights how machine learning is reshaping the industry, particularly in generating high-quality synthetic training data. By using NeRFs and Gaussian Splatting to capture real-world actors, environments, and materials, artists can train AI models without relying on scraped internet content. This approach ensures ethical and controlled data acquisition while improving AI-based workflows.

A Tool, Not a Fad

NeRFs and Gaussian Splatting are not passing trends. As these technologies continue to evolve, they offer real advantages for environment creation, camera planning, and immersive media. While the industry is still refining their applications, they are already proving their value in production.

For a deeper look at these cutting-edge technologies, listen to the full fxpodcast with Sam Hodge.

https://www.fxguide.com/wp-content/uploads/2025/03/Apple_NeRF.mp4

Here are the data files that Sam and Mike discuss in the podcast:

https://r2.fxguide.com/articles/2025-02/prague_video_filtered.zip
https://r2.fxguide.com/articles/2025-02/PragueInstallerImage.dmg.zip

Note: you can also load the project into NeRF Studio, as we have above.
-
WWW.FXGUIDE.COM

VFXShow 293 Special Ep: Amid Industry Collapses, with guest panelist Scott Ross (ex-ILM and DD)

Our industry is facing a very serious time. The collapse of Technicolor and its flagship VFX house MPC is a sobering reminder of the instability plaguing the visual effects industry. While external factors such as the post-COVID slowdown, the impact of industry strikes, and cash flow challenges were cited as contributing factors, many insiders also argue that years of financial mismanagement may have played a significant role. Technicolor had already been struggling, having filed for bankruptcy protection in 2020 and undergone multiple restructurings. Despite attempts to consolidate its brands and pivot to new strategies, the company failed to secure long-term financial stability.

The sudden collapse of MPC, a major player responsible for high-profile projects such as Mufasa: The Lion King, has left thousands of artists without work and disrupted ongoing film productions, highlighting the precarious nature of employment in VFX. This first surfaced when Technicolor notified its U.S. employees last week that it would be required to cease its U.S. operations, including studios The Mill, MPC, Mikros Animation, and Technicolor Games. Companies such as The Mill have been shining lights of VFX mastery for years and are incredibly well regarded in the industry.

Beyond Technicolor's specific challenges, the broader VFX industry continues to grapple with systemic issues, including cost-cutting pressures, exploitative working conditions, and an unsustainable business model. VFX houses often operate on razor-thin margins, competing in a race to the bottom due to studios' demand for cheaper and faster work. This results in a cycle of overwork, burnout, and, in many cases, eventual bankruptcy, as seen with Rhythm & Hues in 2013 and now at Technicolor. The reliance on tax incentives and outsourcing further complicates matters, making VFX work highly unstable. With major vendors collapsing and industry workers facing continued uncertainty, many are calling for structural changes, including better contracts, collective bargaining, and a more sustainable production pipeline. Without meaningful reform, the industry risks seeing more historic names disappear and countless skilled artists move to other fields.

Last year, Quebec's animation industry was decimated, following the strikes in Hollywood and Quebec's tax credit reduction. Over half of all animation and VFX jobs in the Canadian province, including in the production hub of Montreal, have been lost since January 1, 2023, according to the Quebec Film and Television Council (QFTC). Even after this week's episode was recorded, Walt Disney Animation Studios today laid off staff from its Vancouver studio as a result of a shift in business strategy to a focus on theatrical features complemented by short-form streaming content, a Disney spokesperson is reported as saying. This means the studio is no longer making long-form content for Disney+. It seems each day brings more bad news.

Scott Ross

Scott Ross was a key manager at George Lucas's companies in the 1980s, and in the 1990s he co-founded Digital Domain along with James Cameron and Stan Winston. He has a new book out: UPSTART: The Digital Film Revolution - Managing the UnManageable.
In this special episode, we spoke to Ross to gain a different perspective on the massive structural changes in VFX and the company failures rocking our industry.

In the 1980s, Ross was general manager of ILM, and under his leadership ILM won five Academy Awards for Best Visual Effects (Who Framed Roger Rabbit, Innerspace, Terminator 2: Judgment Day, The Abyss, Death Becomes Her). The company re-organized in 1991 and Ross was named senior vice president of the LucasArts Entertainment Company, which comprised Skywalker Sound, LucasArts Commercial Productions, LucasArts Attractions, EditDroid/SoundDroid, and ILM.

Under Ross's direction, from 1993 to 2006, Digital Domain garnered two Academy Awards and three nominations, receiving its first Oscar in 1997 for the ground-breaking visual effects in Titanic. That was followed by a second Oscar for What Dreams May Come. Digital Domain received additional nominations for True Lies, Apollo 13, and I, Robot, and won three Scientific and Technical Academy Awards for its proprietary software, including the birth of Nuke (now developed by Foundry).

* The highlight image for this fxpodcast story is the pirate flag that once flew over DD headquarters in Venice, California.

Don't forget to subscribe to both the VFXShow and the fxpodcast to get both of our most popular podcasts.

This week our lineup is:

Matt Wallin * @mattwallin www.mattwallin.com. Follow Matt on Mastodon: @[emailprotected]
Mike Seymour @mikeseymour. www.fxguide.com. + @mikeseymour
With special guest, Scott Ross.
(Jason Diamond was on location filming this week, but he will be back next week.)

Special thanks to Matt Wallin for the editing and production of the show, with help from Jim Shen.