• BOUNCING FROM RUBBER DUCKIES AND FLYING SHEEP TO CLONES FOR THE BOYS SEASON 4

    By TREVOR HOGG
    Images courtesy of Prime Video.

    For those seeking an alternative to the MCU, Prime Video has two offerings of the live-action and animated variety that take the superhero genre into R-rated territory where the hands of the god-like figures get dirty, bloodied and severed. “The Boys is about the intersection of celebrity and politics using superheroes,” states Stephan Fleet, VFX Supervisor on The Boys. “Sometimes I see the news and I don’t even know we can write to catch up to it! But we try. Invincible is an intense look at an alternate DC Universe that has more grit to the superhero side of it all. On one hand, I was jealous watching Season 1 of Invincible because in animation you can do things that you can’t do in real life on a budget.” Season 4 does not tone down the blood, gore and body count. Fleet notes, “The writers almost have this dialogue with us. Sometimes, they’ll write in the script, ‘And Fleet will come up with a cool visual effect for how to kill this person.’ Or, ‘Chhiu, our fight coordinator, will make an awesome fight.’ It is a frequent topic of conversation. We’re constantly trying to be inventive and create new ways to kill people!”

    When Splintersplits in two, the cloning effect was inspired by cellular mitosis.

    “The writers almost have this dialogue with us. Sometimes, they’ll write in the script, ‘And Fleet will come up with a cool visual effect for how to kill this person.’ Or, ‘Chhiu, our fight coordinator, will make an awesome fight.’ It is a frequent topic of conversation. We’re constantly trying to be inventive and create new ways to kill people!”
    —Stephan Fleet, VFX Supervisor

    A total of 1,600 visual effects shots were created for the eight episodes by ILM, Pixomondo, MPC Toronto, Spin VFX, DNEG, Untold Studios, Luma Pictures and Rocket Science VFX. Previs was a critical part of the process. “We have John Griffith, who owns a small company called CNCPT out of Texas, and he does wonderful Unreal Engine level previs,” Fleet remarks. “On set, we have a cartoon of what is going to be done, and you’ll be amazed, specifically for action and heavy visual effects stuff, how close those shots are to the previs when we finish.” Founding Director of Federal Bureau of Superhuman Affairs, Victoria Neuman, literally gets ripped in half by two tendrils coming out of Compound V-enhanced Billy Butcher, the leader of superhero resistance group The Boys. “The word that we like to use on this show is ‘grounded,’ and I like to say ‘grounded’ with an asterisk in this day and age because we’re grounded until we get to killing people in the craziest ways. In this case, having someone floating in the air and being ripped in half by two tendrils was all CG.”

    Multiple plates were shot to enable Simon Pegg to phase through the actor laying in a hospital bed.

    Testing can get rather elaborate. “For that end scene with Butcher’s tendrils, the room was two stories, and we were able to put the camera up high along with a bunch of blood cannons,” Fleet recalls. “When the body rips in half and explodes, there is a practical component. We rained down a bunch of real blood and guts right in front of Huey. It’s a known joke that we like to douse Jack Quaid with blood as much as possible! In this case, the special effects team led by Hudson Kenny needed to test it the day before, and I said, “I’ll be the guinea pig for the test.’ They covered the whole place with plastic like it was a Dexter kill room because you don’t want to destroy the set. I’m standing there in a white hazmat suit with goggles on, covered from head to toe in plastic and waiting as they’re tweaking all of these things. It sounds like World War II going on. They’re on walkie talkies to each other, and then all of a sudden, it’s ‘Five, four, three, two, one…’  And I get exploded with blood. I wanted to see what it was like, and it’s intense.”

    “On set, we have a cartoon of what is going to be done, and you’ll be amazed, specifically for action and heavy visual effects stuff, how close those shots are to the previs when we finish.”
    —Stephan Fleet, VFX Supervisor

    The Deep has a love affair with an octopus called Ambrosius, voiced by Tilda Swinton. “It’s implied bestiality!” Fleet laughs. “I would call it more of a romance. What was fun from my perspective is that I knew what the look was going to be, so then it’s about putting in the details and the animation. One of the instincts that you always have when you’re making a sea creature that talks to a humanyou tend to want to give it human gestures and eyebrows. Erik Kripkesaid, ‘No. We have to find things that an octopus could do that conveys the same emotion.’ That’s when ideas came in, such as putting a little The Deep toy inside the water tank. When Ambrosius is trying to have an intimate moment or connect with him, she can wrap a tentacle around that. My favorite experience doing Ambrosius was when The Deep is reading poetry to her on a bed. CG creatures touching humans is one of the more complicated things to do and make look real. Ambrosius’ tentacles reach for his arm, and it becomes an intimate moment. More than touching the skin, displacing the bedsheet as Ambrosius moved ended up becoming a lot of CG, and we had to go back and forth a few times to get that looking right; that turned out to be tricky.”

    A building is replaced by a massive crowd attending a rally being held by Homelander.

    In a twisted form of sexual foreplay, Sister Sage has The Deep perform a transorbital lobotomy on her. “Thank you, Amazon for selling lobotomy tools as novelty items!” Fleet chuckles. “We filmed it with a lobotomy tool on set. There is a lot of safety involved in doing something like that. Obviously, you don’t want to put any performer in any situation where they come close to putting anything real near their eye. We created this half lobotomy tool and did this complicated split screen with the lobotomy tool on a teeter totter. The Deep wasin one shot and Sister Sage reacted in the other shot. To marry the two ended up being a lot of CG work. Then there are these close-ups which are full CG. I always keep a dummy head that is painted gray that I use all of the time for reference. In macrophotography I filmed this lobotomy tool going right into the eye area. I did that because the tool is chrome, so it’s reflective and has ridges. It has an interesting reflective property. I was able to see how and what part of the human eye reflects onto the tool. A lot of that shot became about realistic reflections and lighting on the tool. Then heavy CG for displacing the eye and pushing the lobotomy tool into it. That was one of the more complicated sequences that we had to achieve.”

    In order to create an intimate moment between Ambrosius and The Deep, a toy version of the superhero was placed inside of the water tank that she could wrap a tentacle around.

    “The word that we like to use on this show is ‘grounded,’ and I like to say ‘grounded’ with an asterisk in this day and age because we’re grounded until we get to killing people in the craziest ways. In this case, having someone floating in the air and being ripped in half by two tendrils was all CG.”
    —Stephan Fleet, VFX Supervisor

    Sheep and chickens embark on a violent rampage courtesy of Compound V with the latter piercing the chest of a bodyguard belonging to Victoria Neuman. “Weirdly, that was one of our more traditional shots,’ Fleet states. “What is fun about that one is I asked for real chickens as reference. The chicken flying through his chest is real. It’s our chicken wrangler in green suit gently tossing a chicken. We blended two real plates together with some CG in the middle.” A connection was made with a sci-fi classic. “The sheep kill this bull, and we shot it is in this narrow corridor of fencing. When they run, I always equated it as the Trench Run in Star Wars and looked at the sheep as TIE fighters or X-wings coming at them.” The scene was one of the scarier moments for the visual effects team. Fleet explains, “When I read the script, I thought this could be the moment where we jump the shark. For the shots where the sheep are still and scream to the camera, Untold Studios did a bunch of R&D and came up with baboon teeth. I tried to keep anything real as much as possible, but, obviously, when sheep are flying, they have to be CG. I call it the Battlestar Galactica theory, where I like to shake the camera, overshoot shots and make it sloppy when they’re in the air so you can add motion blur. Comedy also helps sell visual effects.”

    The sheep injected with Compound V develop the ability to fly and were shot in an imperfect manner to help ground the scenes.

    Once injected with Compound V, Hugh Campbell Sr.develops the ability to phase through objects, including human beings. “We called it the Bro-nut because his name in the script is Wall Street Bro,” Fleet notes. “That was a complicated motion control shot, repeating the move over and over again. We had to shoot multiple plates of Simon Pegg and the guy in the bed. Special effects and prosthetics created a dummy guy with a hole in his chest with practical blood dripping down. It was meshing it together and getting the timing right in post. On top of that, there was the CG blood immediately around Simon Pegg.” The phasing effect had to avoid appearing as a dissolve. “I had this idea of doing high-frequency vibration on the X axis loosely based on how The Flash vibrates through walls. You want everything to have a loose motivation that then helps trigger the visuals. We tried not to overcomplicate that because, ultimately, you want something like that to be quick. If you spend too much time on phasing, it can look cheesy. In our case, it was a lot of false walls. Simon Pegg is running into a greenscreen hole which we plug in with a wall or coming out of one. I went off the actor’s action, and we added a light opacity mix with some X-axis shake.”

    Providing a different twist to the fights was the replacement of spurting blood with photoreal rubber duckies during a drug-induced hallucination.

    Homelanderbreaks a mirror which emphasizes his multiple personality disorder. “The original plan was that special effects was going to pre-break a mirror, and we were going to shoot Anthony Starr moving his head doing all of the performances in the different parts of the mirror,” Fleet reveals. “This was all based on a photo that my ex-brother-in-law sent me. He was walking down a street in Glendale, California, came across a broken mirror that someone had thrown out, and took a photo of himself where he had five heads in the mirror. We get there on the day, and I’m realizing that this is really complicated. Anthony has to do these five different performances, and we have to deal with infinite mirrors. At the last minute, I said, ‘We have to do this on a clean mirror.’ We did it on a clear mirror and gave Anthony different eyelines. The mirror break was all done in post, and we were able to cheat his head slightly and art-direct where the break crosses his chin. Editorial was able to do split screens for the timing of the dialogue.”

    “For the shots where the sheep are still and scream to the camera, Untold Studios did a bunch of R&D and came up with baboon teeth. I tried to keep anything real as much as possible, but, obviously, when sheep are flying, they have to be CG. I call it the Battlestar Galactica theory, where I like to shake the camera, overshoot shots and make it sloppy when they’re in the air so you can add motion blur. Comedy also helps sell visual effects.”
    —Stephan Fleet, VFX Supervisor

    Initially, the plan was to use a practical mirror, but creating a digital version proved to be the more effective solution.

    A different spin on the bloodbath occurs during a fight when a drugged Frenchiehallucinates as Kimiko Miyashirogoes on a killing spree. “We went back and forth with a lot of different concepts for what this hallucination would be,” Fleet remarks. “When we filmed it, we landed on Frenchie having a synesthesia moment where he’s seeing a lot of abstract colors flying in the air. We started getting into that in post and it wasn’t working. We went back to the rubber duckies, which goes back to the story of him in the bathtub. What’s in the bathtub? Rubber duckies, bubbles and water. There was a lot of physics and logic required to figure out how these rubber duckies could float out of someone’s neck. We decided on bubbles when Kimiko hits people’s heads. At one point, we had water when she got shot, but it wasn’t working, so we killed it. We probably did about 100 different versions. We got really detailed with our rubber duckie modeling because we didn’t want it to look cartoony. That took a long time.”

    Ambrosius, voiced by Tilda Swinton, gets a lot more screentime in Season 4.

    When Splintersplits in two was achieved heavily in CG. “Erik threw out the words ‘cellular mitosis’ early on as something he wanted to use,” Fleet states. “We shot Rob Benedict on a greenscreen doing all of the different performances for the clones that pop out. It was a crazy amount of CG work with Houdini and particle and skin effects. We previs’d the sequence so we had specific actions. One clone comes out to the right and the other pulls backwards.” What tends to go unnoticed by many is Splinter’s clones setting up for a press conference being held by Firecracker. “It’s funny how no one brings up the 22-hour motion control shot that we had to do with Splinter on the stage, which was the most complicated shot!” Fleet observes. “We have this sweeping long shot that brings you into the room and follows Splinter as he carries a container to the stage and hands it off to a clone, and then you reveal five more of them interweaving each other and interacting with all of these objects. It’s like a minute-long dance. First off, you have to choreograph it. We previs’d it, but then you need to get people to do it. We hired dancers and put different colored armbands on them. The camera is like another performer, and a metronome is going, which enables you to find a pace. That took about eight hours of rehearsal. Then Rob has to watch each one of their performances and mimic it to the beat. When he is handing off a box of cables, it’s to a double who is going to have to be erased and be him on the other side. They have to be almost perfect in their timing and lineup in order to take it over in visual effects and make it work.”
    #bouncing #rubber #duckies #flying #sheep
    BOUNCING FROM RUBBER DUCKIES AND FLYING SHEEP TO CLONES FOR THE BOYS SEASON 4
    By TREVOR HOGG Images courtesy of Prime Video. For those seeking an alternative to the MCU, Prime Video has two offerings of the live-action and animated variety that take the superhero genre into R-rated territory where the hands of the god-like figures get dirty, bloodied and severed. “The Boys is about the intersection of celebrity and politics using superheroes,” states Stephan Fleet, VFX Supervisor on The Boys. “Sometimes I see the news and I don’t even know we can write to catch up to it! But we try. Invincible is an intense look at an alternate DC Universe that has more grit to the superhero side of it all. On one hand, I was jealous watching Season 1 of Invincible because in animation you can do things that you can’t do in real life on a budget.” Season 4 does not tone down the blood, gore and body count. Fleet notes, “The writers almost have this dialogue with us. Sometimes, they’ll write in the script, ‘And Fleet will come up with a cool visual effect for how to kill this person.’ Or, ‘Chhiu, our fight coordinator, will make an awesome fight.’ It is a frequent topic of conversation. We’re constantly trying to be inventive and create new ways to kill people!” When Splintersplits in two, the cloning effect was inspired by cellular mitosis. “The writers almost have this dialogue with us. Sometimes, they’ll write in the script, ‘And Fleet will come up with a cool visual effect for how to kill this person.’ Or, ‘Chhiu, our fight coordinator, will make an awesome fight.’ It is a frequent topic of conversation. We’re constantly trying to be inventive and create new ways to kill people!” —Stephan Fleet, VFX Supervisor A total of 1,600 visual effects shots were created for the eight episodes by ILM, Pixomondo, MPC Toronto, Spin VFX, DNEG, Untold Studios, Luma Pictures and Rocket Science VFX. Previs was a critical part of the process. “We have John Griffith, who owns a small company called CNCPT out of Texas, and he does wonderful Unreal Engine level previs,” Fleet remarks. “On set, we have a cartoon of what is going to be done, and you’ll be amazed, specifically for action and heavy visual effects stuff, how close those shots are to the previs when we finish.” Founding Director of Federal Bureau of Superhuman Affairs, Victoria Neuman, literally gets ripped in half by two tendrils coming out of Compound V-enhanced Billy Butcher, the leader of superhero resistance group The Boys. “The word that we like to use on this show is ‘grounded,’ and I like to say ‘grounded’ with an asterisk in this day and age because we’re grounded until we get to killing people in the craziest ways. In this case, having someone floating in the air and being ripped in half by two tendrils was all CG.” Multiple plates were shot to enable Simon Pegg to phase through the actor laying in a hospital bed. Testing can get rather elaborate. “For that end scene with Butcher’s tendrils, the room was two stories, and we were able to put the camera up high along with a bunch of blood cannons,” Fleet recalls. “When the body rips in half and explodes, there is a practical component. We rained down a bunch of real blood and guts right in front of Huey. It’s a known joke that we like to douse Jack Quaid with blood as much as possible! In this case, the special effects team led by Hudson Kenny needed to test it the day before, and I said, “I’ll be the guinea pig for the test.’ They covered the whole place with plastic like it was a Dexter kill room because you don’t want to destroy the set. I’m standing there in a white hazmat suit with goggles on, covered from head to toe in plastic and waiting as they’re tweaking all of these things. It sounds like World War II going on. They’re on walkie talkies to each other, and then all of a sudden, it’s ‘Five, four, three, two, one…’  And I get exploded with blood. I wanted to see what it was like, and it’s intense.” “On set, we have a cartoon of what is going to be done, and you’ll be amazed, specifically for action and heavy visual effects stuff, how close those shots are to the previs when we finish.” —Stephan Fleet, VFX Supervisor The Deep has a love affair with an octopus called Ambrosius, voiced by Tilda Swinton. “It’s implied bestiality!” Fleet laughs. “I would call it more of a romance. What was fun from my perspective is that I knew what the look was going to be, so then it’s about putting in the details and the animation. One of the instincts that you always have when you’re making a sea creature that talks to a humanyou tend to want to give it human gestures and eyebrows. Erik Kripkesaid, ‘No. We have to find things that an octopus could do that conveys the same emotion.’ That’s when ideas came in, such as putting a little The Deep toy inside the water tank. When Ambrosius is trying to have an intimate moment or connect with him, she can wrap a tentacle around that. My favorite experience doing Ambrosius was when The Deep is reading poetry to her on a bed. CG creatures touching humans is one of the more complicated things to do and make look real. Ambrosius’ tentacles reach for his arm, and it becomes an intimate moment. More than touching the skin, displacing the bedsheet as Ambrosius moved ended up becoming a lot of CG, and we had to go back and forth a few times to get that looking right; that turned out to be tricky.” A building is replaced by a massive crowd attending a rally being held by Homelander. In a twisted form of sexual foreplay, Sister Sage has The Deep perform a transorbital lobotomy on her. “Thank you, Amazon for selling lobotomy tools as novelty items!” Fleet chuckles. “We filmed it with a lobotomy tool on set. There is a lot of safety involved in doing something like that. Obviously, you don’t want to put any performer in any situation where they come close to putting anything real near their eye. We created this half lobotomy tool and did this complicated split screen with the lobotomy tool on a teeter totter. The Deep wasin one shot and Sister Sage reacted in the other shot. To marry the two ended up being a lot of CG work. Then there are these close-ups which are full CG. I always keep a dummy head that is painted gray that I use all of the time for reference. In macrophotography I filmed this lobotomy tool going right into the eye area. I did that because the tool is chrome, so it’s reflective and has ridges. It has an interesting reflective property. I was able to see how and what part of the human eye reflects onto the tool. A lot of that shot became about realistic reflections and lighting on the tool. Then heavy CG for displacing the eye and pushing the lobotomy tool into it. That was one of the more complicated sequences that we had to achieve.” In order to create an intimate moment between Ambrosius and The Deep, a toy version of the superhero was placed inside of the water tank that she could wrap a tentacle around. “The word that we like to use on this show is ‘grounded,’ and I like to say ‘grounded’ with an asterisk in this day and age because we’re grounded until we get to killing people in the craziest ways. In this case, having someone floating in the air and being ripped in half by two tendrils was all CG.” —Stephan Fleet, VFX Supervisor Sheep and chickens embark on a violent rampage courtesy of Compound V with the latter piercing the chest of a bodyguard belonging to Victoria Neuman. “Weirdly, that was one of our more traditional shots,’ Fleet states. “What is fun about that one is I asked for real chickens as reference. The chicken flying through his chest is real. It’s our chicken wrangler in green suit gently tossing a chicken. We blended two real plates together with some CG in the middle.” A connection was made with a sci-fi classic. “The sheep kill this bull, and we shot it is in this narrow corridor of fencing. When they run, I always equated it as the Trench Run in Star Wars and looked at the sheep as TIE fighters or X-wings coming at them.” The scene was one of the scarier moments for the visual effects team. Fleet explains, “When I read the script, I thought this could be the moment where we jump the shark. For the shots where the sheep are still and scream to the camera, Untold Studios did a bunch of R&D and came up with baboon teeth. I tried to keep anything real as much as possible, but, obviously, when sheep are flying, they have to be CG. I call it the Battlestar Galactica theory, where I like to shake the camera, overshoot shots and make it sloppy when they’re in the air so you can add motion blur. Comedy also helps sell visual effects.” The sheep injected with Compound V develop the ability to fly and were shot in an imperfect manner to help ground the scenes. Once injected with Compound V, Hugh Campbell Sr.develops the ability to phase through objects, including human beings. “We called it the Bro-nut because his name in the script is Wall Street Bro,” Fleet notes. “That was a complicated motion control shot, repeating the move over and over again. We had to shoot multiple plates of Simon Pegg and the guy in the bed. Special effects and prosthetics created a dummy guy with a hole in his chest with practical blood dripping down. It was meshing it together and getting the timing right in post. On top of that, there was the CG blood immediately around Simon Pegg.” The phasing effect had to avoid appearing as a dissolve. “I had this idea of doing high-frequency vibration on the X axis loosely based on how The Flash vibrates through walls. You want everything to have a loose motivation that then helps trigger the visuals. We tried not to overcomplicate that because, ultimately, you want something like that to be quick. If you spend too much time on phasing, it can look cheesy. In our case, it was a lot of false walls. Simon Pegg is running into a greenscreen hole which we plug in with a wall or coming out of one. I went off the actor’s action, and we added a light opacity mix with some X-axis shake.” Providing a different twist to the fights was the replacement of spurting blood with photoreal rubber duckies during a drug-induced hallucination. Homelanderbreaks a mirror which emphasizes his multiple personality disorder. “The original plan was that special effects was going to pre-break a mirror, and we were going to shoot Anthony Starr moving his head doing all of the performances in the different parts of the mirror,” Fleet reveals. “This was all based on a photo that my ex-brother-in-law sent me. He was walking down a street in Glendale, California, came across a broken mirror that someone had thrown out, and took a photo of himself where he had five heads in the mirror. We get there on the day, and I’m realizing that this is really complicated. Anthony has to do these five different performances, and we have to deal with infinite mirrors. At the last minute, I said, ‘We have to do this on a clean mirror.’ We did it on a clear mirror and gave Anthony different eyelines. The mirror break was all done in post, and we were able to cheat his head slightly and art-direct where the break crosses his chin. Editorial was able to do split screens for the timing of the dialogue.” “For the shots where the sheep are still and scream to the camera, Untold Studios did a bunch of R&D and came up with baboon teeth. I tried to keep anything real as much as possible, but, obviously, when sheep are flying, they have to be CG. I call it the Battlestar Galactica theory, where I like to shake the camera, overshoot shots and make it sloppy when they’re in the air so you can add motion blur. Comedy also helps sell visual effects.” —Stephan Fleet, VFX Supervisor Initially, the plan was to use a practical mirror, but creating a digital version proved to be the more effective solution. A different spin on the bloodbath occurs during a fight when a drugged Frenchiehallucinates as Kimiko Miyashirogoes on a killing spree. “We went back and forth with a lot of different concepts for what this hallucination would be,” Fleet remarks. “When we filmed it, we landed on Frenchie having a synesthesia moment where he’s seeing a lot of abstract colors flying in the air. We started getting into that in post and it wasn’t working. We went back to the rubber duckies, which goes back to the story of him in the bathtub. What’s in the bathtub? Rubber duckies, bubbles and water. There was a lot of physics and logic required to figure out how these rubber duckies could float out of someone’s neck. We decided on bubbles when Kimiko hits people’s heads. At one point, we had water when she got shot, but it wasn’t working, so we killed it. We probably did about 100 different versions. We got really detailed with our rubber duckie modeling because we didn’t want it to look cartoony. That took a long time.” Ambrosius, voiced by Tilda Swinton, gets a lot more screentime in Season 4. When Splintersplits in two was achieved heavily in CG. “Erik threw out the words ‘cellular mitosis’ early on as something he wanted to use,” Fleet states. “We shot Rob Benedict on a greenscreen doing all of the different performances for the clones that pop out. It was a crazy amount of CG work with Houdini and particle and skin effects. We previs’d the sequence so we had specific actions. One clone comes out to the right and the other pulls backwards.” What tends to go unnoticed by many is Splinter’s clones setting up for a press conference being held by Firecracker. “It’s funny how no one brings up the 22-hour motion control shot that we had to do with Splinter on the stage, which was the most complicated shot!” Fleet observes. “We have this sweeping long shot that brings you into the room and follows Splinter as he carries a container to the stage and hands it off to a clone, and then you reveal five more of them interweaving each other and interacting with all of these objects. It’s like a minute-long dance. First off, you have to choreograph it. We previs’d it, but then you need to get people to do it. We hired dancers and put different colored armbands on them. The camera is like another performer, and a metronome is going, which enables you to find a pace. That took about eight hours of rehearsal. Then Rob has to watch each one of their performances and mimic it to the beat. When he is handing off a box of cables, it’s to a double who is going to have to be erased and be him on the other side. They have to be almost perfect in their timing and lineup in order to take it over in visual effects and make it work.” #bouncing #rubber #duckies #flying #sheep
    WWW.VFXVOICE.COM
    BOUNCING FROM RUBBER DUCKIES AND FLYING SHEEP TO CLONES FOR THE BOYS SEASON 4
    By TREVOR HOGG Images courtesy of Prime Video. For those seeking an alternative to the MCU, Prime Video has two offerings of the live-action and animated variety that take the superhero genre into R-rated territory where the hands of the god-like figures get dirty, bloodied and severed. “The Boys is about the intersection of celebrity and politics using superheroes,” states Stephan Fleet, VFX Supervisor on The Boys. “Sometimes I see the news and I don’t even know we can write to catch up to it! But we try. Invincible is an intense look at an alternate DC Universe that has more grit to the superhero side of it all. On one hand, I was jealous watching Season 1 of Invincible because in animation you can do things that you can’t do in real life on a budget.” Season 4 does not tone down the blood, gore and body count. Fleet notes, “The writers almost have this dialogue with us. Sometimes, they’ll write in the script, ‘And Fleet will come up with a cool visual effect for how to kill this person.’ Or, ‘Chhiu, our fight coordinator, will make an awesome fight.’ It is a frequent topic of conversation. We’re constantly trying to be inventive and create new ways to kill people!” When Splinter (Rob Benedict) splits in two, the cloning effect was inspired by cellular mitosis. “The writers almost have this dialogue with us. Sometimes, they’ll write in the script, ‘And Fleet will come up with a cool visual effect for how to kill this person.’ Or, ‘Chhiu, our fight coordinator, will make an awesome fight.’ It is a frequent topic of conversation. We’re constantly trying to be inventive and create new ways to kill people!” —Stephan Fleet, VFX Supervisor A total of 1,600 visual effects shots were created for the eight episodes by ILM, Pixomondo, MPC Toronto, Spin VFX, DNEG, Untold Studios, Luma Pictures and Rocket Science VFX. Previs was a critical part of the process. “We have John Griffith [Previs Director], who owns a small company called CNCPT out of Texas, and he does wonderful Unreal Engine level previs,” Fleet remarks. “On set, we have a cartoon of what is going to be done, and you’ll be amazed, specifically for action and heavy visual effects stuff, how close those shots are to the previs when we finish.” Founding Director of Federal Bureau of Superhuman Affairs, Victoria Neuman, literally gets ripped in half by two tendrils coming out of Compound V-enhanced Billy Butcher, the leader of superhero resistance group The Boys. “The word that we like to use on this show is ‘grounded,’ and I like to say ‘grounded’ with an asterisk in this day and age because we’re grounded until we get to killing people in the craziest ways. In this case, having someone floating in the air and being ripped in half by two tendrils was all CG.” Multiple plates were shot to enable Simon Pegg to phase through the actor laying in a hospital bed. Testing can get rather elaborate. “For that end scene with Butcher’s tendrils, the room was two stories, and we were able to put the camera up high along with a bunch of blood cannons,” Fleet recalls. “When the body rips in half and explodes, there is a practical component. We rained down a bunch of real blood and guts right in front of Huey. It’s a known joke that we like to douse Jack Quaid with blood as much as possible! In this case, the special effects team led by Hudson Kenny needed to test it the day before, and I said, “I’ll be the guinea pig for the test.’ They covered the whole place with plastic like it was a Dexter kill room because you don’t want to destroy the set. I’m standing there in a white hazmat suit with goggles on, covered from head to toe in plastic and waiting as they’re tweaking all of these things. It sounds like World War II going on. They’re on walkie talkies to each other, and then all of a sudden, it’s ‘Five, four, three, two, one…’  And I get exploded with blood. I wanted to see what it was like, and it’s intense.” “On set, we have a cartoon of what is going to be done, and you’ll be amazed, specifically for action and heavy visual effects stuff, how close those shots are to the previs when we finish.” —Stephan Fleet, VFX Supervisor The Deep has a love affair with an octopus called Ambrosius, voiced by Tilda Swinton. “It’s implied bestiality!” Fleet laughs. “I would call it more of a romance. What was fun from my perspective is that I knew what the look was going to be [from Season 3], so then it’s about putting in the details and the animation. One of the instincts that you always have when you’re making a sea creature that talks to a human [is] you tend to want to give it human gestures and eyebrows. Erik Kripke [Creator, Executive Producer, Showrunner, Director, Writer] said, ‘No. We have to find things that an octopus could do that conveys the same emotion.’ That’s when ideas came in, such as putting a little The Deep toy inside the water tank. When Ambrosius is trying to have an intimate moment or connect with him, she can wrap a tentacle around that. My favorite experience doing Ambrosius was when The Deep is reading poetry to her on a bed. CG creatures touching humans is one of the more complicated things to do and make look real. Ambrosius’ tentacles reach for his arm, and it becomes an intimate moment. More than touching the skin, displacing the bedsheet as Ambrosius moved ended up becoming a lot of CG, and we had to go back and forth a few times to get that looking right; that turned out to be tricky.” A building is replaced by a massive crowd attending a rally being held by Homelander. In a twisted form of sexual foreplay, Sister Sage has The Deep perform a transorbital lobotomy on her. “Thank you, Amazon for selling lobotomy tools as novelty items!” Fleet chuckles. “We filmed it with a lobotomy tool on set. There is a lot of safety involved in doing something like that. Obviously, you don’t want to put any performer in any situation where they come close to putting anything real near their eye. We created this half lobotomy tool and did this complicated split screen with the lobotomy tool on a teeter totter. The Deep was [acting in a certain way] in one shot and Sister Sage reacted in the other shot. To marry the two ended up being a lot of CG work. Then there are these close-ups which are full CG. I always keep a dummy head that is painted gray that I use all of the time for reference. In macrophotography I filmed this lobotomy tool going right into the eye area. I did that because the tool is chrome, so it’s reflective and has ridges. It has an interesting reflective property. I was able to see how and what part of the human eye reflects onto the tool. A lot of that shot became about realistic reflections and lighting on the tool. Then heavy CG for displacing the eye and pushing the lobotomy tool into it. That was one of the more complicated sequences that we had to achieve.” In order to create an intimate moment between Ambrosius and The Deep, a toy version of the superhero was placed inside of the water tank that she could wrap a tentacle around. “The word that we like to use on this show is ‘grounded,’ and I like to say ‘grounded’ with an asterisk in this day and age because we’re grounded until we get to killing people in the craziest ways. In this case, having someone floating in the air and being ripped in half by two tendrils was all CG.” —Stephan Fleet, VFX Supervisor Sheep and chickens embark on a violent rampage courtesy of Compound V with the latter piercing the chest of a bodyguard belonging to Victoria Neuman. “Weirdly, that was one of our more traditional shots,’ Fleet states. “What is fun about that one is I asked for real chickens as reference. The chicken flying through his chest is real. It’s our chicken wrangler in green suit gently tossing a chicken. We blended two real plates together with some CG in the middle.” A connection was made with a sci-fi classic. “The sheep kill this bull, and we shot it is in this narrow corridor of fencing. When they run, I always equated it as the Trench Run in Star Wars and looked at the sheep as TIE fighters or X-wings coming at them.” The scene was one of the scarier moments for the visual effects team. Fleet explains, “When I read the script, I thought this could be the moment where we jump the shark. For the shots where the sheep are still and scream to the camera, Untold Studios did a bunch of R&D and came up with baboon teeth. I tried to keep anything real as much as possible, but, obviously, when sheep are flying, they have to be CG. I call it the Battlestar Galactica theory, where I like to shake the camera, overshoot shots and make it sloppy when they’re in the air so you can add motion blur. Comedy also helps sell visual effects.” The sheep injected with Compound V develop the ability to fly and were shot in an imperfect manner to help ground the scenes. Once injected with Compound V, Hugh Campbell Sr. (Simon Pegg) develops the ability to phase through objects, including human beings. “We called it the Bro-nut because his name in the script is Wall Street Bro,” Fleet notes. “That was a complicated motion control shot, repeating the move over and over again. We had to shoot multiple plates of Simon Pegg and the guy in the bed. Special effects and prosthetics created a dummy guy with a hole in his chest with practical blood dripping down. It was meshing it together and getting the timing right in post. On top of that, there was the CG blood immediately around Simon Pegg.” The phasing effect had to avoid appearing as a dissolve. “I had this idea of doing high-frequency vibration on the X axis loosely based on how The Flash vibrates through walls. You want everything to have a loose motivation that then helps trigger the visuals. We tried not to overcomplicate that because, ultimately, you want something like that to be quick. If you spend too much time on phasing, it can look cheesy. In our case, it was a lot of false walls. Simon Pegg is running into a greenscreen hole which we plug in with a wall or coming out of one. I went off the actor’s action, and we added a light opacity mix with some X-axis shake.” Providing a different twist to the fights was the replacement of spurting blood with photoreal rubber duckies during a drug-induced hallucination. Homelander (Anthony Starr) breaks a mirror which emphasizes his multiple personality disorder. “The original plan was that special effects was going to pre-break a mirror, and we were going to shoot Anthony Starr moving his head doing all of the performances in the different parts of the mirror,” Fleet reveals. “This was all based on a photo that my ex-brother-in-law sent me. He was walking down a street in Glendale, California, came across a broken mirror that someone had thrown out, and took a photo of himself where he had five heads in the mirror. We get there on the day, and I’m realizing that this is really complicated. Anthony has to do these five different performances, and we have to deal with infinite mirrors. At the last minute, I said, ‘We have to do this on a clean mirror.’ We did it on a clear mirror and gave Anthony different eyelines. The mirror break was all done in post, and we were able to cheat his head slightly and art-direct where the break crosses his chin. Editorial was able to do split screens for the timing of the dialogue.” “For the shots where the sheep are still and scream to the camera, Untold Studios did a bunch of R&D and came up with baboon teeth. I tried to keep anything real as much as possible, but, obviously, when sheep are flying, they have to be CG. I call it the Battlestar Galactica theory, where I like to shake the camera, overshoot shots and make it sloppy when they’re in the air so you can add motion blur. Comedy also helps sell visual effects.” —Stephan Fleet, VFX Supervisor Initially, the plan was to use a practical mirror, but creating a digital version proved to be the more effective solution. A different spin on the bloodbath occurs during a fight when a drugged Frenchie (Tomer Capone) hallucinates as Kimiko Miyashiro (Karen Fukuhara) goes on a killing spree. “We went back and forth with a lot of different concepts for what this hallucination would be,” Fleet remarks. “When we filmed it, we landed on Frenchie having a synesthesia moment where he’s seeing a lot of abstract colors flying in the air. We started getting into that in post and it wasn’t working. We went back to the rubber duckies, which goes back to the story of him in the bathtub. What’s in the bathtub? Rubber duckies, bubbles and water. There was a lot of physics and logic required to figure out how these rubber duckies could float out of someone’s neck. We decided on bubbles when Kimiko hits people’s heads. At one point, we had water when she got shot, but it wasn’t working, so we killed it. We probably did about 100 different versions. We got really detailed with our rubber duckie modeling because we didn’t want it to look cartoony. That took a long time.” Ambrosius, voiced by Tilda Swinton, gets a lot more screentime in Season 4. When Splinter (Rob Benedict) splits in two was achieved heavily in CG. “Erik threw out the words ‘cellular mitosis’ early on as something he wanted to use,” Fleet states. “We shot Rob Benedict on a greenscreen doing all of the different performances for the clones that pop out. It was a crazy amount of CG work with Houdini and particle and skin effects. We previs’d the sequence so we had specific actions. One clone comes out to the right and the other pulls backwards.” What tends to go unnoticed by many is Splinter’s clones setting up for a press conference being held by Firecracker (Valorie Curry). “It’s funny how no one brings up the 22-hour motion control shot that we had to do with Splinter on the stage, which was the most complicated shot!” Fleet observes. “We have this sweeping long shot that brings you into the room and follows Splinter as he carries a container to the stage and hands it off to a clone, and then you reveal five more of them interweaving each other and interacting with all of these objects. It’s like a minute-long dance. First off, you have to choreograph it. We previs’d it, but then you need to get people to do it. We hired dancers and put different colored armbands on them. The camera is like another performer, and a metronome is going, which enables you to find a pace. That took about eight hours of rehearsal. Then Rob has to watch each one of their performances and mimic it to the beat. When he is handing off a box of cables, it’s to a double who is going to have to be erased and be him on the other side. They have to be almost perfect in their timing and lineup in order to take it over in visual effects and make it work.”
    0 التعليقات 0 المشاركات
  • Mickey 17, c'est un film réalisé par Bong Joon Ho avec Robert Pattinson. DNEG a partagé quelques secrets sur les effets visuels, mais bon, c'est juste des images de science-fiction et un peu de critique sociale. Un gars qui devient "Remplaçable" dans une colonie spatiale. Pas vraiment excitant, mais ça a l'air... intéressant, je suppose. Si vous aimez ce genre de trucs, peut-être que ça vaut le coup d'œil.

    #Mickey17
    #DNEG
    #EffetsVisuels
    #BongJoonHo
    #RobertPattinson
    Mickey 17, c'est un film réalisé par Bong Joon Ho avec Robert Pattinson. DNEG a partagé quelques secrets sur les effets visuels, mais bon, c'est juste des images de science-fiction et un peu de critique sociale. Un gars qui devient "Remplaçable" dans une colonie spatiale. Pas vraiment excitant, mais ça a l'air... intéressant, je suppose. Si vous aimez ce genre de trucs, peut-être que ça vaut le coup d'œil. #Mickey17 #DNEG #EffetsVisuels #BongJoonHo #RobertPattinson
    Mickey 17 : DNEG dévoile les secrets des VFX
    FxGuide a mis en ligne une interview en compagne de DNEG sur les effets visuels du film Mickey 17, réalisé par Bong Joon Ho et avec Robert Pattinson dans le rôle principal. Cette comédie noire mêlant science-fiction et critique sociale met en scène u
    1 التعليقات 0 المشاركات
  • Dorothy Ballarini, character and creature artist, 3D modeling, Brazilian artist, Jurassic World, La Petite Sirène, Blanche-Neige, DNEG, Cinesite, MPC, Framestore

    ---

    In the shadows of the vibrant Brazilian landscape, where dreams intertwine with the pain of reality, a quiet artist breathes life into the depths of imagination. Dorothy Ballarini, a name that resonates with both magic and sorrow, is a master of character and creature design, conjuring beings that are at once beautiful and hauntin...
    Dorothy Ballarini, character and creature artist, 3D modeling, Brazilian artist, Jurassic World, La Petite Sirène, Blanche-Neige, DNEG, Cinesite, MPC, Framestore --- In the shadows of the vibrant Brazilian landscape, where dreams intertwine with the pain of reality, a quiet artist breathes life into the depths of imagination. Dorothy Ballarini, a name that resonates with both magic and sorrow, is a master of character and creature design, conjuring beings that are at once beautiful and hauntin...
    **The Heartbreak of Creation: Dorothy Ballarini and the Art of Characters and Creatures**
    Dorothy Ballarini, character and creature artist, 3D modeling, Brazilian artist, Jurassic World, La Petite Sirène, Blanche-Neige, DNEG, Cinesite, MPC, Framestore --- In the shadows of the vibrant Brazilian landscape, where dreams intertwine with the pain of reality, a quiet artist breathes life into the depths of imagination. Dorothy Ballarini, a name that resonates with both magic and...
    Like
    Love
    Wow
    Sad
    Angry
    138
    1 التعليقات 0 المشاركات
  • Twinning, Creepers and more VFX covered in the ‘Mickey 17’ issue

    Issue #32 of befores & afters magazine is now out in PRINT and DIGITAL!
    Issue #32 of befores & afters magazine is now out in PRINT and DIGITAL. It’s a deep dive into the visual effects of Bong Joon Ho’s Mickey 17, starring Robert Pattinson.
    The film contains creatures, spacecraft, snow-filled landscapes and several scenes where actor Robert Pattinson appears as two ‘expendable’ clone characters—Mickey 17 and Mickey 18—on screen at the same time.

    The new issue explores this twinning work, as well as going into detail on the creatures and environment visual effects largely orchestrated by DNEG, Framestore, Rising Sun Pictures and Turncoat Pictures.

    You can grab the issue in PRINT from Amazon, or as a DIGITAL EDITION on Patreon.
    Remember, you can also subscribe to the DIGITAL EDITION as a tier on the Patreon and get a new issue every time one is released.

    Hope you enjoy the latest issue!
    Here’s the links to various Amazon stores:
    USA: 
    UK: 
    Canada: 
    Germany: 
    France: 
    Spain: 
    Italy: 
    Australia: 
    Japan: 
    Sweden: 
    Poland: 
    Netherlands: 
     
    The post Twinning, Creepers and more VFX covered in the ‘Mickey 17’ issue appeared first on befores & afters.
    #twinning #creepers #more #vfx #covered
    Twinning, Creepers and more VFX covered in the ‘Mickey 17’ issue
    Issue #32 of befores & afters magazine is now out in PRINT and DIGITAL! Issue #32 of befores & afters magazine is now out in PRINT and DIGITAL. It’s a deep dive into the visual effects of Bong Joon Ho’s Mickey 17, starring Robert Pattinson. The film contains creatures, spacecraft, snow-filled landscapes and several scenes where actor Robert Pattinson appears as two ‘expendable’ clone characters—Mickey 17 and Mickey 18—on screen at the same time. The new issue explores this twinning work, as well as going into detail on the creatures and environment visual effects largely orchestrated by DNEG, Framestore, Rising Sun Pictures and Turncoat Pictures. You can grab the issue in PRINT from Amazon, or as a DIGITAL EDITION on Patreon. Remember, you can also subscribe to the DIGITAL EDITION as a tier on the Patreon and get a new issue every time one is released. Hope you enjoy the latest issue! Here’s the links to various Amazon stores: USA:  UK:  Canada:  Germany:  France:  Spain:  Italy:  Australia:  Japan:  Sweden:  Poland:  Netherlands:    The post Twinning, Creepers and more VFX covered in the ‘Mickey 17’ issue appeared first on befores & afters. #twinning #creepers #more #vfx #covered
    BEFORESANDAFTERS.COM
    Twinning, Creepers and more VFX covered in the ‘Mickey 17’ issue
    Issue #32 of befores & afters magazine is now out in PRINT and DIGITAL! Issue #32 of befores & afters magazine is now out in PRINT and DIGITAL. It’s a deep dive into the visual effects of Bong Joon Ho’s Mickey 17, starring Robert Pattinson. The film contains creatures (the Creepers), spacecraft, snow-filled landscapes and several scenes where actor Robert Pattinson appears as two ‘expendable’ clone characters—Mickey 17 and Mickey 18—on screen at the same time. The new issue explores this twinning work, as well as going into detail on the creatures and environment visual effects largely orchestrated by DNEG, Framestore, Rising Sun Pictures and Turncoat Pictures. You can grab the issue in PRINT from Amazon (that’s the US store, make sure you try your local Amazon store, too), or as a DIGITAL EDITION on Patreon. Remember, you can also subscribe to the DIGITAL EDITION as a tier on the Patreon and get a new issue every time one is released. Hope you enjoy the latest issue! Here’s the links to various Amazon stores: USA: https://www.amazon.com/dp/B0FCYRV86J UK: https://www.amazon.co.uk/dp/B0FCYRV86J Canada: https://www.amazon.ca/dp/B0FCYRV86J Germany: https://www.amazon.de/dp/B0FCYRV86J France: https://www.amazon.fr/dp/B0FCYRV86J Spain: https://www.amazon.es/dp/B0FCYRV86J Italy: https://www.amazon.it/dp/B0FCYRV86J Australia: https://www.amazon.com.au/dp/B0FCYRV86J Japan: https://www.amazon.co.jp/dp/B0FCYRV86J Sweden: https://www.amazon.se/dp/B0FCYRV86J Poland: https://www.amazon.pl/dp/B0FCYRV86J Netherlands: https://www.amazon.nl/dp/B0FCYRV86J   The post Twinning, Creepers and more VFX covered in the ‘Mickey 17’ issue appeared first on befores & afters.
    Like
    Love
    Wow
    Angry
    Sad
    443
    2 التعليقات 0 المشاركات
  • The Wheel of Time postviz reel from Proof

    For Season 3 of Amazon’s The Wheel of Time, Proof Inc. reimagined post-visualization, developing an innovative “Sketchvis” pipeline that blurred the boundaries between previs, postvis, and final VFX. Under Supervisor Steve Harrison, Proof created over 35 minutes of intricate, stylized visualizations across all eight episodes, establishing an expressive visual foundation for the series’ complex magical elements known as “channeling.”
    Proof’s Sketchvis combined 2D artistry with sophisticated 3D execution using Maya and Nuke, complemented by vibrant glows and intricate distortion effects. Each spell’s distinct energy was carefully choreographed, whether corkscrewing beams of power or serpentine streams of water, closely aligning with the narrative’s elemental logic and dramatically influencing the show’s pacing and visual storytelling.

    Working closely in daily collaboration with Production and VFX Supervisor Andy Scrase, the Proof team took on design challenges typically reserved for final VFX vendors like Framestore and DNEG. This proactive approach allowed Proof to define not only the aesthetic but also the motion logic of key magical sequences, creating a precise roadmap that remarkably mirrors what audiences will experience in the final episodes.
    For Proof, traditionally known for character animation and environmental previs, this venture into nuanced effect design and movement choreography represented both a creative challenge and a significant expansion of their artistic repertoire, adding to the visual texture of The Wheel of Time and pushing post-visualization into compelling new creative territory. The team contributed to all eight episodes with a core team of six artists. Proof’s ability to step beyond previs and postvis into effect design and movement development made them a key partner, enhancing in-camera performances and helping shape the visual language of the series.
    #wheel #time #postviz #reel #proof
    The Wheel of Time postviz reel from Proof
    For Season 3 of Amazon’s The Wheel of Time, Proof Inc. reimagined post-visualization, developing an innovative “Sketchvis” pipeline that blurred the boundaries between previs, postvis, and final VFX. Under Supervisor Steve Harrison, Proof created over 35 minutes of intricate, stylized visualizations across all eight episodes, establishing an expressive visual foundation for the series’ complex magical elements known as “channeling.” Proof’s Sketchvis combined 2D artistry with sophisticated 3D execution using Maya and Nuke, complemented by vibrant glows and intricate distortion effects. Each spell’s distinct energy was carefully choreographed, whether corkscrewing beams of power or serpentine streams of water, closely aligning with the narrative’s elemental logic and dramatically influencing the show’s pacing and visual storytelling. Working closely in daily collaboration with Production and VFX Supervisor Andy Scrase, the Proof team took on design challenges typically reserved for final VFX vendors like Framestore and DNEG. This proactive approach allowed Proof to define not only the aesthetic but also the motion logic of key magical sequences, creating a precise roadmap that remarkably mirrors what audiences will experience in the final episodes. For Proof, traditionally known for character animation and environmental previs, this venture into nuanced effect design and movement choreography represented both a creative challenge and a significant expansion of their artistic repertoire, adding to the visual texture of The Wheel of Time and pushing post-visualization into compelling new creative territory. The team contributed to all eight episodes with a core team of six artists. Proof’s ability to step beyond previs and postvis into effect design and movement development made them a key partner, enhancing in-camera performances and helping shape the visual language of the series. #wheel #time #postviz #reel #proof
    The Wheel of Time postviz reel from Proof
    For Season 3 of Amazon’s The Wheel of Time, Proof Inc. reimagined post-visualization, developing an innovative “Sketchvis” pipeline that blurred the boundaries between previs, postvis, and final VFX. Under Supervisor Steve Harrison, Proof created over 35 minutes of intricate, stylized visualizations across all eight episodes, establishing an expressive visual foundation for the series’ complex magical elements known as “channeling.” Proof’s Sketchvis combined 2D artistry with sophisticated 3D execution using Maya and Nuke, complemented by vibrant glows and intricate distortion effects. Each spell’s distinct energy was carefully choreographed, whether corkscrewing beams of power or serpentine streams of water, closely aligning with the narrative’s elemental logic and dramatically influencing the show’s pacing and visual storytelling. Working closely in daily collaboration with Production and VFX Supervisor Andy Scrase, the Proof team took on design challenges typically reserved for final VFX vendors like Framestore and DNEG. This proactive approach allowed Proof to define not only the aesthetic but also the motion logic of key magical sequences, creating a precise roadmap that remarkably mirrors what audiences will experience in the final episodes. For Proof, traditionally known for character animation and environmental previs, this venture into nuanced effect design and movement choreography represented both a creative challenge and a significant expansion of their artistic repertoire, adding to the visual texture of The Wheel of Time and pushing post-visualization into compelling new creative territory. The team contributed to all eight episodes with a core team of six artists. Proof’s ability to step beyond previs and postvis into effect design and movement development made them a key partner, enhancing in-camera performances and helping shape the visual language of the series.
    Like
    Love
    Wow
    Angry
    Sad
    226
    0 التعليقات 0 المشاركات
  • The Last of Us – Season 2: Alex Wang (Production VFX Supervisor) & Fiona Campbell Westgate (Production VFX Producer)

    After detailing the VFX work on The Last of Us Season 1 in 2023, Alex Wang returns to reflect on how the scope and complexity have evolved in Season 2.
    With close to 30 years of experience in the visual effects industry, Fiona Campbell Westgate has contributed to major productions such as Ghost in the Shell, Avatar: The Way of Water, Ant-Man and the Wasp: Quantumania, and Nyad. Her work on Nyad earned her a VES Award for Outstanding Supporting Visual Effects in a Photoreal Feature.
    Collaboration with Craig Mazin and Neil Druckmann is key to shaping the visual universe of The Last of Us. Can you share with us how you work with them and how they influence the visual direction of the series?
    Alex Wang // Craig visualizes the shot or scene before putting words on the page. His writing is always exceptionally detailed and descriptive, ultimately helping us to imagine the shot. Of course, no one understands The Last of Us better than Neil, who knows all aspects of the lore very well. He’s done much research and design work with the Naughty Dog team, so he gives us good guidance regarding creature and environment designs. I always try to begin with concept art to get the ball rolling with Craig and Neil’s ideas. This season, we collaborated with Chromatic Studios for concept art. They also contributed to the games, so I felt that continuity was beneficial for our show.
    Fiona Campbell Westgate // From the outset, it was clear that collaborating with Craig would be an exceptional experience. Early meetings revealed just how personable and invested Craig is. He works closely with every department to ensure that each episode is done to the highest level. Craig places unwavering trust in our VFX Supervisor, Alex Wang. They have an understanding between them that lends to an exceptional partnership. As the VFX Producer, I know how vital the dynamic between the Showrunner and VFX Supervisor is; working with these two has made for one of the best professional experiences of my career. 
    Photograph by Liane Hentscher/HBO
    How has your collaboration with Craig evolved between the first and second seasons? Were there any adjustments in the visual approach or narrative techniques you made this season?
    Alex Wang // Since everything was new in Season 1, we dedicated a lot of time and effort to exploring the show’s visual language, and we all learned a great deal about what worked and what didn’t for the show. In my initial conversations with Craig about Season 2, it was clear that he wanted to expand the show’s scope by utilizing what we established and learned in Season 1. He felt significantly more at ease fully committing to using VFX to help tell the story this season.
    The first season involved multiple VFX studios to handle the complexity of the effects. How did you divide the work among different studios for the second season?
    Alex Wang // Most of the vendors this season were also in Season 1, so we already had a shorthand. The VFX Producer, Fiona Campbell Westgate, and I work closely together to decide how to divide the work among our vendors. The type of work needs to be well-suited for the vendor and fit into our budget and schedule. We were extremely fortunate to have the vendors we did this season. I want to take this opportunity to thank Weta FX, DNEG, RISE, Distillery VFX, Storm Studios, Important Looking Pirates, Blackbird, Wylie Co., RVX, and VDK. We also had ILM for concept art and Digital Domain for previs.
    Fiona Campbell Westgate // Alex Wang and I were very aware of the tight delivery schedule, which added to the challenge of distributing the workload. We planned the work based on the individual studio’s capabilities, and tried not to burden them with back to back episodes wherever possible. Fortunately, there was shorthand with vendors from Season One, who were well-acquainted with the process and the quality of work the show required.

    The town of Jackson is a key location in The Last of Us. Could you explain how you approached creating and expanding this environment for the second season?
    Alex Wang // Since Season 1, this show has created incredible sets. However, the Jackson town set build is by far the most impressive in terms of scope. They constructed an 822 ft x 400 ft set in Minaty Bay that resembled a real town! I had early discussions with Production Designer Don MacAulay and his team about where they should concentrate their efforts and where VFX would make the most sense to take over. They focused on developing the town’s main street, where we believed most scenes would occur. There is a big reveal of Jackson in the first episode after Ellie comes out of the barn. Distillery VFX was responsible for the town’s extension, which appears seamless because the team took great pride in researching and ensuring the architecture aligned with the set while staying true to the tone of Jackson, Wyoming.
    Fiona Campbell Westgate // An impressive set was constructed in Minaty Bay, which served as the foundation for VFX to build upon. There is a beautiful establishing shot of Jackson in Episode 1 that was completed by Distillery, showing a safe and almost normal setting as Season Two starts. Across the episodes, Jackson set extensions were completed by our partners at RISE and Weta. Each had a different phase of Jackson to create, from almost idyllic to a town immersed in Battle. 
    What challenges did you face filming Jackson on both real and virtual sets? Was there a particular fusion between visual effects and live-action shots to make it feel realistic?
    Alex Wang // I always advocate for building exterior sets outdoors to take advantage of natural light. However, the drawback is that we cannot control the weather and lighting when filming over several days across two units. In Episode 2, there’s supposed to be a winter storm in Jackson, so maintaining consistency within the episode was essential. On sunny and rainy days, we used cranes to lift large 30x60ft screens to block the sun or rain. It was impossible to shield the entire set from the rain or sun, so we prioritized protecting the actors from sunlight or rain. Thus, you can imagine there was extensive weather cleanup for the episode to ensure consistency within the sequences.
    Fiona Campbell Westgate // We were fortunate that production built a large scale Jackson set. It provided a base for the full CG Jackson aerial shots and CG Set Extensions. The weather conditions at Minaty Bay presented a challenge during the filming of the end of the Battle sequence in Episode 2. While there were periods of bright sunshine, rainfall occurred during the filming of the end of the Battle sequence in Episode 2. In addition to the obvious visual effects work, it became necessary to replace the ground cover.
    Photograph by Liane Hentscher/HBO
    The attack on Jackson by the horde of infected in season 2 is a very intense moment. How did you approach the visual effects for this sequence? What techniques did you use to make the scale of the attack feel as impressive as it did?
    Alex Wang // We knew this would be a very complex sequence to shoot, and for it to be successful, we needed to start planning with the HODs from the very beginning. We began previs during prep with Weta FX and the episode’s director, Mark Mylod. The previs helped us understand Mark and the showrunner’s vision. This then served as a blueprint for all departments to follow, and in many instances, we filmed the previs.
    Fiona Campbell Westgate // The sheer size of the CG Infected Horde sets the tone for the scale of the Battle. It’s an intimidating moment when they are revealed through the blowing snow. The addition of CG explosions and atmospheric effects contributed in adding scale to the sequence. 

    Can you give us an insight into the technical challenges of capturing the infected horde? How much of the effect was done using CGI, and how much was achieved with practical effects?
    Alex Wang // Starting with a detailed previs that Mark and Craig approved was essential for planning the horde. We understood that we would never have enough stunt performers to fill a horde, nor could they carry out some stunts that would be too dangerous. I reviewed the previs with Stunt Coordinator Marny Eng numerous times to decide the best placements for her team’s stunt performers. We also collaborated with Barrie Gower from the Prosthetics team to determine the most effective allocation of his team’s efforts. Stunt performers positioned closest to the camera would receive the full prosthetic treatment, which can take hours.
    Weta FX was responsible for the incredible CG Infected horde work in the Jackson Battle. They have been a creative partner with HBO’s The Last of Us since Season 1, so they were brought on early for Season 2. I began discussions with Weta’s VFX supervisor, Nick Epstein, about how we could tackle these complex horde shots very early during the shoot.
    Typically, repetition in CG crowd scenes can be acceptable, such as armies with soldiers dressed in the same uniform or armour. However, for our Infected horde, Craig wanted to convey that the Infected didn’t come off an assembly line or all shop at the same clothing department store. Any repetition would feel artificial. These Infected were once civilians with families, or they were groups of raiders. We needed complex variations in height, body size, age, clothing, and hair. We built our base library of Infected, and then Nick and the Weta FX team developed a “mix and match” system, allowing the Infected to wear any costume and hair groom. A procedural texturing system was also developed for costumes, providing even greater variation.
    The most crucial aspect of the Infected horde was their motion. We had numerous shots cutting back-to-back with practical Infected, as well as shots where our CG Infected ran right alongside a stunt horde. It was incredibly unforgiving! Weta FX’s animation supervisor from Season 1, Dennis Yoo, returned for Season 2 to meet the challenge. Having been part of the first season, Dennis understood the expectations of Craig and Neil. Similar to issues of model repetition within a horde, it was relatively easy to perceive repetition, especially if they were running toward the same target. It was essential to enhance the details of their performances with nuances such as tripping and falling, getting back up, and trampling over each other. There also needed to be a difference in the Infected’s running speed. To ensure we had enough complexity within the horde, Dennis motion-captured almost 600 unique motion cycles.
    We had over a hundred shots in episode 2 that required CG Infected horde.
    Fiona Campbell Westgate // Nick Epstein, Weta VFX Supervisor, and Dennis Yoo, Weta Animation Supervisor, were faced with having to add hero, close-up Horde that had to integrate with practical Stunt performers. They achieved this through over 60 motion capture sessions and running it through a deformation system they developed. Every detail was applied to allow for a seamless blend with our practical Stunt performances. The Weta team created a custom costume and hair system that provided individual looks to the CG Infected Horde. We were able to avoid the repetitive look of a CG crowd due to these efforts.

    The movement of the infected horde is crucial for the intensity of the scene. How did you manage the animation and simulation of the infected to ensure smooth and realistic interaction with the environment?
    Fiona Campbell Westgate // We worked closely with the Stunt department to plan out positioning and where VFX would be adding the CG Horde. Craig Mazin wanted the Infected Horde to move in a way that humans cannot. The deformation system kept the body shape anatomically correct and allowed us to push the limits from how a human physically moves. 
    The Bloater makes a terrifying return this season. What were the key challenges in designing and animating this creature? How did you work on the Bloater’s interaction with the environment and other characters?
    Alex Wang // In Season 1, the Kansas City cul-de-sac sequence featured only a handful of Bloater shots. This season, however, nearly forty shots showcase the Bloater in broad daylight during the Battle of Jackson. We needed to redesign the Bloater asset to ensure it looked good in close-up shots from head to toe. Weta FX designed the Bloater for Season 1 and revamped the design for this season. Starting with the Bloater’s silhouette, it had to appear large, intimidating, and menacing. We explored enlarging the cordyceps head shape to make it feel almost like a crown, enhancing the Bloater’s impressive and strong presence.
    During filming, a stunt double stood in for the Bloater. This was mainly for scale reference and composition. It also helped the Infected stunt performers understand the Bloater’s spatial position, allowing them to avoid running through his space. Once we had an edit, Dennis mocapped the Bloater’s performances with his team. It is always challenging to get the motion right for a creature that weighs 600 pounds. We don’t want the mocap to be overly exaggerated, but it does break the character if the Bloater feels too “light.” The brilliant animation team at Weta FX brought the Bloater character to life and nailed it!
    When Tommy goes head-to-head with the Bloater, Craig was quite specific during the prep days about how the Bloater would bubble, melt, and burn as Tommy torches him with the flamethrower. Important Looking Pirates took on the “Burning Bloater” sequence, led by VFX Supervisor Philip Engstrom. They began with extensive R&D to ensure the Bloater’s skin would start to bubble and burn. ILP took the final Bloater asset from Weta FX and had to resculpt and texture the asset for the Bloater’s final burn state. Craig felt it was important for the Bloater to appear maimed at the end. The layers of FX were so complex that the R&D continued almost to the end of the delivery schedule.

    Fiona Campbell Westgate // This season the Bloater had to be bigger, more intimidating. The CG Asset was recreated to withstand the scrutiny of close ups and in daylight. Both Craig Mazin and Neil Druckmann worked closely with us during the process of the build. We referenced the game and applied elements of that version with ours. You’ll notice that his head is in the shape of crown, this is to convey he’s a powerful force. 
    During the Burning Bloater sequence in Episode 2, we brainstormed with Philip Engström, ILP VFX Supervisor, on how this creature would react to the flamethrower and how it would affect the ground as it burns. When the Bloater finally falls to the ground and dies, the extraordinary detail of the embers burning, fluid draining and melting the surrounding snow really sells that the CG creature was in the terrain. 

    Given the Bloater’s imposing size, how did you approach its integration into scenes with the actors? What techniques did you use to create such a realistic and menacing appearance?
    Fiona Campbell Westgate // For the Bloater, a stunt performer wearing a motion capture suit was filmed on set. This provided interaction with the actors and the environment. VFX enhanced the intensity of his movements, incorporating simulations to the CG Bloater’s skin and muscles that would reflect the weight and force as this terrifying creature moves. 

    Seattle in The Last of Us is a completely devastated city. Can you talk about how you recreated this destruction? What were the most difficult visual aspects to realize for this post-apocalyptic city?
    Fiona Campbell Westgate // We were meticulous in blending the CG destruction with the practical environment. The flora’s ability to overtake the environment had to be believable, and we adhered to the principle of form follows function. Due to the vastness of the CG devastation it was crucial to avoid repetitive effects. Consequently, our vendors were tasked with creating bespoke designs that evoked a sense of awe and beauty.
    Was Seattle’s architecture a key element in how you designed the visual effects? How did you adapt the city’s real-life urban landscape to meet the needs of the story while maintaining a coherent aesthetic?
    Alex Wang // It’s always important to Craig and Neil that we remain true to the cities our characters are in. DNEG was one of our primary vendors for Boston in Season 1, so it was natural for them to return for Season 2, this time focusing on Seattle. DNEG’s VFX Supervisor, Stephen James, who played a crucial role in developing the visual language of Boston for Season 1, also returns for this season. Stephen and Melaina Maceled a team to Seattle to shoot plates and perform lidar scans of parts of the city. We identified the buildings unique to Seattle that would have existed in 2003, so we ensured these buildings were always included in our establishing shots.
    Overgrowth and destruction have significantly influenced the environments in The Last of Us. The environment functions almost as a character in both Season 1 and Season 2. In the last season, the building destruction in Boston was primarily caused by military bombings. During this season, destruction mainly arises from dilapidation. Living in the Pacific Northwest, I understand how damp
    it can get for most of the year. I imagined that, over 20 years, the integrity of the buildings would be compromised by natural forces. This abundant moisture creates an exceptionally lush and vibrant landscape for much of the year. Therefore, when designing Seattle, we ensured that the destruction and overgrowth appeared intentional and aesthetically distinct from those of Boston.
    Fiona Campbell Westgate // Led by Stephen James, DNEG VFX Supervisor, and Melaina Mace, DNEG DFX Supervisor, the team captured photography, drone footage and the Clear Angle team captured LiDAR data over a three-day period in Seattle. It was crucial to include recognizable Seattle landmarks that would resonate with people familiar with the game. 

    The devastated city almost becomes a character in itself this season. What aspects of the visual effects did you have to enhance to increase the immersion of the viewer into this hostile and deteriorated environment?
    Fiona Campbell Westgate // It is indeed a character. Craig wanted it to be deteriorated but to have moments where it’s also beautiful in its devastation. For instance, in the Music Store in Episode 4 where Ellie is playing guitar for Dina, the deteriorated interior provides a beautiful backdrop to this intimate moment. The Set Decorating team dressed a specific section of the set, while VFX extended the destruction and overgrowth to encompass the entire environment, immersing the viewer in strange yet familiar surroundings.
    Photograph by Liane Hentscher/HBO
    The sequence where Ellie navigates a boat through a violent storm is stunning. What were the key challenges in creating this scene, especially with water simulation and the storm’s effects?
    Alex Wang // In the concluding episode of Season 2, Ellie is deep in Seattle, searching for Abby. The episode draws us closer to the Aquarium, where this area of Seattle is heavily flooded. Naturally, this brings challenges with CG water. In the scene where Ellie encounters Isaac and the W.L.F soldiers by the dock, we had a complex shoot involving multiple locations, including a water tank and a boat gimbal. There were also several full CG shots. For Isaac’s riverine boat, which was in a stormy ocean, I felt it was essential that the boat and the actors were given the appropriate motion. Weta FX assisted with tech-vis for all the boat gimbal work. We began with different ocean wave sizes caused by the storm, and once the filmmakers selected one, the boat’s motion in the tech-vis fed the special FX gimbal.
    When Ellie gets into the Jon boat, I didn’t want it on the same gimbal because I felt it would be too mechanical. Ellie’s weight needed to affect the boat as she got in, and that wouldn’t have happened with a mechanical gimbal. So, we opted to have her boat in a water tank for this scene. Special FX had wave makers that provided the boat with the appropriate movement.
    Instead of guessing what the ocean sim for the riverine boat should be, the tech- vis data enabled DNEG to get a head start on the water simulations in post-production. Craig wanted this sequence to appear convincingly dark, much like it looks out on the ocean at night. This allowed us to create dramatic visuals, using lightning strikes at moments to reveal depth.
    Were there any memorable moments or scenes from the series that you found particularly rewarding or challenging to work on from a visual effects standpoint?
    Alex Wang // The Last of Us tells the story of our characters’ journey. If you look at how season 2 begins in Jackson, it differs significantly from how we conclude the season in Seattle. We seldom return to the exact location in each episode, meaning every episode presents a unique challenge. The scope of work this season has been incredibly rewarding. We burned a Bloater, and we also introduced spores this season!
    Photograph by Liane Hentscher/HBO
    Looking back on the project, what aspects of the visual effects are you most proud of?
    Alex Wang // The Jackson Battle was incredibly complex, involving a grueling and lengthy shoot in quite challenging conditions, along with over 600 VFX shots in episode 2. It was truly inspiring to witness the determination of every department and vendor to give their all and create something remarkable.
    Fiona Campbell Westgate // I am immensely proud of the exceptional work accomplished by all of our vendors. During the VFX reviews, I found myself clapping with delight when the final shots were displayed; it was exciting to see remarkable results of the artists’ efforts come to light. 
    How long have you worked on this show?
    Alex Wang // I’ve been on this season for nearly two years.
    Fiona Campbell Westgate // A little over one year; I joined the show in April 2024.
    What’s the VFX shots count?
    Alex Wang // We had just over 2,500 shots this Season.
    Fiona Campbell Westgate // In Season 2, there were a total of 2656 visual effects shots.
    What is your next project?
    Fiona Campbell Westgate // Stay tuned…
    A big thanks for your time.
    WANT TO KNOW MORE?Blackbird: Dedicated page about The Last of Us – Season 2 website.DNEG: Dedicated page about The Last of Us – Season 2 on DNEG website.Important Looking Pirates: Dedicated page about The Last of Us – Season 2 website.RISE: Dedicated page about The Last of Us – Season 2 website.Weta FX: Dedicated page about The Last of Us – Season 2 website.
    © Vincent Frei – The Art of VFX – 2025
    #last #season #alex #wang #production
    The Last of Us – Season 2: Alex Wang (Production VFX Supervisor) & Fiona Campbell Westgate (Production VFX Producer)
    After detailing the VFX work on The Last of Us Season 1 in 2023, Alex Wang returns to reflect on how the scope and complexity have evolved in Season 2. With close to 30 years of experience in the visual effects industry, Fiona Campbell Westgate has contributed to major productions such as Ghost in the Shell, Avatar: The Way of Water, Ant-Man and the Wasp: Quantumania, and Nyad. Her work on Nyad earned her a VES Award for Outstanding Supporting Visual Effects in a Photoreal Feature. Collaboration with Craig Mazin and Neil Druckmann is key to shaping the visual universe of The Last of Us. Can you share with us how you work with them and how they influence the visual direction of the series? Alex Wang // Craig visualizes the shot or scene before putting words on the page. His writing is always exceptionally detailed and descriptive, ultimately helping us to imagine the shot. Of course, no one understands The Last of Us better than Neil, who knows all aspects of the lore very well. He’s done much research and design work with the Naughty Dog team, so he gives us good guidance regarding creature and environment designs. I always try to begin with concept art to get the ball rolling with Craig and Neil’s ideas. This season, we collaborated with Chromatic Studios for concept art. They also contributed to the games, so I felt that continuity was beneficial for our show. Fiona Campbell Westgate // From the outset, it was clear that collaborating with Craig would be an exceptional experience. Early meetings revealed just how personable and invested Craig is. He works closely with every department to ensure that each episode is done to the highest level. Craig places unwavering trust in our VFX Supervisor, Alex Wang. They have an understanding between them that lends to an exceptional partnership. As the VFX Producer, I know how vital the dynamic between the Showrunner and VFX Supervisor is; working with these two has made for one of the best professional experiences of my career.  Photograph by Liane Hentscher/HBO How has your collaboration with Craig evolved between the first and second seasons? Were there any adjustments in the visual approach or narrative techniques you made this season? Alex Wang // Since everything was new in Season 1, we dedicated a lot of time and effort to exploring the show’s visual language, and we all learned a great deal about what worked and what didn’t for the show. In my initial conversations with Craig about Season 2, it was clear that he wanted to expand the show’s scope by utilizing what we established and learned in Season 1. He felt significantly more at ease fully committing to using VFX to help tell the story this season. The first season involved multiple VFX studios to handle the complexity of the effects. How did you divide the work among different studios for the second season? Alex Wang // Most of the vendors this season were also in Season 1, so we already had a shorthand. The VFX Producer, Fiona Campbell Westgate, and I work closely together to decide how to divide the work among our vendors. The type of work needs to be well-suited for the vendor and fit into our budget and schedule. We were extremely fortunate to have the vendors we did this season. I want to take this opportunity to thank Weta FX, DNEG, RISE, Distillery VFX, Storm Studios, Important Looking Pirates, Blackbird, Wylie Co., RVX, and VDK. We also had ILM for concept art and Digital Domain for previs. Fiona Campbell Westgate // Alex Wang and I were very aware of the tight delivery schedule, which added to the challenge of distributing the workload. We planned the work based on the individual studio’s capabilities, and tried not to burden them with back to back episodes wherever possible. Fortunately, there was shorthand with vendors from Season One, who were well-acquainted with the process and the quality of work the show required. The town of Jackson is a key location in The Last of Us. Could you explain how you approached creating and expanding this environment for the second season? Alex Wang // Since Season 1, this show has created incredible sets. However, the Jackson town set build is by far the most impressive in terms of scope. They constructed an 822 ft x 400 ft set in Minaty Bay that resembled a real town! I had early discussions with Production Designer Don MacAulay and his team about where they should concentrate their efforts and where VFX would make the most sense to take over. They focused on developing the town’s main street, where we believed most scenes would occur. There is a big reveal of Jackson in the first episode after Ellie comes out of the barn. Distillery VFX was responsible for the town’s extension, which appears seamless because the team took great pride in researching and ensuring the architecture aligned with the set while staying true to the tone of Jackson, Wyoming. Fiona Campbell Westgate // An impressive set was constructed in Minaty Bay, which served as the foundation for VFX to build upon. There is a beautiful establishing shot of Jackson in Episode 1 that was completed by Distillery, showing a safe and almost normal setting as Season Two starts. Across the episodes, Jackson set extensions were completed by our partners at RISE and Weta. Each had a different phase of Jackson to create, from almost idyllic to a town immersed in Battle.  What challenges did you face filming Jackson on both real and virtual sets? Was there a particular fusion between visual effects and live-action shots to make it feel realistic? Alex Wang // I always advocate for building exterior sets outdoors to take advantage of natural light. However, the drawback is that we cannot control the weather and lighting when filming over several days across two units. In Episode 2, there’s supposed to be a winter storm in Jackson, so maintaining consistency within the episode was essential. On sunny and rainy days, we used cranes to lift large 30x60ft screens to block the sun or rain. It was impossible to shield the entire set from the rain or sun, so we prioritized protecting the actors from sunlight or rain. Thus, you can imagine there was extensive weather cleanup for the episode to ensure consistency within the sequences. Fiona Campbell Westgate // We were fortunate that production built a large scale Jackson set. It provided a base for the full CG Jackson aerial shots and CG Set Extensions. The weather conditions at Minaty Bay presented a challenge during the filming of the end of the Battle sequence in Episode 2. While there were periods of bright sunshine, rainfall occurred during the filming of the end of the Battle sequence in Episode 2. In addition to the obvious visual effects work, it became necessary to replace the ground cover. Photograph by Liane Hentscher/HBO The attack on Jackson by the horde of infected in season 2 is a very intense moment. How did you approach the visual effects for this sequence? What techniques did you use to make the scale of the attack feel as impressive as it did? Alex Wang // We knew this would be a very complex sequence to shoot, and for it to be successful, we needed to start planning with the HODs from the very beginning. We began previs during prep with Weta FX and the episode’s director, Mark Mylod. The previs helped us understand Mark and the showrunner’s vision. This then served as a blueprint for all departments to follow, and in many instances, we filmed the previs. Fiona Campbell Westgate // The sheer size of the CG Infected Horde sets the tone for the scale of the Battle. It’s an intimidating moment when they are revealed through the blowing snow. The addition of CG explosions and atmospheric effects contributed in adding scale to the sequence.  Can you give us an insight into the technical challenges of capturing the infected horde? How much of the effect was done using CGI, and how much was achieved with practical effects? Alex Wang // Starting with a detailed previs that Mark and Craig approved was essential for planning the horde. We understood that we would never have enough stunt performers to fill a horde, nor could they carry out some stunts that would be too dangerous. I reviewed the previs with Stunt Coordinator Marny Eng numerous times to decide the best placements for her team’s stunt performers. We also collaborated with Barrie Gower from the Prosthetics team to determine the most effective allocation of his team’s efforts. Stunt performers positioned closest to the camera would receive the full prosthetic treatment, which can take hours. Weta FX was responsible for the incredible CG Infected horde work in the Jackson Battle. They have been a creative partner with HBO’s The Last of Us since Season 1, so they were brought on early for Season 2. I began discussions with Weta’s VFX supervisor, Nick Epstein, about how we could tackle these complex horde shots very early during the shoot. Typically, repetition in CG crowd scenes can be acceptable, such as armies with soldiers dressed in the same uniform or armour. However, for our Infected horde, Craig wanted to convey that the Infected didn’t come off an assembly line or all shop at the same clothing department store. Any repetition would feel artificial. These Infected were once civilians with families, or they were groups of raiders. We needed complex variations in height, body size, age, clothing, and hair. We built our base library of Infected, and then Nick and the Weta FX team developed a “mix and match” system, allowing the Infected to wear any costume and hair groom. A procedural texturing system was also developed for costumes, providing even greater variation. The most crucial aspect of the Infected horde was their motion. We had numerous shots cutting back-to-back with practical Infected, as well as shots where our CG Infected ran right alongside a stunt horde. It was incredibly unforgiving! Weta FX’s animation supervisor from Season 1, Dennis Yoo, returned for Season 2 to meet the challenge. Having been part of the first season, Dennis understood the expectations of Craig and Neil. Similar to issues of model repetition within a horde, it was relatively easy to perceive repetition, especially if they were running toward the same target. It was essential to enhance the details of their performances with nuances such as tripping and falling, getting back up, and trampling over each other. There also needed to be a difference in the Infected’s running speed. To ensure we had enough complexity within the horde, Dennis motion-captured almost 600 unique motion cycles. We had over a hundred shots in episode 2 that required CG Infected horde. Fiona Campbell Westgate // Nick Epstein, Weta VFX Supervisor, and Dennis Yoo, Weta Animation Supervisor, were faced with having to add hero, close-up Horde that had to integrate with practical Stunt performers. They achieved this through over 60 motion capture sessions and running it through a deformation system they developed. Every detail was applied to allow for a seamless blend with our practical Stunt performances. The Weta team created a custom costume and hair system that provided individual looks to the CG Infected Horde. We were able to avoid the repetitive look of a CG crowd due to these efforts. The movement of the infected horde is crucial for the intensity of the scene. How did you manage the animation and simulation of the infected to ensure smooth and realistic interaction with the environment? Fiona Campbell Westgate // We worked closely with the Stunt department to plan out positioning and where VFX would be adding the CG Horde. Craig Mazin wanted the Infected Horde to move in a way that humans cannot. The deformation system kept the body shape anatomically correct and allowed us to push the limits from how a human physically moves.  The Bloater makes a terrifying return this season. What were the key challenges in designing and animating this creature? How did you work on the Bloater’s interaction with the environment and other characters? Alex Wang // In Season 1, the Kansas City cul-de-sac sequence featured only a handful of Bloater shots. This season, however, nearly forty shots showcase the Bloater in broad daylight during the Battle of Jackson. We needed to redesign the Bloater asset to ensure it looked good in close-up shots from head to toe. Weta FX designed the Bloater for Season 1 and revamped the design for this season. Starting with the Bloater’s silhouette, it had to appear large, intimidating, and menacing. We explored enlarging the cordyceps head shape to make it feel almost like a crown, enhancing the Bloater’s impressive and strong presence. During filming, a stunt double stood in for the Bloater. This was mainly for scale reference and composition. It also helped the Infected stunt performers understand the Bloater’s spatial position, allowing them to avoid running through his space. Once we had an edit, Dennis mocapped the Bloater’s performances with his team. It is always challenging to get the motion right for a creature that weighs 600 pounds. We don’t want the mocap to be overly exaggerated, but it does break the character if the Bloater feels too “light.” The brilliant animation team at Weta FX brought the Bloater character to life and nailed it! When Tommy goes head-to-head with the Bloater, Craig was quite specific during the prep days about how the Bloater would bubble, melt, and burn as Tommy torches him with the flamethrower. Important Looking Pirates took on the “Burning Bloater” sequence, led by VFX Supervisor Philip Engstrom. They began with extensive R&D to ensure the Bloater’s skin would start to bubble and burn. ILP took the final Bloater asset from Weta FX and had to resculpt and texture the asset for the Bloater’s final burn state. Craig felt it was important for the Bloater to appear maimed at the end. The layers of FX were so complex that the R&D continued almost to the end of the delivery schedule. Fiona Campbell Westgate // This season the Bloater had to be bigger, more intimidating. The CG Asset was recreated to withstand the scrutiny of close ups and in daylight. Both Craig Mazin and Neil Druckmann worked closely with us during the process of the build. We referenced the game and applied elements of that version with ours. You’ll notice that his head is in the shape of crown, this is to convey he’s a powerful force.  During the Burning Bloater sequence in Episode 2, we brainstormed with Philip Engström, ILP VFX Supervisor, on how this creature would react to the flamethrower and how it would affect the ground as it burns. When the Bloater finally falls to the ground and dies, the extraordinary detail of the embers burning, fluid draining and melting the surrounding snow really sells that the CG creature was in the terrain.  Given the Bloater’s imposing size, how did you approach its integration into scenes with the actors? What techniques did you use to create such a realistic and menacing appearance? Fiona Campbell Westgate // For the Bloater, a stunt performer wearing a motion capture suit was filmed on set. This provided interaction with the actors and the environment. VFX enhanced the intensity of his movements, incorporating simulations to the CG Bloater’s skin and muscles that would reflect the weight and force as this terrifying creature moves.  Seattle in The Last of Us is a completely devastated city. Can you talk about how you recreated this destruction? What were the most difficult visual aspects to realize for this post-apocalyptic city? Fiona Campbell Westgate // We were meticulous in blending the CG destruction with the practical environment. The flora’s ability to overtake the environment had to be believable, and we adhered to the principle of form follows function. Due to the vastness of the CG devastation it was crucial to avoid repetitive effects. Consequently, our vendors were tasked with creating bespoke designs that evoked a sense of awe and beauty. Was Seattle’s architecture a key element in how you designed the visual effects? How did you adapt the city’s real-life urban landscape to meet the needs of the story while maintaining a coherent aesthetic? Alex Wang // It’s always important to Craig and Neil that we remain true to the cities our characters are in. DNEG was one of our primary vendors for Boston in Season 1, so it was natural for them to return for Season 2, this time focusing on Seattle. DNEG’s VFX Supervisor, Stephen James, who played a crucial role in developing the visual language of Boston for Season 1, also returns for this season. Stephen and Melaina Maceled a team to Seattle to shoot plates and perform lidar scans of parts of the city. We identified the buildings unique to Seattle that would have existed in 2003, so we ensured these buildings were always included in our establishing shots. Overgrowth and destruction have significantly influenced the environments in The Last of Us. The environment functions almost as a character in both Season 1 and Season 2. In the last season, the building destruction in Boston was primarily caused by military bombings. During this season, destruction mainly arises from dilapidation. Living in the Pacific Northwest, I understand how damp it can get for most of the year. I imagined that, over 20 years, the integrity of the buildings would be compromised by natural forces. This abundant moisture creates an exceptionally lush and vibrant landscape for much of the year. Therefore, when designing Seattle, we ensured that the destruction and overgrowth appeared intentional and aesthetically distinct from those of Boston. Fiona Campbell Westgate // Led by Stephen James, DNEG VFX Supervisor, and Melaina Mace, DNEG DFX Supervisor, the team captured photography, drone footage and the Clear Angle team captured LiDAR data over a three-day period in Seattle. It was crucial to include recognizable Seattle landmarks that would resonate with people familiar with the game.  The devastated city almost becomes a character in itself this season. What aspects of the visual effects did you have to enhance to increase the immersion of the viewer into this hostile and deteriorated environment? Fiona Campbell Westgate // It is indeed a character. Craig wanted it to be deteriorated but to have moments where it’s also beautiful in its devastation. For instance, in the Music Store in Episode 4 where Ellie is playing guitar for Dina, the deteriorated interior provides a beautiful backdrop to this intimate moment. The Set Decorating team dressed a specific section of the set, while VFX extended the destruction and overgrowth to encompass the entire environment, immersing the viewer in strange yet familiar surroundings. Photograph by Liane Hentscher/HBO The sequence where Ellie navigates a boat through a violent storm is stunning. What were the key challenges in creating this scene, especially with water simulation and the storm’s effects? Alex Wang // In the concluding episode of Season 2, Ellie is deep in Seattle, searching for Abby. The episode draws us closer to the Aquarium, where this area of Seattle is heavily flooded. Naturally, this brings challenges with CG water. In the scene where Ellie encounters Isaac and the W.L.F soldiers by the dock, we had a complex shoot involving multiple locations, including a water tank and a boat gimbal. There were also several full CG shots. For Isaac’s riverine boat, which was in a stormy ocean, I felt it was essential that the boat and the actors were given the appropriate motion. Weta FX assisted with tech-vis for all the boat gimbal work. We began with different ocean wave sizes caused by the storm, and once the filmmakers selected one, the boat’s motion in the tech-vis fed the special FX gimbal. When Ellie gets into the Jon boat, I didn’t want it on the same gimbal because I felt it would be too mechanical. Ellie’s weight needed to affect the boat as she got in, and that wouldn’t have happened with a mechanical gimbal. So, we opted to have her boat in a water tank for this scene. Special FX had wave makers that provided the boat with the appropriate movement. Instead of guessing what the ocean sim for the riverine boat should be, the tech- vis data enabled DNEG to get a head start on the water simulations in post-production. Craig wanted this sequence to appear convincingly dark, much like it looks out on the ocean at night. This allowed us to create dramatic visuals, using lightning strikes at moments to reveal depth. Were there any memorable moments or scenes from the series that you found particularly rewarding or challenging to work on from a visual effects standpoint? Alex Wang // The Last of Us tells the story of our characters’ journey. If you look at how season 2 begins in Jackson, it differs significantly from how we conclude the season in Seattle. We seldom return to the exact location in each episode, meaning every episode presents a unique challenge. The scope of work this season has been incredibly rewarding. We burned a Bloater, and we also introduced spores this season! Photograph by Liane Hentscher/HBO Looking back on the project, what aspects of the visual effects are you most proud of? Alex Wang // The Jackson Battle was incredibly complex, involving a grueling and lengthy shoot in quite challenging conditions, along with over 600 VFX shots in episode 2. It was truly inspiring to witness the determination of every department and vendor to give their all and create something remarkable. Fiona Campbell Westgate // I am immensely proud of the exceptional work accomplished by all of our vendors. During the VFX reviews, I found myself clapping with delight when the final shots were displayed; it was exciting to see remarkable results of the artists’ efforts come to light.  How long have you worked on this show? Alex Wang // I’ve been on this season for nearly two years. Fiona Campbell Westgate // A little over one year; I joined the show in April 2024. What’s the VFX shots count? Alex Wang // We had just over 2,500 shots this Season. Fiona Campbell Westgate // In Season 2, there were a total of 2656 visual effects shots. What is your next project? Fiona Campbell Westgate // Stay tuned… A big thanks for your time. WANT TO KNOW MORE?Blackbird: Dedicated page about The Last of Us – Season 2 website.DNEG: Dedicated page about The Last of Us – Season 2 on DNEG website.Important Looking Pirates: Dedicated page about The Last of Us – Season 2 website.RISE: Dedicated page about The Last of Us – Season 2 website.Weta FX: Dedicated page about The Last of Us – Season 2 website. © Vincent Frei – The Art of VFX – 2025 #last #season #alex #wang #production
    WWW.ARTOFVFX.COM
    The Last of Us – Season 2: Alex Wang (Production VFX Supervisor) & Fiona Campbell Westgate (Production VFX Producer)
    After detailing the VFX work on The Last of Us Season 1 in 2023, Alex Wang returns to reflect on how the scope and complexity have evolved in Season 2. With close to 30 years of experience in the visual effects industry, Fiona Campbell Westgate has contributed to major productions such as Ghost in the Shell, Avatar: The Way of Water, Ant-Man and the Wasp: Quantumania, and Nyad. Her work on Nyad earned her a VES Award for Outstanding Supporting Visual Effects in a Photoreal Feature. Collaboration with Craig Mazin and Neil Druckmann is key to shaping the visual universe of The Last of Us. Can you share with us how you work with them and how they influence the visual direction of the series? Alex Wang // Craig visualizes the shot or scene before putting words on the page. His writing is always exceptionally detailed and descriptive, ultimately helping us to imagine the shot. Of course, no one understands The Last of Us better than Neil, who knows all aspects of the lore very well. He’s done much research and design work with the Naughty Dog team, so he gives us good guidance regarding creature and environment designs. I always try to begin with concept art to get the ball rolling with Craig and Neil’s ideas. This season, we collaborated with Chromatic Studios for concept art. They also contributed to the games, so I felt that continuity was beneficial for our show. Fiona Campbell Westgate // From the outset, it was clear that collaborating with Craig would be an exceptional experience. Early meetings revealed just how personable and invested Craig is. He works closely with every department to ensure that each episode is done to the highest level. Craig places unwavering trust in our VFX Supervisor, Alex Wang. They have an understanding between them that lends to an exceptional partnership. As the VFX Producer, I know how vital the dynamic between the Showrunner and VFX Supervisor is; working with these two has made for one of the best professional experiences of my career.  Photograph by Liane Hentscher/HBO How has your collaboration with Craig evolved between the first and second seasons? Were there any adjustments in the visual approach or narrative techniques you made this season? Alex Wang // Since everything was new in Season 1, we dedicated a lot of time and effort to exploring the show’s visual language, and we all learned a great deal about what worked and what didn’t for the show. In my initial conversations with Craig about Season 2, it was clear that he wanted to expand the show’s scope by utilizing what we established and learned in Season 1. He felt significantly more at ease fully committing to using VFX to help tell the story this season. The first season involved multiple VFX studios to handle the complexity of the effects. How did you divide the work among different studios for the second season? Alex Wang // Most of the vendors this season were also in Season 1, so we already had a shorthand. The VFX Producer, Fiona Campbell Westgate, and I work closely together to decide how to divide the work among our vendors. The type of work needs to be well-suited for the vendor and fit into our budget and schedule. We were extremely fortunate to have the vendors we did this season. I want to take this opportunity to thank Weta FX, DNEG, RISE, Distillery VFX, Storm Studios, Important Looking Pirates, Blackbird, Wylie Co., RVX, and VDK. We also had ILM for concept art and Digital Domain for previs. Fiona Campbell Westgate // Alex Wang and I were very aware of the tight delivery schedule, which added to the challenge of distributing the workload. We planned the work based on the individual studio’s capabilities, and tried not to burden them with back to back episodes wherever possible. Fortunately, there was shorthand with vendors from Season One, who were well-acquainted with the process and the quality of work the show required. The town of Jackson is a key location in The Last of Us. Could you explain how you approached creating and expanding this environment for the second season? Alex Wang // Since Season 1, this show has created incredible sets. However, the Jackson town set build is by far the most impressive in terms of scope. They constructed an 822 ft x 400 ft set in Minaty Bay that resembled a real town! I had early discussions with Production Designer Don MacAulay and his team about where they should concentrate their efforts and where VFX would make the most sense to take over. They focused on developing the town’s main street, where we believed most scenes would occur. There is a big reveal of Jackson in the first episode after Ellie comes out of the barn. Distillery VFX was responsible for the town’s extension, which appears seamless because the team took great pride in researching and ensuring the architecture aligned with the set while staying true to the tone of Jackson, Wyoming. Fiona Campbell Westgate // An impressive set was constructed in Minaty Bay, which served as the foundation for VFX to build upon. There is a beautiful establishing shot of Jackson in Episode 1 that was completed by Distillery, showing a safe and almost normal setting as Season Two starts. Across the episodes, Jackson set extensions were completed by our partners at RISE and Weta. Each had a different phase of Jackson to create, from almost idyllic to a town immersed in Battle.  What challenges did you face filming Jackson on both real and virtual sets? Was there a particular fusion between visual effects and live-action shots to make it feel realistic? Alex Wang // I always advocate for building exterior sets outdoors to take advantage of natural light. However, the drawback is that we cannot control the weather and lighting when filming over several days across two units. In Episode 2, there’s supposed to be a winter storm in Jackson, so maintaining consistency within the episode was essential. On sunny and rainy days, we used cranes to lift large 30x60ft screens to block the sun or rain. It was impossible to shield the entire set from the rain or sun, so we prioritized protecting the actors from sunlight or rain. Thus, you can imagine there was extensive weather cleanup for the episode to ensure consistency within the sequences. Fiona Campbell Westgate // We were fortunate that production built a large scale Jackson set. It provided a base for the full CG Jackson aerial shots and CG Set Extensions. The weather conditions at Minaty Bay presented a challenge during the filming of the end of the Battle sequence in Episode 2. While there were periods of bright sunshine, rainfall occurred during the filming of the end of the Battle sequence in Episode 2. In addition to the obvious visual effects work, it became necessary to replace the ground cover. Photograph by Liane Hentscher/HBO The attack on Jackson by the horde of infected in season 2 is a very intense moment. How did you approach the visual effects for this sequence? What techniques did you use to make the scale of the attack feel as impressive as it did? Alex Wang // We knew this would be a very complex sequence to shoot, and for it to be successful, we needed to start planning with the HODs from the very beginning. We began previs during prep with Weta FX and the episode’s director, Mark Mylod. The previs helped us understand Mark and the showrunner’s vision. This then served as a blueprint for all departments to follow, and in many instances, we filmed the previs. Fiona Campbell Westgate // The sheer size of the CG Infected Horde sets the tone for the scale of the Battle. It’s an intimidating moment when they are revealed through the blowing snow. The addition of CG explosions and atmospheric effects contributed in adding scale to the sequence.  Can you give us an insight into the technical challenges of capturing the infected horde? How much of the effect was done using CGI, and how much was achieved with practical effects? Alex Wang // Starting with a detailed previs that Mark and Craig approved was essential for planning the horde. We understood that we would never have enough stunt performers to fill a horde, nor could they carry out some stunts that would be too dangerous. I reviewed the previs with Stunt Coordinator Marny Eng numerous times to decide the best placements for her team’s stunt performers. We also collaborated with Barrie Gower from the Prosthetics team to determine the most effective allocation of his team’s efforts. Stunt performers positioned closest to the camera would receive the full prosthetic treatment, which can take hours. Weta FX was responsible for the incredible CG Infected horde work in the Jackson Battle. They have been a creative partner with HBO’s The Last of Us since Season 1, so they were brought on early for Season 2. I began discussions with Weta’s VFX supervisor, Nick Epstein, about how we could tackle these complex horde shots very early during the shoot. Typically, repetition in CG crowd scenes can be acceptable, such as armies with soldiers dressed in the same uniform or armour. However, for our Infected horde, Craig wanted to convey that the Infected didn’t come off an assembly line or all shop at the same clothing department store. Any repetition would feel artificial. These Infected were once civilians with families, or they were groups of raiders. We needed complex variations in height, body size, age, clothing, and hair. We built our base library of Infected, and then Nick and the Weta FX team developed a “mix and match” system, allowing the Infected to wear any costume and hair groom. A procedural texturing system was also developed for costumes, providing even greater variation. The most crucial aspect of the Infected horde was their motion. We had numerous shots cutting back-to-back with practical Infected, as well as shots where our CG Infected ran right alongside a stunt horde. It was incredibly unforgiving! Weta FX’s animation supervisor from Season 1, Dennis Yoo, returned for Season 2 to meet the challenge. Having been part of the first season, Dennis understood the expectations of Craig and Neil. Similar to issues of model repetition within a horde, it was relatively easy to perceive repetition, especially if they were running toward the same target. It was essential to enhance the details of their performances with nuances such as tripping and falling, getting back up, and trampling over each other. There also needed to be a difference in the Infected’s running speed. To ensure we had enough complexity within the horde, Dennis motion-captured almost 600 unique motion cycles. We had over a hundred shots in episode 2 that required CG Infected horde. Fiona Campbell Westgate // Nick Epstein, Weta VFX Supervisor, and Dennis Yoo, Weta Animation Supervisor, were faced with having to add hero, close-up Horde that had to integrate with practical Stunt performers. They achieved this through over 60 motion capture sessions and running it through a deformation system they developed. Every detail was applied to allow for a seamless blend with our practical Stunt performances. The Weta team created a custom costume and hair system that provided individual looks to the CG Infected Horde. We were able to avoid the repetitive look of a CG crowd due to these efforts. The movement of the infected horde is crucial for the intensity of the scene. How did you manage the animation and simulation of the infected to ensure smooth and realistic interaction with the environment? Fiona Campbell Westgate // We worked closely with the Stunt department to plan out positioning and where VFX would be adding the CG Horde. Craig Mazin wanted the Infected Horde to move in a way that humans cannot. The deformation system kept the body shape anatomically correct and allowed us to push the limits from how a human physically moves.  The Bloater makes a terrifying return this season. What were the key challenges in designing and animating this creature? How did you work on the Bloater’s interaction with the environment and other characters? Alex Wang // In Season 1, the Kansas City cul-de-sac sequence featured only a handful of Bloater shots. This season, however, nearly forty shots showcase the Bloater in broad daylight during the Battle of Jackson. We needed to redesign the Bloater asset to ensure it looked good in close-up shots from head to toe. Weta FX designed the Bloater for Season 1 and revamped the design for this season. Starting with the Bloater’s silhouette, it had to appear large, intimidating, and menacing. We explored enlarging the cordyceps head shape to make it feel almost like a crown, enhancing the Bloater’s impressive and strong presence. During filming, a stunt double stood in for the Bloater. This was mainly for scale reference and composition. It also helped the Infected stunt performers understand the Bloater’s spatial position, allowing them to avoid running through his space. Once we had an edit, Dennis mocapped the Bloater’s performances with his team. It is always challenging to get the motion right for a creature that weighs 600 pounds. We don’t want the mocap to be overly exaggerated, but it does break the character if the Bloater feels too “light.” The brilliant animation team at Weta FX brought the Bloater character to life and nailed it! When Tommy goes head-to-head with the Bloater, Craig was quite specific during the prep days about how the Bloater would bubble, melt, and burn as Tommy torches him with the flamethrower. Important Looking Pirates took on the “Burning Bloater” sequence, led by VFX Supervisor Philip Engstrom. They began with extensive R&D to ensure the Bloater’s skin would start to bubble and burn. ILP took the final Bloater asset from Weta FX and had to resculpt and texture the asset for the Bloater’s final burn state. Craig felt it was important for the Bloater to appear maimed at the end. The layers of FX were so complex that the R&D continued almost to the end of the delivery schedule. Fiona Campbell Westgate // This season the Bloater had to be bigger, more intimidating. The CG Asset was recreated to withstand the scrutiny of close ups and in daylight. Both Craig Mazin and Neil Druckmann worked closely with us during the process of the build. We referenced the game and applied elements of that version with ours. You’ll notice that his head is in the shape of crown, this is to convey he’s a powerful force.  During the Burning Bloater sequence in Episode 2, we brainstormed with Philip Engström, ILP VFX Supervisor, on how this creature would react to the flamethrower and how it would affect the ground as it burns. When the Bloater finally falls to the ground and dies, the extraordinary detail of the embers burning, fluid draining and melting the surrounding snow really sells that the CG creature was in the terrain.  Given the Bloater’s imposing size, how did you approach its integration into scenes with the actors? What techniques did you use to create such a realistic and menacing appearance? Fiona Campbell Westgate // For the Bloater, a stunt performer wearing a motion capture suit was filmed on set. This provided interaction with the actors and the environment. VFX enhanced the intensity of his movements, incorporating simulations to the CG Bloater’s skin and muscles that would reflect the weight and force as this terrifying creature moves.  Seattle in The Last of Us is a completely devastated city. Can you talk about how you recreated this destruction? What were the most difficult visual aspects to realize for this post-apocalyptic city? Fiona Campbell Westgate // We were meticulous in blending the CG destruction with the practical environment. The flora’s ability to overtake the environment had to be believable, and we adhered to the principle of form follows function. Due to the vastness of the CG devastation it was crucial to avoid repetitive effects. Consequently, our vendors were tasked with creating bespoke designs that evoked a sense of awe and beauty. Was Seattle’s architecture a key element in how you designed the visual effects? How did you adapt the city’s real-life urban landscape to meet the needs of the story while maintaining a coherent aesthetic? Alex Wang // It’s always important to Craig and Neil that we remain true to the cities our characters are in. DNEG was one of our primary vendors for Boston in Season 1, so it was natural for them to return for Season 2, this time focusing on Seattle. DNEG’s VFX Supervisor, Stephen James, who played a crucial role in developing the visual language of Boston for Season 1, also returns for this season. Stephen and Melaina Mace (DFX Supervisor) led a team to Seattle to shoot plates and perform lidar scans of parts of the city. We identified the buildings unique to Seattle that would have existed in 2003, so we ensured these buildings were always included in our establishing shots. Overgrowth and destruction have significantly influenced the environments in The Last of Us. The environment functions almost as a character in both Season 1 and Season 2. In the last season, the building destruction in Boston was primarily caused by military bombings. During this season, destruction mainly arises from dilapidation. Living in the Pacific Northwest, I understand how damp it can get for most of the year. I imagined that, over 20 years, the integrity of the buildings would be compromised by natural forces. This abundant moisture creates an exceptionally lush and vibrant landscape for much of the year. Therefore, when designing Seattle, we ensured that the destruction and overgrowth appeared intentional and aesthetically distinct from those of Boston. Fiona Campbell Westgate // Led by Stephen James, DNEG VFX Supervisor, and Melaina Mace, DNEG DFX Supervisor, the team captured photography, drone footage and the Clear Angle team captured LiDAR data over a three-day period in Seattle. It was crucial to include recognizable Seattle landmarks that would resonate with people familiar with the game.  The devastated city almost becomes a character in itself this season. What aspects of the visual effects did you have to enhance to increase the immersion of the viewer into this hostile and deteriorated environment? Fiona Campbell Westgate // It is indeed a character. Craig wanted it to be deteriorated but to have moments where it’s also beautiful in its devastation. For instance, in the Music Store in Episode 4 where Ellie is playing guitar for Dina, the deteriorated interior provides a beautiful backdrop to this intimate moment. The Set Decorating team dressed a specific section of the set, while VFX extended the destruction and overgrowth to encompass the entire environment, immersing the viewer in strange yet familiar surroundings. Photograph by Liane Hentscher/HBO The sequence where Ellie navigates a boat through a violent storm is stunning. What were the key challenges in creating this scene, especially with water simulation and the storm’s effects? Alex Wang // In the concluding episode of Season 2, Ellie is deep in Seattle, searching for Abby. The episode draws us closer to the Aquarium, where this area of Seattle is heavily flooded. Naturally, this brings challenges with CG water. In the scene where Ellie encounters Isaac and the W.L.F soldiers by the dock, we had a complex shoot involving multiple locations, including a water tank and a boat gimbal. There were also several full CG shots. For Isaac’s riverine boat, which was in a stormy ocean, I felt it was essential that the boat and the actors were given the appropriate motion. Weta FX assisted with tech-vis for all the boat gimbal work. We began with different ocean wave sizes caused by the storm, and once the filmmakers selected one, the boat’s motion in the tech-vis fed the special FX gimbal. When Ellie gets into the Jon boat, I didn’t want it on the same gimbal because I felt it would be too mechanical. Ellie’s weight needed to affect the boat as she got in, and that wouldn’t have happened with a mechanical gimbal. So, we opted to have her boat in a water tank for this scene. Special FX had wave makers that provided the boat with the appropriate movement. Instead of guessing what the ocean sim for the riverine boat should be, the tech- vis data enabled DNEG to get a head start on the water simulations in post-production. Craig wanted this sequence to appear convincingly dark, much like it looks out on the ocean at night. This allowed us to create dramatic visuals, using lightning strikes at moments to reveal depth. Were there any memorable moments or scenes from the series that you found particularly rewarding or challenging to work on from a visual effects standpoint? Alex Wang // The Last of Us tells the story of our characters’ journey. If you look at how season 2 begins in Jackson, it differs significantly from how we conclude the season in Seattle. We seldom return to the exact location in each episode, meaning every episode presents a unique challenge. The scope of work this season has been incredibly rewarding. We burned a Bloater, and we also introduced spores this season! Photograph by Liane Hentscher/HBO Looking back on the project, what aspects of the visual effects are you most proud of? Alex Wang // The Jackson Battle was incredibly complex, involving a grueling and lengthy shoot in quite challenging conditions, along with over 600 VFX shots in episode 2. It was truly inspiring to witness the determination of every department and vendor to give their all and create something remarkable. Fiona Campbell Westgate // I am immensely proud of the exceptional work accomplished by all of our vendors. During the VFX reviews, I found myself clapping with delight when the final shots were displayed; it was exciting to see remarkable results of the artists’ efforts come to light.  How long have you worked on this show? Alex Wang // I’ve been on this season for nearly two years. Fiona Campbell Westgate // A little over one year; I joined the show in April 2024. What’s the VFX shots count? Alex Wang // We had just over 2,500 shots this Season. Fiona Campbell Westgate // In Season 2, there were a total of 2656 visual effects shots. What is your next project? Fiona Campbell Westgate // Stay tuned… A big thanks for your time. WANT TO KNOW MORE?Blackbird: Dedicated page about The Last of Us – Season 2 website.DNEG: Dedicated page about The Last of Us – Season 2 on DNEG website.Important Looking Pirates: Dedicated page about The Last of Us – Season 2 website.RISE: Dedicated page about The Last of Us – Season 2 website.Weta FX: Dedicated page about The Last of Us – Season 2 website. © Vincent Frei – The Art of VFX – 2025
    Like
    Love
    Wow
    Sad
    Angry
    192
    0 التعليقات 0 المشاركات
  • HOLLYWOOD VFX TOOLS FOR SPACE EXPLORATION

    By CHRIS McGOWAN

    This image of Jupiter from NASA’s James Webb Space Telescope’s NIRCamshows stunning details of the majestic planet in infrared light.Special effects have been used for decades to depict space exploration, from visits to planets and moons to zero gravity and spaceships – one need only think of the landmark 2001: A Space Odyssey. Since that era, visual effects have increasingly grown in realism and importance. VFX have been used for entertainment and for scientific purposes, outreach to the public and astronaut training in virtual reality. Compelling images and videos can bring data to life. NASA’s Scientific Visualization Studioproduces visualizations, animations and images to help scientists tell stories of their research and make science more approachable and engaging.
    A.J. Christensen is a senior visualization designer for the NASA Scientific Visualization Studioat the Goddard Space Flight Center in Greenbelt, Maryland. There, he develops data visualization techniques and designs data-driven imagery for scientific analysis and public outreach using Hollywood visual effects tools, according to NASA. SVS visualizations feature datasets from Earth-and space-based instrumentation, scientific supercomputer models and physical statistical distributions that have been analyzed and processed by computational scientists. Christensen’s specialties include working with 3D volumetric data, using the procedural cinematic software Houdini and science topics in Heliophysics, Geophysics and Astrophysics. He previously worked at the National Center for Supercomputing Applications’ Advanced Visualization Lab where he worked on more than a dozen science documentary full-dome films as well as the IMAX films Hubble 3D and A Beautiful Planet – and he worked at DNEG on the movie Interstellar, which won the 2015 Best Visual Effects Academy Award.

    This global map of CO2 was created by NASA’s Scientific Visualization Studio using a model called GEOS, short for the Goddard Earth Observing System. GEOS is a high-resolution weather reanalysis model, powered by supercomputers, that is used to represent what was happening in the atmosphere.“The NASA Scientific Visualization Studio operates like a small VFX studio that creates animations of scientific data that has been collected or analyzed at NASA. We are one of several groups at NASA that create imagery for public consumption, but we are also a part of the scientific research process, helping scientists understand and share their data through pictures and video.”
    —A.J. Christensen, Senior Visualization Designer, NASA Scientific Visualization StudioAbout his work at NASA SVS, Christensen comments, “The NASA Scientific Visualization Studio operates like a small VFX studio that creates animations of scientific data that has been collected or analyzed at NASA. We are one of several groups at NASA that create imagery for public consumption, but we are also a part of the scientific research process, helping scientists understand and share their data through pictures and video. This past year we were part of NASA’s total eclipse outreach efforts, we participated in all the major earth science and astronomy conferences, we launched a public exhibition at the Smithsonian Museum of Natural History called the Earth Information Center, and we posted hundreds of new visualizations to our publicly accessible website: svs.gsfc.nasa.gov.”

    This is the ‘beauty shot version’ of Perpetual Ocean 2: Western Boundary Currents. The visualization starts with a rotating globe showing ocean currents. The colors used to color the flow in this version were chosen to provide a pleasing look.The Gulf Stream and connected currents.Venus, our nearby “sister” planet, beckons today as a compelling target for exploration that may connect the objects in our own solar system to those discovered around nearby stars.WORKING WITH DATA
    While Christensen is interpreting the data from active spacecraft and making it usable in different forms, such as for science and outreach, he notes, “It’s not just spacecraft that collect data. NASA maintains or monitors instruments on Earth too – on land, in the oceans and in the air. And to be precise, there are robots wandering around Mars that are collecting data, too.”
    He continues, “Sometimes the data comes to our team as raw telescope imagery, sometimes we get it as a data product that a scientist has already analyzed and extracted meaning from, and sometimes various sensor data is used to drive computational models and we work with the models’ resulting output.”

    Jupiter’s moon Europa may have life in a vast ocean beneath its icy surface.HOUDINI AND OTHER TOOLS
    “Data visualization means a lot of different things to different people, but many people on our team interpret it as a form of filmmaking,” Christensen says. “We are very inspired by the approach to visual storytelling that Hollywood uses, and we use tools that are standard for Hollywood VFX. Many professionals in our area – the visualization of 3D scientific data – were previously using other animation tools but have discovered that Houdini is the most capable of understanding and manipulating unusual data, so there has been major movement toward Houdini over the past decade.”

    Satellite imagery from NASA’s Solar Dynamics Observatoryshows the Sun in ultraviolet light colorized in light brown. Seen in ultraviolet light, the dark patches on the Sun are known as coronal holes and are regions where fast solar wind gushes out into space.Christensen explains, “We have always worked with scientific software as well – sometimes there’s only one software tool in existence to interpret a particular kind of scientific data. More often than not, scientific software does not have a GUI, so we’ve had to become proficient at learning new coding environments very quickly. IDL and Python are the generic data manipulation environments we use when something is too complicated or oversized for Houdini, but there are lots of alternatives out there. Typically, we use these tools to get the data into a format that Houdini can interpret, and then we use Houdini to do our shading, lighting and camera design, and seamlessly blend different datasets together.”

    While cruising around Saturn in early October 2004, Cassini captured a series of images that have been composed into this large global natural color view of Saturn and its rings. This grand mosaic consists of 126 images acquired in a tile-like fashion, covering one end of Saturn’s rings to the other and the entire planet in between.The black hole Gargantua and the surrounding accretion disc from the 2014 movie Interstellar.Another visualization of the black hole Gargantua.INTERSTELLAR & GARGANTUA
    Christensen recalls working for DNEG on Interstellar. “When I first started at DNEG, they asked me to work on the giant waves on Miller’s ocean planet. About a week in, my manager took me into the hall and said, ‘I was looking at your reel and saw all this astronomy stuff. We’re working on another sequence with an accretion disk around a black hole that I’m wondering if we should put you on.’ And I said, ‘Oh yeah, I’ve done lots of accretion disks.’ So, for the rest of my time on the show, I was working on the black hole team.”
    He adds, “There are a lot of people in my community that would be hesitant to label any big-budget movie sequence as a scientific visualization. The typical assumption is that for a Hollywood movie, no one cares about accuracy as long as it looks good. Guardians of the Galaxy makes it seem like space is positively littered with nebulae, and Star Wars makes it seem like asteroids travel in herds. But the black hole Gargantua in Interstellar is a good case for being called a visualization. The imagery you see in the movie is the direct result of a collaboration with an expert scientist, Dr. Kip Thorne, working with the DNEG research team using the actual Einstein equations that describe the gravity around a black hole.”

    Thorne is a Nobel Prize-winning theoretical physicist who taught at Caltech for many years. He has reached wide audiences with his books and presentations on black holes, time travel and wormholes on PBS and BBC shows. Christensen comments, “You can make the argument that some of the complexity around what a black hole actually looks like was discarded for the film, and they admit as much in the research paper that was published after the movie came out. But our team at NASA does that same thing. There is no such thing as an objectively ‘true’ scientific image – you always have to make aesthetic decisions around whether the image tells the science story, and often it makes more sense to omit information to clarify what’s important. Ultimately, Gargantua taught a whole lot of people something new about science, and that’s what a good scientific visualization aims to do.”

    The SVS produces an annual visualization of the Moon’s phase and libration comprising 8,760 hourly renderings of its precise size, orientation and illumination.FURTHER CHALLENGES
    The sheer size of the data often encountered by Christensen and his peers is a challenge. “I’m currently working with a dataset that is 400GB per timestep. It’s so big that I don’t even want to move it from one file server to another. So, then I have to make decisions about which data attributes to keep and which to discard, whether there’s a region of the data that I can cull or downsample, and I have to experiment with data compression schemes that might require me to entirely re-design the pipeline I’m using for Houdini. Of course, if I get rid of too much information, it becomes very resource-intensive to recompute everything, but if I don’t get rid of enough, then my design process becomes agonizingly slow.”
    SVS also works closely with its NASA partner groups Conceptual Image Laband Goddard Media Studiosto publish a diverse array of content. Conceptual Image Lab focuses more on the artistic side of things – producing high-fidelity renders using film animation and visual design techniques, according to NASA. Where the SVS primarily focuses on making data-based visualizations, CIL puts more emphasis on conceptual visualizations – producing animations featuring NASA spacecraft, planetary observations and simulations, according to NASA. Goddard Media Studios, on the other hand, is more focused towards public outreach – producing interviews, TV programs and documentaries. GMS continues to be the main producers behind NASA TV, and as such, much of their content is aimed towards the general public.

    An impact crater on the moon.Image of Mars showing a partly shadowed Olympus Mons toward the upper left of the image.Mars. Hellas Basin can be seen in the lower right portion of the image.Mars slightly tilted to show the Martian North Pole.Christensen notes, “One of the more unique challenges in this field is one of bringing people from very different backgrounds to agree on a common outcome. I work on teams with scientists, communicators and technologists, and we all have different communities we’re trying to satisfy. For instance, communicators are generally trying to simplify animations so their learning goal is clear, but scientists will insist that we add text and annotations on top of the video to eliminate ambiguity and avoid misinterpretations. Often, the technologist will have to say we can’t zoom in or look at the data in a certain way because it will show the data boundaries or data resolution limits. Every shot is a negotiation, but in trying to compromise, we often push the boundaries of what has been done before, which is exciting.”
    #hollywood #vfx #tools #space #exploration
    HOLLYWOOD VFX TOOLS FOR SPACE EXPLORATION
    By CHRIS McGOWAN This image of Jupiter from NASA’s James Webb Space Telescope’s NIRCamshows stunning details of the majestic planet in infrared light.Special effects have been used for decades to depict space exploration, from visits to planets and moons to zero gravity and spaceships – one need only think of the landmark 2001: A Space Odyssey. Since that era, visual effects have increasingly grown in realism and importance. VFX have been used for entertainment and for scientific purposes, outreach to the public and astronaut training in virtual reality. Compelling images and videos can bring data to life. NASA’s Scientific Visualization Studioproduces visualizations, animations and images to help scientists tell stories of their research and make science more approachable and engaging. A.J. Christensen is a senior visualization designer for the NASA Scientific Visualization Studioat the Goddard Space Flight Center in Greenbelt, Maryland. There, he develops data visualization techniques and designs data-driven imagery for scientific analysis and public outreach using Hollywood visual effects tools, according to NASA. SVS visualizations feature datasets from Earth-and space-based instrumentation, scientific supercomputer models and physical statistical distributions that have been analyzed and processed by computational scientists. Christensen’s specialties include working with 3D volumetric data, using the procedural cinematic software Houdini and science topics in Heliophysics, Geophysics and Astrophysics. He previously worked at the National Center for Supercomputing Applications’ Advanced Visualization Lab where he worked on more than a dozen science documentary full-dome films as well as the IMAX films Hubble 3D and A Beautiful Planet – and he worked at DNEG on the movie Interstellar, which won the 2015 Best Visual Effects Academy Award. This global map of CO2 was created by NASA’s Scientific Visualization Studio using a model called GEOS, short for the Goddard Earth Observing System. GEOS is a high-resolution weather reanalysis model, powered by supercomputers, that is used to represent what was happening in the atmosphere.“The NASA Scientific Visualization Studio operates like a small VFX studio that creates animations of scientific data that has been collected or analyzed at NASA. We are one of several groups at NASA that create imagery for public consumption, but we are also a part of the scientific research process, helping scientists understand and share their data through pictures and video.” —A.J. Christensen, Senior Visualization Designer, NASA Scientific Visualization StudioAbout his work at NASA SVS, Christensen comments, “The NASA Scientific Visualization Studio operates like a small VFX studio that creates animations of scientific data that has been collected or analyzed at NASA. We are one of several groups at NASA that create imagery for public consumption, but we are also a part of the scientific research process, helping scientists understand and share their data through pictures and video. This past year we were part of NASA’s total eclipse outreach efforts, we participated in all the major earth science and astronomy conferences, we launched a public exhibition at the Smithsonian Museum of Natural History called the Earth Information Center, and we posted hundreds of new visualizations to our publicly accessible website: svs.gsfc.nasa.gov.” This is the ‘beauty shot version’ of Perpetual Ocean 2: Western Boundary Currents. The visualization starts with a rotating globe showing ocean currents. The colors used to color the flow in this version were chosen to provide a pleasing look.The Gulf Stream and connected currents.Venus, our nearby “sister” planet, beckons today as a compelling target for exploration that may connect the objects in our own solar system to those discovered around nearby stars.WORKING WITH DATA While Christensen is interpreting the data from active spacecraft and making it usable in different forms, such as for science and outreach, he notes, “It’s not just spacecraft that collect data. NASA maintains or monitors instruments on Earth too – on land, in the oceans and in the air. And to be precise, there are robots wandering around Mars that are collecting data, too.” He continues, “Sometimes the data comes to our team as raw telescope imagery, sometimes we get it as a data product that a scientist has already analyzed and extracted meaning from, and sometimes various sensor data is used to drive computational models and we work with the models’ resulting output.” Jupiter’s moon Europa may have life in a vast ocean beneath its icy surface.HOUDINI AND OTHER TOOLS “Data visualization means a lot of different things to different people, but many people on our team interpret it as a form of filmmaking,” Christensen says. “We are very inspired by the approach to visual storytelling that Hollywood uses, and we use tools that are standard for Hollywood VFX. Many professionals in our area – the visualization of 3D scientific data – were previously using other animation tools but have discovered that Houdini is the most capable of understanding and manipulating unusual data, so there has been major movement toward Houdini over the past decade.” Satellite imagery from NASA’s Solar Dynamics Observatoryshows the Sun in ultraviolet light colorized in light brown. Seen in ultraviolet light, the dark patches on the Sun are known as coronal holes and are regions where fast solar wind gushes out into space.Christensen explains, “We have always worked with scientific software as well – sometimes there’s only one software tool in existence to interpret a particular kind of scientific data. More often than not, scientific software does not have a GUI, so we’ve had to become proficient at learning new coding environments very quickly. IDL and Python are the generic data manipulation environments we use when something is too complicated or oversized for Houdini, but there are lots of alternatives out there. Typically, we use these tools to get the data into a format that Houdini can interpret, and then we use Houdini to do our shading, lighting and camera design, and seamlessly blend different datasets together.” While cruising around Saturn in early October 2004, Cassini captured a series of images that have been composed into this large global natural color view of Saturn and its rings. This grand mosaic consists of 126 images acquired in a tile-like fashion, covering one end of Saturn’s rings to the other and the entire planet in between.The black hole Gargantua and the surrounding accretion disc from the 2014 movie Interstellar.Another visualization of the black hole Gargantua.INTERSTELLAR & GARGANTUA Christensen recalls working for DNEG on Interstellar. “When I first started at DNEG, they asked me to work on the giant waves on Miller’s ocean planet. About a week in, my manager took me into the hall and said, ‘I was looking at your reel and saw all this astronomy stuff. We’re working on another sequence with an accretion disk around a black hole that I’m wondering if we should put you on.’ And I said, ‘Oh yeah, I’ve done lots of accretion disks.’ So, for the rest of my time on the show, I was working on the black hole team.” He adds, “There are a lot of people in my community that would be hesitant to label any big-budget movie sequence as a scientific visualization. The typical assumption is that for a Hollywood movie, no one cares about accuracy as long as it looks good. Guardians of the Galaxy makes it seem like space is positively littered with nebulae, and Star Wars makes it seem like asteroids travel in herds. But the black hole Gargantua in Interstellar is a good case for being called a visualization. The imagery you see in the movie is the direct result of a collaboration with an expert scientist, Dr. Kip Thorne, working with the DNEG research team using the actual Einstein equations that describe the gravity around a black hole.” Thorne is a Nobel Prize-winning theoretical physicist who taught at Caltech for many years. He has reached wide audiences with his books and presentations on black holes, time travel and wormholes on PBS and BBC shows. Christensen comments, “You can make the argument that some of the complexity around what a black hole actually looks like was discarded for the film, and they admit as much in the research paper that was published after the movie came out. But our team at NASA does that same thing. There is no such thing as an objectively ‘true’ scientific image – you always have to make aesthetic decisions around whether the image tells the science story, and often it makes more sense to omit information to clarify what’s important. Ultimately, Gargantua taught a whole lot of people something new about science, and that’s what a good scientific visualization aims to do.” The SVS produces an annual visualization of the Moon’s phase and libration comprising 8,760 hourly renderings of its precise size, orientation and illumination.FURTHER CHALLENGES The sheer size of the data often encountered by Christensen and his peers is a challenge. “I’m currently working with a dataset that is 400GB per timestep. It’s so big that I don’t even want to move it from one file server to another. So, then I have to make decisions about which data attributes to keep and which to discard, whether there’s a region of the data that I can cull or downsample, and I have to experiment with data compression schemes that might require me to entirely re-design the pipeline I’m using for Houdini. Of course, if I get rid of too much information, it becomes very resource-intensive to recompute everything, but if I don’t get rid of enough, then my design process becomes agonizingly slow.” SVS also works closely with its NASA partner groups Conceptual Image Laband Goddard Media Studiosto publish a diverse array of content. Conceptual Image Lab focuses more on the artistic side of things – producing high-fidelity renders using film animation and visual design techniques, according to NASA. Where the SVS primarily focuses on making data-based visualizations, CIL puts more emphasis on conceptual visualizations – producing animations featuring NASA spacecraft, planetary observations and simulations, according to NASA. Goddard Media Studios, on the other hand, is more focused towards public outreach – producing interviews, TV programs and documentaries. GMS continues to be the main producers behind NASA TV, and as such, much of their content is aimed towards the general public. An impact crater on the moon.Image of Mars showing a partly shadowed Olympus Mons toward the upper left of the image.Mars. Hellas Basin can be seen in the lower right portion of the image.Mars slightly tilted to show the Martian North Pole.Christensen notes, “One of the more unique challenges in this field is one of bringing people from very different backgrounds to agree on a common outcome. I work on teams with scientists, communicators and technologists, and we all have different communities we’re trying to satisfy. For instance, communicators are generally trying to simplify animations so their learning goal is clear, but scientists will insist that we add text and annotations on top of the video to eliminate ambiguity and avoid misinterpretations. Often, the technologist will have to say we can’t zoom in or look at the data in a certain way because it will show the data boundaries or data resolution limits. Every shot is a negotiation, but in trying to compromise, we often push the boundaries of what has been done before, which is exciting.” #hollywood #vfx #tools #space #exploration
    WWW.VFXVOICE.COM
    HOLLYWOOD VFX TOOLS FOR SPACE EXPLORATION
    By CHRIS McGOWAN This image of Jupiter from NASA’s James Webb Space Telescope’s NIRCam (Near-Infrared Camera) shows stunning details of the majestic planet in infrared light. (Image courtesy of NASA, ESA and CSA) Special effects have been used for decades to depict space exploration, from visits to planets and moons to zero gravity and spaceships – one need only think of the landmark 2001: A Space Odyssey (1968). Since that era, visual effects have increasingly grown in realism and importance. VFX have been used for entertainment and for scientific purposes, outreach to the public and astronaut training in virtual reality. Compelling images and videos can bring data to life. NASA’s Scientific Visualization Studio (SVS) produces visualizations, animations and images to help scientists tell stories of their research and make science more approachable and engaging. A.J. Christensen is a senior visualization designer for the NASA Scientific Visualization Studio (SVS) at the Goddard Space Flight Center in Greenbelt, Maryland. There, he develops data visualization techniques and designs data-driven imagery for scientific analysis and public outreach using Hollywood visual effects tools, according to NASA. SVS visualizations feature datasets from Earth-and space-based instrumentation, scientific supercomputer models and physical statistical distributions that have been analyzed and processed by computational scientists. Christensen’s specialties include working with 3D volumetric data, using the procedural cinematic software Houdini and science topics in Heliophysics, Geophysics and Astrophysics. He previously worked at the National Center for Supercomputing Applications’ Advanced Visualization Lab where he worked on more than a dozen science documentary full-dome films as well as the IMAX films Hubble 3D and A Beautiful Planet – and he worked at DNEG on the movie Interstellar, which won the 2015 Best Visual Effects Academy Award. This global map of CO2 was created by NASA’s Scientific Visualization Studio using a model called GEOS, short for the Goddard Earth Observing System. GEOS is a high-resolution weather reanalysis model, powered by supercomputers, that is used to represent what was happening in the atmosphere. (Image courtesy of NASA/Goddard Space Flight Center Scientific Visualization Studio) “The NASA Scientific Visualization Studio operates like a small VFX studio that creates animations of scientific data that has been collected or analyzed at NASA. We are one of several groups at NASA that create imagery for public consumption, but we are also a part of the scientific research process, helping scientists understand and share their data through pictures and video.” —A.J. Christensen, Senior Visualization Designer, NASA Scientific Visualization Studio (SVS) About his work at NASA SVS, Christensen comments, “The NASA Scientific Visualization Studio operates like a small VFX studio that creates animations of scientific data that has been collected or analyzed at NASA. We are one of several groups at NASA that create imagery for public consumption, but we are also a part of the scientific research process, helping scientists understand and share their data through pictures and video. This past year we were part of NASA’s total eclipse outreach efforts, we participated in all the major earth science and astronomy conferences, we launched a public exhibition at the Smithsonian Museum of Natural History called the Earth Information Center, and we posted hundreds of new visualizations to our publicly accessible website: svs.gsfc.nasa.gov.” This is the ‘beauty shot version’ of Perpetual Ocean 2: Western Boundary Currents. The visualization starts with a rotating globe showing ocean currents. The colors used to color the flow in this version were chosen to provide a pleasing look. (Image courtesy of NASA/Goddard Space Flight Center Scientific Visualization Studio) The Gulf Stream and connected currents. (Image courtesy of NASA/Goddard Space Flight Center Scientific Visualization Studio) Venus, our nearby “sister” planet, beckons today as a compelling target for exploration that may connect the objects in our own solar system to those discovered around nearby stars. (Image courtesy of NASA’s Goddard Space Flight Center) WORKING WITH DATA While Christensen is interpreting the data from active spacecraft and making it usable in different forms, such as for science and outreach, he notes, “It’s not just spacecraft that collect data. NASA maintains or monitors instruments on Earth too – on land, in the oceans and in the air. And to be precise, there are robots wandering around Mars that are collecting data, too.” He continues, “Sometimes the data comes to our team as raw telescope imagery, sometimes we get it as a data product that a scientist has already analyzed and extracted meaning from, and sometimes various sensor data is used to drive computational models and we work with the models’ resulting output.” Jupiter’s moon Europa may have life in a vast ocean beneath its icy surface. (Image courtesy of NASA/Goddard Space Flight Center Scientific Visualization Studio) HOUDINI AND OTHER TOOLS “Data visualization means a lot of different things to different people, but many people on our team interpret it as a form of filmmaking,” Christensen says. “We are very inspired by the approach to visual storytelling that Hollywood uses, and we use tools that are standard for Hollywood VFX. Many professionals in our area – the visualization of 3D scientific data – were previously using other animation tools but have discovered that Houdini is the most capable of understanding and manipulating unusual data, so there has been major movement toward Houdini over the past decade.” Satellite imagery from NASA’s Solar Dynamics Observatory (SDO) shows the Sun in ultraviolet light colorized in light brown. Seen in ultraviolet light, the dark patches on the Sun are known as coronal holes and are regions where fast solar wind gushes out into space. (Image courtesy of NASA/Goddard Space Flight Center Scientific Visualization Studio) Christensen explains, “We have always worked with scientific software as well – sometimes there’s only one software tool in existence to interpret a particular kind of scientific data. More often than not, scientific software does not have a GUI, so we’ve had to become proficient at learning new coding environments very quickly. IDL and Python are the generic data manipulation environments we use when something is too complicated or oversized for Houdini, but there are lots of alternatives out there. Typically, we use these tools to get the data into a format that Houdini can interpret, and then we use Houdini to do our shading, lighting and camera design, and seamlessly blend different datasets together.” While cruising around Saturn in early October 2004, Cassini captured a series of images that have been composed into this large global natural color view of Saturn and its rings. This grand mosaic consists of 126 images acquired in a tile-like fashion, covering one end of Saturn’s rings to the other and the entire planet in between. (Image courtesy of ASA/JPL/Space Science Institute) The black hole Gargantua and the surrounding accretion disc from the 2014 movie Interstellar. (Image courtesy of DNEG and Paramount Pictures) Another visualization of the black hole Gargantua. (Image courtesy of DNEG and Paramount Pictures) INTERSTELLAR & GARGANTUA Christensen recalls working for DNEG on Interstellar (2014). “When I first started at DNEG, they asked me to work on the giant waves on Miller’s ocean planet [in the film]. About a week in, my manager took me into the hall and said, ‘I was looking at your reel and saw all this astronomy stuff. We’re working on another sequence with an accretion disk around a black hole that I’m wondering if we should put you on.’ And I said, ‘Oh yeah, I’ve done lots of accretion disks.’ So, for the rest of my time on the show, I was working on the black hole team.” He adds, “There are a lot of people in my community that would be hesitant to label any big-budget movie sequence as a scientific visualization. The typical assumption is that for a Hollywood movie, no one cares about accuracy as long as it looks good. Guardians of the Galaxy makes it seem like space is positively littered with nebulae, and Star Wars makes it seem like asteroids travel in herds. But the black hole Gargantua in Interstellar is a good case for being called a visualization. The imagery you see in the movie is the direct result of a collaboration with an expert scientist, Dr. Kip Thorne, working with the DNEG research team using the actual Einstein equations that describe the gravity around a black hole.” Thorne is a Nobel Prize-winning theoretical physicist who taught at Caltech for many years. He has reached wide audiences with his books and presentations on black holes, time travel and wormholes on PBS and BBC shows. Christensen comments, “You can make the argument that some of the complexity around what a black hole actually looks like was discarded for the film, and they admit as much in the research paper that was published after the movie came out. But our team at NASA does that same thing. There is no such thing as an objectively ‘true’ scientific image – you always have to make aesthetic decisions around whether the image tells the science story, and often it makes more sense to omit information to clarify what’s important. Ultimately, Gargantua taught a whole lot of people something new about science, and that’s what a good scientific visualization aims to do.” The SVS produces an annual visualization of the Moon’s phase and libration comprising 8,760 hourly renderings of its precise size, orientation and illumination. (Image courtesy of NASA/Goddard Space Flight Center Scientific Visualization Studio) FURTHER CHALLENGES The sheer size of the data often encountered by Christensen and his peers is a challenge. “I’m currently working with a dataset that is 400GB per timestep. It’s so big that I don’t even want to move it from one file server to another. So, then I have to make decisions about which data attributes to keep and which to discard, whether there’s a region of the data that I can cull or downsample, and I have to experiment with data compression schemes that might require me to entirely re-design the pipeline I’m using for Houdini. Of course, if I get rid of too much information, it becomes very resource-intensive to recompute everything, but if I don’t get rid of enough, then my design process becomes agonizingly slow.” SVS also works closely with its NASA partner groups Conceptual Image Lab (CIL) and Goddard Media Studios (GMS) to publish a diverse array of content. Conceptual Image Lab focuses more on the artistic side of things – producing high-fidelity renders using film animation and visual design techniques, according to NASA. Where the SVS primarily focuses on making data-based visualizations, CIL puts more emphasis on conceptual visualizations – producing animations featuring NASA spacecraft, planetary observations and simulations, according to NASA. Goddard Media Studios, on the other hand, is more focused towards public outreach – producing interviews, TV programs and documentaries. GMS continues to be the main producers behind NASA TV, and as such, much of their content is aimed towards the general public. An impact crater on the moon. (Image courtesy of NASA/Goddard Space Flight Center Scientific Visualization Studio) Image of Mars showing a partly shadowed Olympus Mons toward the upper left of the image. (Image courtesy of NASA/Goddard Space Flight Center Scientific Visualization Studio) Mars. Hellas Basin can be seen in the lower right portion of the image. (Image courtesy of NASA/Goddard Space Flight Center Scientific Visualization Studio) Mars slightly tilted to show the Martian North Pole. (Image courtesy of NASA/Goddard Space Flight Center Scientific Visualization Studio) Christensen notes, “One of the more unique challenges in this field is one of bringing people from very different backgrounds to agree on a common outcome. I work on teams with scientists, communicators and technologists, and we all have different communities we’re trying to satisfy. For instance, communicators are generally trying to simplify animations so their learning goal is clear, but scientists will insist that we add text and annotations on top of the video to eliminate ambiguity and avoid misinterpretations. Often, the technologist will have to say we can’t zoom in or look at the data in a certain way because it will show the data boundaries or data resolution limits. Every shot is a negotiation, but in trying to compromise, we often push the boundaries of what has been done before, which is exciting.”
    Like
    Love
    Wow
    Angry
    Sad
    144
    0 التعليقات 0 المشاركات
  • Classics – The Art Of Game Of Thrones by Marc Simonetti

    Marc Simonetti is a French concept artist and illustrator. Best known for his work on GRR Martin’s books “A Song of Ice and Fire”, and his Iron Throne, he’s also illustrated some of the most well-known fantasy and SciFi novels, such as The Discworld by Terry Pratchett, The Royal Assassin trilogy by Robin Hobb, Terry Goodkind’s “Sword of truth”, or Dune Saga by Frank Herbert.He’s also worked for many video game companies such as Activision, Ubisoft, Magic The Gathering, EA, Square Enix, and King Isle Entertainment. He has just released an art book, “Coverama,” and is currently working on several projects, including long feature films and concept art for video games, as a freelancer.His most recent work as a concept artist, which includes creating visual development and staging dramatic lighting and designs, is featured in Aladdin, Maleficent 2, Aquaman 2, and the upcoming Transformers Movie, Rise of the Beasts, among many others. He also serves as the Art Director at DNEG, one of the world’s leading visual effects and animation studios.
    #classics #art #game #thrones #marc
    Classics – The Art Of Game Of Thrones by Marc Simonetti
    Marc Simonetti is a French concept artist and illustrator. Best known for his work on GRR Martin’s books “A Song of Ice and Fire”, and his Iron Throne, he’s also illustrated some of the most well-known fantasy and SciFi novels, such as The Discworld by Terry Pratchett, The Royal Assassin trilogy by Robin Hobb, Terry Goodkind’s “Sword of truth”, or Dune Saga by Frank Herbert.He’s also worked for many video game companies such as Activision, Ubisoft, Magic The Gathering, EA, Square Enix, and King Isle Entertainment. He has just released an art book, “Coverama,” and is currently working on several projects, including long feature films and concept art for video games, as a freelancer.His most recent work as a concept artist, which includes creating visual development and staging dramatic lighting and designs, is featured in Aladdin, Maleficent 2, Aquaman 2, and the upcoming Transformers Movie, Rise of the Beasts, among many others. He also serves as the Art Director at DNEG, one of the world’s leading visual effects and animation studios. #classics #art #game #thrones #marc
    WWW.IAMAG.CO
    Classics – The Art Of Game Of Thrones by Marc Simonetti
    Marc Simonetti is a French concept artist and illustrator. Best known for his work on GRR Martin’s books “A Song of Ice and Fire”, and his Iron Throne, he’s also illustrated some of the most well-known fantasy and SciFi novels, such as The Discworld by Terry Pratchett, The Royal Assassin trilogy by Robin Hobb, Terry Goodkind’s “Sword of truth”, or Dune Saga by Frank Herbert.He’s also worked for many video game companies such as Activision, Ubisoft, Magic The Gathering, EA, Square Enix, and King Isle Entertainment. He has just released an art book, “Coverama,” and is currently working on several projects, including long feature films and concept art for video games, as a freelancer.His most recent work as a concept artist, which includes creating visual development and staging dramatic lighting and designs, is featured in Aladdin, Maleficent 2, Aquaman 2, and the upcoming Transformers Movie, Rise of the Beasts, among many others. He also serves as the Art Director at DNEG, one of the world’s leading visual effects and animation studios.
    11 التعليقات 0 المشاركات
  • Mickey 17: Stuart Penn – VFX Supervisor – Framestore

    Interviews

    Mickey 17: Stuart Penn – VFX Supervisor – Framestore

    By Vincent Frei - 27/05/2025

    When we last spoke with Stuart Penn in 2019, he walked us through Framestore’s work on Avengers: Endgame. He has since added The Aeronauts, Moon Knight, 1899, and Flite to his impressive list of credits.
    How did you get involved on this show?
    Soon after we had been awarded work, Director Bong visited our London Studio in May 2022 to meet us and share his vision with us.

    How was the sequences made by Framestore?
    Framestore was responsible for the development of the Baby and Mama Creepers. We worked on the shots of the Baby Creepers within the ship, and the Creepers in the caves and the ice crevasse. We developed the ice cave and crevasse environments, including a full-CG shot of Mickey falling into the crevasse.
    Within the ship we were also responsible for the cycler room with its lava pit, the human printer, a range of set extensions, Marshall’s beautiful rock and—one of my personal favourites—Pigeon Man’s spinning eyes. We also crafted the spacewalk sequence. All the work came out of our London and Mumbai studios.

    Bong Joon Ho has a very distinct visual storytelling style. How did you collaborate with him to ensure the VFX aligned with his vision, and were there any unexpected creative challenges that pushed the team in new directions?
    Director Bong was fun to work with, very collaborative and had a very clear vision of where the film was going. We had discussions before and during the shoot. While we were shooting, Director Bong would talk to us about the backstory of what the Creepers might be thinking that went beyond the scope of what we would see in the movie. This really helped with giving the creatures character.

    Can you walk us through the design and animation process for the baby and mother creepers? What references or inspirations helped shape their look and movement?
    Director Bong had been working with his creature designer, Heechul Jang, for many months before production started. We had kickoffs with Director Bong and Heechul that provided us with some of the best and most thought out concepts I think we’ve ever received. Director Bong set us the challenge of bringing them to life. We took the lead on the Baby and Mama Creepers and DNEG took on the Juniors.
    It’s fun to note that the energy and inquisitive nature of the Babies was inspired by reference footage of puppies.

    Were these creatures primarily CG, or was there any practical element involved? How did you ensure their integration into the live-action footage?
    They were all CG in the final film. On set we had a range of stuffies and mockups for actors to interact with and for lighting reference. People became quite attached to the baby creeper stuffies! For the Mama there was a head and large frame that was controlled by a team of puppeteers for eyeline and lighting reference.

    The ice cave has a very distinct visual style. How did you achieve the look of the ice, and what techniques were used to create the lighting and atmospheric effects inside the cave?
    I was sent to Iceland for a week to gather reference. I visited a range of ice cave locations—driving, hiking and being dropped by helicopter at various locations across a glacier. This reference provided the basis for the look of the caves. The ice was rendered fully refractive with interior volumes to create the structures. As it’s so computationally expensive to render we used tricks where we could reproject a limited number of fully rendered frames. This worked best on lock offs or small camera moves—others we just had to render full length.

    How were the scenes featuring multiple Mickeys filmed? Did you rely mostly on motion control, digital doubles, or a combination of techniques to seamlessly integrate the clones into the shots?
    For our shots it was mostly multiple plates relying on the skill of camera operators to match the framing and move and the comp work to either split frames or lift one of the Mickeys from a plate and replace the stand-in.

    Since Mickey’s clones are central to the story, what were the biggest VFX challenges in making them interact convincingly? Were there any specific techniques used to differentiate them visually or subtly show their progression over time?
    This really all came down to Robert Pattinson’s performances. He would usually be acting with his double for interaction and lighting. They would then switch positions and redo the performance. Robs could switch between the Mickey 17 and 18 characters with the assistance of quick hair and makeup changes.
    The prison environment seems to have a unique aesthetic and mood. How much of it was built practically, and how did VFX contribute to enhancing or extending the set?
    The foreground cells and storage containers were practical and everything beyond the fence was CG with a DMP overlay. The containers going off into the distance were carefully positioned and lit to enable you to feel the vast scale of the ship. We also replaced the fence in most shots with CG as it was easier than rotoing through the chain links.
    When Mickey is outside the ship, exposed to radiation, there are several extreme body effects, including his hand coming off. Can you discuss the challenges of creating these sequences, particularly in terms of digital prosthetics and damage simulations?
    Knocking Mickey’s hand off was quite straight forward due the speed of the impact. We started with a plate of the practical arm and glove and switch to a pre-sculpted CG glove and arm stump. The hand spinning off into the distance was hand animated to allow us to fully art direct the spin and trajectory. The final touch was to add and FX sim for the blood droplets.
    How did you balance realism and stylization in depicting the effects of radiation exposure? Were there real-world references or scientific studies that guided the look of the damage?
    Most of the radiation effects came from great make up and prosthetics—we just added some final touches such as an FX sim for a bursting blister. We tried a few different simulations based on work we had none on previous shows but ultimately dialed it back to something more subtle so it didn’t distract from the moment.

    Were there any memorable moments or scenes from the film that you found particularly rewarding or challenging to work on from a visual effects standpoint?
    There were a lot of quite diverse challenges. From creature work, environments, lava to a lot of ‘one off’ effects. The shot where the Creepers are pushing Mickey out into the snow was particularly challenging, with so many Creepers interacting with each other and Mickey, it took the combination of several animators and compositors to bring it together and integrate with the partial CG Mickey.

    Looking back on the project, what aspects of the visual effects are you most proud of?
    The baby creeper and the Ice cave environment.
    How long have you worked on this show?
    I worked on it for about 18 months
    What’s the VFX shots count?
    Framestore worked on 405 shots.
    A big thanks for your time.
    WANT TO KNOW MORE?Framestore: Dedicated page about Mickey 17 on Framestore website.
    © Vincent Frei – The Art of VFX – 2025
    #mickey #stuart #penn #vfx #supervisor
    Mickey 17: Stuart Penn – VFX Supervisor – Framestore
    Interviews Mickey 17: Stuart Penn – VFX Supervisor – Framestore By Vincent Frei - 27/05/2025 When we last spoke with Stuart Penn in 2019, he walked us through Framestore’s work on Avengers: Endgame. He has since added The Aeronauts, Moon Knight, 1899, and Flite to his impressive list of credits. How did you get involved on this show? Soon after we had been awarded work, Director Bong visited our London Studio in May 2022 to meet us and share his vision with us. How was the sequences made by Framestore? Framestore was responsible for the development of the Baby and Mama Creepers. We worked on the shots of the Baby Creepers within the ship, and the Creepers in the caves and the ice crevasse. We developed the ice cave and crevasse environments, including a full-CG shot of Mickey falling into the crevasse. Within the ship we were also responsible for the cycler room with its lava pit, the human printer, a range of set extensions, Marshall’s beautiful rock and—one of my personal favourites—Pigeon Man’s spinning eyes. We also crafted the spacewalk sequence. All the work came out of our London and Mumbai studios. Bong Joon Ho has a very distinct visual storytelling style. How did you collaborate with him to ensure the VFX aligned with his vision, and were there any unexpected creative challenges that pushed the team in new directions? Director Bong was fun to work with, very collaborative and had a very clear vision of where the film was going. We had discussions before and during the shoot. While we were shooting, Director Bong would talk to us about the backstory of what the Creepers might be thinking that went beyond the scope of what we would see in the movie. This really helped with giving the creatures character. Can you walk us through the design and animation process for the baby and mother creepers? What references or inspirations helped shape their look and movement? Director Bong had been working with his creature designer, Heechul Jang, for many months before production started. We had kickoffs with Director Bong and Heechul that provided us with some of the best and most thought out concepts I think we’ve ever received. Director Bong set us the challenge of bringing them to life. We took the lead on the Baby and Mama Creepers and DNEG took on the Juniors. It’s fun to note that the energy and inquisitive nature of the Babies was inspired by reference footage of puppies. Were these creatures primarily CG, or was there any practical element involved? How did you ensure their integration into the live-action footage? They were all CG in the final film. On set we had a range of stuffies and mockups for actors to interact with and for lighting reference. People became quite attached to the baby creeper stuffies! For the Mama there was a head and large frame that was controlled by a team of puppeteers for eyeline and lighting reference. The ice cave has a very distinct visual style. How did you achieve the look of the ice, and what techniques were used to create the lighting and atmospheric effects inside the cave? I was sent to Iceland for a week to gather reference. I visited a range of ice cave locations—driving, hiking and being dropped by helicopter at various locations across a glacier. This reference provided the basis for the look of the caves. The ice was rendered fully refractive with interior volumes to create the structures. As it’s so computationally expensive to render we used tricks where we could reproject a limited number of fully rendered frames. This worked best on lock offs or small camera moves—others we just had to render full length. How were the scenes featuring multiple Mickeys filmed? Did you rely mostly on motion control, digital doubles, or a combination of techniques to seamlessly integrate the clones into the shots? For our shots it was mostly multiple plates relying on the skill of camera operators to match the framing and move and the comp work to either split frames or lift one of the Mickeys from a plate and replace the stand-in. Since Mickey’s clones are central to the story, what were the biggest VFX challenges in making them interact convincingly? Were there any specific techniques used to differentiate them visually or subtly show their progression over time? This really all came down to Robert Pattinson’s performances. He would usually be acting with his double for interaction and lighting. They would then switch positions and redo the performance. Robs could switch between the Mickey 17 and 18 characters with the assistance of quick hair and makeup changes. The prison environment seems to have a unique aesthetic and mood. How much of it was built practically, and how did VFX contribute to enhancing or extending the set? The foreground cells and storage containers were practical and everything beyond the fence was CG with a DMP overlay. The containers going off into the distance were carefully positioned and lit to enable you to feel the vast scale of the ship. We also replaced the fence in most shots with CG as it was easier than rotoing through the chain links. When Mickey is outside the ship, exposed to radiation, there are several extreme body effects, including his hand coming off. Can you discuss the challenges of creating these sequences, particularly in terms of digital prosthetics and damage simulations? Knocking Mickey’s hand off was quite straight forward due the speed of the impact. We started with a plate of the practical arm and glove and switch to a pre-sculpted CG glove and arm stump. The hand spinning off into the distance was hand animated to allow us to fully art direct the spin and trajectory. The final touch was to add and FX sim for the blood droplets. How did you balance realism and stylization in depicting the effects of radiation exposure? Were there real-world references or scientific studies that guided the look of the damage? Most of the radiation effects came from great make up and prosthetics—we just added some final touches such as an FX sim for a bursting blister. We tried a few different simulations based on work we had none on previous shows but ultimately dialed it back to something more subtle so it didn’t distract from the moment. Were there any memorable moments or scenes from the film that you found particularly rewarding or challenging to work on from a visual effects standpoint? There were a lot of quite diverse challenges. From creature work, environments, lava to a lot of ‘one off’ effects. The shot where the Creepers are pushing Mickey out into the snow was particularly challenging, with so many Creepers interacting with each other and Mickey, it took the combination of several animators and compositors to bring it together and integrate with the partial CG Mickey. Looking back on the project, what aspects of the visual effects are you most proud of? The baby creeper and the Ice cave environment. How long have you worked on this show? I worked on it for about 18 months What’s the VFX shots count? Framestore worked on 405 shots. A big thanks for your time. WANT TO KNOW MORE?Framestore: Dedicated page about Mickey 17 on Framestore website. © Vincent Frei – The Art of VFX – 2025 #mickey #stuart #penn #vfx #supervisor
    WWW.ARTOFVFX.COM
    Mickey 17: Stuart Penn – VFX Supervisor – Framestore
    Interviews Mickey 17: Stuart Penn – VFX Supervisor – Framestore By Vincent Frei - 27/05/2025 When we last spoke with Stuart Penn in 2019, he walked us through Framestore’s work on Avengers: Endgame. He has since added The Aeronauts, Moon Knight, 1899, and Flite to his impressive list of credits. How did you get involved on this show? Soon after we had been awarded work, Director Bong visited our London Studio in May 2022 to meet us and share his vision with us. How was the sequences made by Framestore? Framestore was responsible for the development of the Baby and Mama Creepers. We worked on the shots of the Baby Creepers within the ship, and the Creepers in the caves and the ice crevasse. We developed the ice cave and crevasse environments, including a full-CG shot of Mickey falling into the crevasse. Within the ship we were also responsible for the cycler room with its lava pit, the human printer, a range of set extensions, Marshall’s beautiful rock and—one of my personal favourites—Pigeon Man’s spinning eyes. We also crafted the spacewalk sequence. All the work came out of our London and Mumbai studios. Bong Joon Ho has a very distinct visual storytelling style. How did you collaborate with him to ensure the VFX aligned with his vision, and were there any unexpected creative challenges that pushed the team in new directions? Director Bong was fun to work with, very collaborative and had a very clear vision of where the film was going. We had discussions before and during the shoot. While we were shooting, Director Bong would talk to us about the backstory of what the Creepers might be thinking that went beyond the scope of what we would see in the movie. This really helped with giving the creatures character. Can you walk us through the design and animation process for the baby and mother creepers? What references or inspirations helped shape their look and movement? Director Bong had been working with his creature designer, Heechul Jang, for many months before production started. We had kickoffs with Director Bong and Heechul that provided us with some of the best and most thought out concepts I think we’ve ever received. Director Bong set us the challenge of bringing them to life. We took the lead on the Baby and Mama Creepers and DNEG took on the Juniors. It’s fun to note that the energy and inquisitive nature of the Babies was inspired by reference footage of puppies. Were these creatures primarily CG, or was there any practical element involved? How did you ensure their integration into the live-action footage? They were all CG in the final film. On set we had a range of stuffies and mockups for actors to interact with and for lighting reference. People became quite attached to the baby creeper stuffies! For the Mama there was a head and large frame that was controlled by a team of puppeteers for eyeline and lighting reference. The ice cave has a very distinct visual style. How did you achieve the look of the ice, and what techniques were used to create the lighting and atmospheric effects inside the cave? I was sent to Iceland for a week to gather reference. I visited a range of ice cave locations—driving, hiking and being dropped by helicopter at various locations across a glacier. This reference provided the basis for the look of the caves. The ice was rendered fully refractive with interior volumes to create the structures. As it’s so computationally expensive to render we used tricks where we could reproject a limited number of fully rendered frames. This worked best on lock offs or small camera moves—others we just had to render full length. How were the scenes featuring multiple Mickeys filmed? Did you rely mostly on motion control, digital doubles, or a combination of techniques to seamlessly integrate the clones into the shots? For our shots it was mostly multiple plates relying on the skill of camera operators to match the framing and move and the comp work to either split frames or lift one of the Mickeys from a plate and replace the stand-in. Since Mickey’s clones are central to the story, what were the biggest VFX challenges in making them interact convincingly? Were there any specific techniques used to differentiate them visually or subtly show their progression over time? This really all came down to Robert Pattinson’s performances. He would usually be acting with his double for interaction and lighting. They would then switch positions and redo the performance. Robs could switch between the Mickey 17 and 18 characters with the assistance of quick hair and makeup changes. The prison environment seems to have a unique aesthetic and mood. How much of it was built practically, and how did VFX contribute to enhancing or extending the set? The foreground cells and storage containers were practical and everything beyond the fence was CG with a DMP overlay. The containers going off into the distance were carefully positioned and lit to enable you to feel the vast scale of the ship. We also replaced the fence in most shots with CG as it was easier than rotoing through the chain links. When Mickey is outside the ship, exposed to radiation, there are several extreme body effects, including his hand coming off. Can you discuss the challenges of creating these sequences, particularly in terms of digital prosthetics and damage simulations? Knocking Mickey’s hand off was quite straight forward due the speed of the impact. We started with a plate of the practical arm and glove and switch to a pre-sculpted CG glove and arm stump. The hand spinning off into the distance was hand animated to allow us to fully art direct the spin and trajectory. The final touch was to add and FX sim for the blood droplets. How did you balance realism and stylization in depicting the effects of radiation exposure? Were there real-world references or scientific studies that guided the look of the damage? Most of the radiation effects came from great make up and prosthetics—we just added some final touches such as an FX sim for a bursting blister. We tried a few different simulations based on work we had none on previous shows but ultimately dialed it back to something more subtle so it didn’t distract from the moment. Were there any memorable moments or scenes from the film that you found particularly rewarding or challenging to work on from a visual effects standpoint? There were a lot of quite diverse challenges. From creature work, environments, lava to a lot of ‘one off’ effects. The shot where the Creepers are pushing Mickey out into the snow was particularly challenging, with so many Creepers interacting with each other and Mickey, it took the combination of several animators and compositors to bring it together and integrate with the partial CG Mickey. Looking back on the project, what aspects of the visual effects are you most proud of? The baby creeper and the Ice cave environment. How long have you worked on this show? I worked on it for about 18 months What’s the VFX shots count? Framestore worked on 405 shots. A big thanks for your time. WANT TO KNOW MORE?Framestore: Dedicated page about Mickey 17 on Framestore website. © Vincent Frei – The Art of VFX – 2025
    0 التعليقات 0 المشاركات