FROM SET TO PIXELS: CINEMATIC ARTISTS COME TOGETHER TO CREATE POETRY

    By TREVOR HOGG

    Denis Villeneuve (Dune: Part Two) finds that one difficulty of working with visual effects is the intermediaries who sometimes stand between him and the artists, hence the need for precise directions to keep things on track. (Image courtesy of Warner Bros. Pictures)

    If post-production has any chance of going smoothly, there must be a solid on-set relationship between the director, cinematographer and visual effects supervisor. “It’s my job to have a vision and to bring it to the screen,” notes Denis Villeneuve, director of Dune: Part Two. “That’s why working with visual effects requires a lot of discipline. It’s not like you work with a keyboard and can change your mind all the time. When I work with a camera, I commit to a mise-en-scène. I’m trying to take the risk, move forward in one direction and enhance it with visual effects. I push it until it looks perfect. It takes a tremendous amount of time and preparation. [VFX Supervisor] Paul Lambert is a perfectionist, and I love that about him. We will never put a shot on the screen that we don’t feel has a certain level of quality. It needs to look as real as the face of my actor.”

    A legendary cinematographer had a significant influence on how Villeneuve approaches digital augmentation. “Someone I have learned a lot from about visual effects is [Cinematographer] Roger Deakins. I remember that at the beginning, when I was doing Blade Runner 2049, some artwork was not defined enough, and I was like, ‘I will correct that later.’ Roger said, ‘No. Don’t do that. You have to make sure right at the start.’ I’ve learned the hard way that you need to be as precise as you can; otherwise it goes in a lot of directions.”

    Motion capture is visually jarring because your eye is always drawn to the performer in the mocap suit, but it worked out well on Better Man because the same thing happens when he is replaced by a CG monkey. (Image courtesy of Paramount Pictures)

    Visual effects enabled the atmospherics on Wolfs to be art directed, which is not always possible with practical snow. (Image courtesy of Apple Studios)

    One of the most complex musical numbers in Better Man is “Rock DJ,” which required LiDAR scans of Regent Street and full 3D motion capture of the dancers dancing down the whole length of the street to work out how best to shoot it. (Image courtesy of Paramount Pictures)

    Cinematographer Dan Mindel favors on-set practical effects because the reactions from the cast come across as more genuine, which was the case for Twisters. (Image courtesy of Universal Pictures)

    Storyboards are an essential part of the planning process. “When I finish a screenplay, the first thing I do is to storyboard, not just to define the visual element of the movie, but also to rewrite the movie through images,” Villeneuve explains. “Those storyboards inform my crew about the design, costumes, accessories and vehicles, and [they] create a visual inner rhythm of the film. This is the first step towards visual effects, where there will be a conversation that starts from the boards. That will be translated into previs to help the animators know where we are going, because the movie has to be made in a certain timeframe and needs choreography to make sure everybody is moving in the same direction.” The approach towards filmmaking has not changed over the years. “You have a camera and a couple of actors in front of you, and it’s about finding the right angle; the rest is noise. I try to protect the intimacy around the camera as much as possible and focus on that because if you don’t believe the actor, then you won’t believe anything.”

    Before transforming singer Robbie Williams into a CG primate, Michael Gracey started as a visual effects artist. “I feel so fortunate to have come from a visual effects background early on in my career,” recalls Gracey, director of Better Man. “I would sit down and do all the post myself because I didn’t trust anyone to care as much as I did. Fortunately, over the years I’ve met people who do. It’s a huge part of how I even scrapbook ideas together. Early on, I was constantly throwing stuff up in Flame, doing a video test and asking, ‘Is this going to work?’ Jumping into 3D was something I felt comfortable doing. I’ve been able to plan out or previs ideas. It’s an amazing tool to be armed with if you are a director and have big ideas and you’re trying to convey them to a lot of people.” Previs was pivotal in getting Better Man financed. “Off the page, people were like, ‘Is this monkey even going to work?’ Then they were worried that it wouldn’t work in a musical number. We showed them the previs for Feel, the first musical number, and My Way at the end of the film. I would say, ‘If you get any kind of emotion watching these musical numbers, just imagine what it’s going to be like when it’s filmed and is photoreal.’”

    Several shots had to be stitched together to create a ‘oner’ that features numerous costume changes and 500 dancers. “For Rock DJ, we were doing LiDAR scans of Regent Street and full 3D motion capture with the dancers dancing down the whole length of the street to work out all of the transition points and how best to shoot it,” Gracey states. “That process involved Erik Wilson, the Cinematographer; Luke Millar, the Visual Effects Supervisor; Ashley Wallen, the Choreographer; and Patrick Correll, Co-Producer. Patrick would sit on set and, in DaVinci Resolve, take the feed from the camera and check every take against the blueprint that we had already prevised.” Motion capture is visually jarring to shoot. “Everything that is in-camera looks perfect, then a guy walks in wearing a mocap suit and your eye zooms onto him. But the truth is, your eye does that the moment you replace him with a monkey as well. It worked out quite well because that idea is true to what it is to be famous. A famous person walks into the room and your eye immediately goes to them.”
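    That on-set check – comparing each new take against the previs blueprint – is at heart an image-comparison step. As an illustration only (the production did this inside DaVinci Resolve, not with custom code), here is a minimal sketch of the same idea in Python with OpenCV; the file names are hypothetical:

```python
import cv2

# Hypothetical inputs: one frame grabbed from the live take,
# one frame from the previs blueprint at the same timecode.
take = cv2.imread("take_frame.png")
previs = cv2.imread("previs_frame.png")

# Match the previs resolution to the camera frame before comparing.
previs = cv2.resize(previs, (take.shape[1], take.shape[0]))

# A 50/50 blend makes drifting transition points obvious at a glance.
overlay = cv2.addWeighted(take, 0.5, previs, 0.5, 0.0)

# A side-by-side split is another common on-set check.
h, w = take.shape[:2]
split = take.copy()
split[:, w // 2:] = previs[:, w // 2:]

cv2.imwrite("overlay_check.png", overlay)
cv2.imwrite("split_check.png", split)
```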

    Digital effects have had a significant impact on a particular area of filmmaking. “Physical effects were a much higher art form then, or were allowed to be, than they are now,” notes Dan Mindel, Cinematographer on Twisters. “People will decline a real pyrotechnic explosion and do a digital one. But you get a much bigger reaction when there’s actual noise and flash.” It is all about collaboration. Mindel explains, “The principle that I work with is that the visual effects department will make us look great, and we have to give them the raw materials in the best possible form so they can work with it instinctually. Sometimes, as a DP, you might want to do something different, but the bottom line is, you’ve got to listen to these guys, because they know what they want. It gets a bit dogmatic, but most of the time, my relationship with visual effects is good, especially with the guys who have had a foot in the analog world at one point or another and have transitioned into the digital world. When we made Twisters, it was an analog movie with digital effects, and it worked great. That’s because everyone on set doing the technical work understood both formats, and we were able to use them well.”

    Digital filmmaking has caused a generational gap. “The younger directors don’t think holistically,” Mindel notes. “It’s much more post-driven because they want to manipulate on the Avid or whatever platform it is going to be. What has happened is that the overreaching nature of these tools has left very little to the imagination. A movie that is heavy on visual effects is mostly conceptualized on paper using computer-generated graphics and color; that insidiously sneaks into the look and feel of the movie before you know it. You see concept art blasted all over production offices. People get used to looking at those images, and before you know it, that’s how the movie looks. That’s a very dangerous place to be, not to have the imagination to work around an issue that perhaps doesn’t manifest itself until you’re shooting.” There has to be a sense of purpose. Mindel remarks, “The ability to shoot in a way that doesn’t allow any manipulation in post is the only way to guarantee that there’s just one direction the look can go in. But that could be a little dangerous for some people. Generally, the crowd I’m working with is part of a team, and there’s little thought of taking the movie to a different place than what was shot. I work in the DI with the visual effects supervisor, and we look at our work together so we’re all in agreement that it fits into the movie.”

    “All of the advances in technology are a push for greater control,” notes Larkin Seiple, Cinematographer on Everything Everywhere All at Once. “There are still a lot of things that we do with visual effects that we could do practically, but a lot of the time it’s more efficient, or we get more attempts at it later in post than if we had tried to do it practically. I find today there’s still a debate about what we do on set and what we do later digitally. Many directors have been trying to do more on set, and the best visual effects supervisors I work with push to do everything in-camera as much as possible to make it as realistic as possible.” Storytelling is about figuring out where to invest your time and effort. Seiple states, “I like the adventure of filmmaking. I prefer to go to a mountaintop and shoot some of the scenes, get there and be inspired, as opposed to recreating it. Now, if it’s a five-second cutaway, I don’t want production to go to a mountaintop and do that. For car work, we’ll shoot the real streets, figure out the time of day and even light the plates for it. Then, I’ll project those on LED walls with actors in a car on a stage. I love doing that because then I get to control how that looks.”

    Visual effects have freed Fallout Cinematographer Stuart Dryburgh to shoot quicker and in places that in the past would have been deemed imperfect because of power lines, out-of-period buildings or the sky. (Image courtesy of Prime Video)

    Visual effects assist in achieving the desired atmospherics. Seiple says, “On Wolfs, we tried to bring in our own snow for every scene. We would shoot one take, the snow would blow left, and on the next take it would blow right. Janek Sirrs is probably the best visual effects supervisor I’ve worked with, and he was like, ‘Please turn off the snow. It’ll be a nightmare trying to remove the snow from all these shots and then add our own snow back for continuity, because you can’t have the snow changing direction every other cut.’ Or we’d have to ‘snow’ a street, which would take ages. Janek would say, ‘Let’s put enough snow on the ground to see the lighting on it and where the actors walk. We’ll do the rest of the street later because we have a perfect reference of what it should look like.’” Certain photographic principles have to be carried over into post-production to make shots believable to the eye. Seiple explains, “When you make all these amazing details that should be out of focus sharper, the image feels like a visual effect because it doesn’t work the way a lens would work.” Familiarity with the visual effects process is an asset in achieving the best result. “I inadvertently come from a lot of visual effects-heavy shoots and shows, so I’m quick to have an opinion about it. Many directors love to reference the way David Fincher uses visual effects because there is such great behind-the-scenes imagery that showcases how they were able to do simple things. Also, I like to shoot tests, even on an iPhone, to see if this comp will work or if this idea is a good one.”
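    Seiple’s observation about focus is grounded in lens geometry: how soft a detail should be is dictated by focal length, aperture and focus distance, so sharpening detail that a real lens would have blurred breaks the photographic cue. A back-of-the-envelope sketch of that relationship, using the standard thin-lens blur-circle formula (the lens values below are illustrative, not taken from any of his shows):

```python
def blur_circle_mm(f_mm: float, t_stop: float, focus_m: float, subject_m: float) -> float:
    """Diameter (mm, on the sensor) of the blur circle for a point at
    subject_m when a lens of focal length f_mm is focused at focus_m.
    Thin-lens approximation."""
    s1 = focus_m * 1000.0    # focus distance in mm
    s2 = subject_m * 1000.0  # subject distance in mm
    return (f_mm * f_mm / t_stop) * abs(s2 - s1) / (s2 * (s1 - f_mm))

# 50mm lens at T2.0 focused at 2m: how soft is a detail 10m away?
print(f"{blur_circle_mm(50.0, 2.0, 2.0, 10.0):.3f} mm")  # ~0.513 mm
# That is far larger than the ~0.03mm circle of confusion usually taken
# as 'acceptably sharp' on a Super 35 sensor, so the detail should read
# as heavily defocused; rendering it crisp contradicts the optics.
```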

    Cinematographer Fabian Wagner and VFX Supervisor John Moffatt spent a lot of time in pre-production for Venom: The Last Dance discussing how to bring out the texture of the symbiote through lighting and camera angles. (Image courtesy of Columbia Pictures)

    Game of Thrones Director of Photography Fabian Wagner had to make key decisions while prepping and breaking down the script so visual effects had enough time to meet deadlines. (Image courtesy of HBO)

    Twisters was an analog movie with digital effects that worked well because everyone on set doing the technical work understood both formats. (Image courtesy of Universal Pictures)

    For Cinematographer Larkin Seiple, storytelling is about figuring out where to invest your time and effort. Scene from the Netflix series Beef. (Image courtesy of Netflix)

    Cinematographer Larkin Seiple believes that all of the advances in technology are a push for greater control, as occurred on Everything Everywhere All at Once. (Image courtesy of A24)

    Nothing beats reality when it comes to realism. “On every project I do, I talk more about the real elements to bring into the shoot than the visual effects element, because the more practical stuff you can do on set, the more it will embed the visual effects into the image and, therefore, the more real they are,” observes Fabian Wagner, Cinematographer on Venom: The Last Dance. “It also depends on the job you’re doing in terms of how real or unreal you want it to be. Game of Thrones was a good example because it was a visual effects-heavy show, but they were keen on pushing the reality of things as much as possible. We were doing interactive lighting and practical on-set things to embed the visual effects. It was successful.” Television has a significantly compressed schedule compared to feature films. “There are fewer chances to iterate. You have to be much more precise. On Game of Thrones, we knew that certain decisions had to be made early on while we were still prepping and breaking down the script. Because of their due dates, to be ready in time, they had to start the visual effects process for certain dragon scenes months before we even started shooting.”

    “Like everything else, it’s always about communication,” Wagner notes. “I’ve been fortunate to work with extremely talented and collaborative visual effects supervisors, visual effects producers and directors. I have become friends with most of those visual effects departments throughout the shoot, so it’s easy to stay in touch. Even when Venom: The Last Dance was in post, I would be talking to John Moffatt, who was our talented visual effects supervisor. We would exchange emails, text messages or phone calls once a week, and he would send me updates, which we would talk about. If I gave any notes or thoughts, John would listen, and if it were possible to do anything about it, he would. In the end, it’s about those personal relationships, and if you have those, that can go a long way.” Wagner has had to deal with dragons, superheroes and symbiotes. “They’re all the same to me! For the symbiote, we had two previous films to see what they had done, where they had succeeded and where we could improve it slightly. While prepping, John and I spent a lot of time talking about how to bring out the texture of the symbiote and help it with the lighting and camera angles. One of the earliest tests was to see what would happen if we backlit or side lit it, as well as trying different textures for reflections. We came up with something we all were happy with, and that’s what we did on set. It was down to trying to speak the same language and aiming for the same thing, which in this case was, ‘How could we make the symbiote look the coolest?’”

    Visual effects has become a crucial department throughout the filmmaking process. “The relationship with the visual effects supervisor is new,” states Stuart Dryburgh, Cinematographer on Fallout. “We didn’t really have that. On The Piano, the extent of the visual effects was having somebody scribble in a lightning strike over a stormy sky and a little flash of an animated puppet. Runaway Bride had a two-camera setup where one of the cameras pushed into the frame, and that was digitally removed, but we weren’t using it the way we’re using it now. For [the 2026 Netflix limited series] East of Eden, we’re recreating 19th and early 20th century Connecticut, Boston and Salinas, California in New Zealand. While we have some great sets built and historical buildings that we can use, there is a lot of set extension and modification, and some complete bluescreen scenes, which allow us to portray a historical environment more realistically than we could have done back in the day.” The presence of a visual effects supervisor simplified principal photography. Dryburgh adds, “In many ways, using visual effects frees you to shoot quicker and in places that might otherwise be deemed imperfect because of one little thing, whether it’s power lines or out-of-period buildings or sky. All of those can be easily fixed. Most of us have been doing it for long enough that we have a good idea of what can and can’t be done and how it’s done, so the visual effects supervisor isn’t the arbiter.”

    Lighting cannot be arbitrarily altered in post, as it never looks right. “Whether you set the lighting on the set and the background artist has to match that, or you have an existing background and you, as a DP, have to match that – that is the lighting trick to the whole thing,” Dryburgh observes. “Everything has to be the same: a soft or hard light, the direction and color. Those things all need to line up in a composited shot; that is crucial.” Every director has his or her own approach to filmmaking. “Harold Ramis told me, ‘I’ll deal with the acting and the words. You just make it look nice, alright?’ That’s the conversation we had about shots, and it worked out well. [Director] Garth Davis, who I’m working with now, is a terrific photographer in his own right and has a great visual sense, so he’s much more involved in anything visual, whether it be the designs of the sets, creation of the visual effects, my lighting or choice of lenses. It becomes much more collaborative. And that applies to the visual effects department as well.” Recreating vintage lenses digitally is an important part of the visual aesthetic. “As digital photography has become crisper, better and sharper, people have chosen to use less-than-perfect optics, such as lenses that are softer on the edges or give a flare characteristic. Before production, we have the camera department shoot lens grids for the different packages and ranges, and visual effects takes that information so they can model every lens. If they’re doing a fully CG background, they can apply that lens characteristic,” remarks Dryburgh.
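    The grid-to-model workflow Dryburgh describes amounts to fitting a per-lens distortion and falloff profile from the photographed charts, then re-applying that profile to pristine CG renders. A minimal sketch of the re-application step – the simple radial model and every coefficient here are invented placeholders, not a production lens pipeline:

```python
import numpy as np
import cv2

def apply_lens_character(img, k1=0.08, k2=0.02, vignette=0.35):
    """Warp an ideal (pinhole) color render with a simple radial
    distortion and darken the corners. In practice the coefficients
    would be fitted from photographed lens grids, not invented."""
    h, w = img.shape[:2]
    # Normalized coordinates with the origin at the image center.
    y, x = np.mgrid[0:h, 0:w].astype(np.float32)
    nx = (x - w / 2) / (w / 2)
    ny = (y - h / 2) / (h / 2)
    r2 = nx * nx + ny * ny
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    # Sampling the source farther from center yields a barrel-style warp.
    map_x = nx * scale * (w / 2) + w / 2
    map_y = ny * scale * (h / 2) + h / 2
    warped = cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR)
    # Radial falloff standing in for edge softness and light loss.
    falloff = np.clip(1.0 - vignette * r2, 0.0, 1.0)
    return (warped.astype(np.float32) * falloff[..., None]).astype(img.dtype)

cg = cv2.imread("cg_background.png")  # hypothetical clean CG render
cv2.imwrite("cg_with_lens.png", apply_lens_character(cg))
```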

    Television schedules for productions like House of the Dragon do not allow a lot of time to iterate, so decisions have to be precise. (Image courtesy of HBO)

    Bluescreen and stunt doubles on Twisters. (Image courtesy of Universal Pictures)

    “The principle that I work with is that the visual effects department will make us look great, and we have to give them the raw materials in the best possible form so they can work with it instinctually. Sometimes, as a DP, you might want to do something different, but the bottom line is, you’ve got to listen to these guys because they know what they want. It gets a bit dogmatic, but most of the time, my relationship with visual effects is good, and especially the guys who have had a foot in the analog world at one point or another and have transitioned into the digital world.”
    —Dan Mindel, Cinematographer, Twisters

    Cinematographers like Greig Fraser have adopted Unreal Engine. “Greig has an incredible curiosity about new technology, and that helped us specifically with Dune: Part Two,” Villeneuve explains. “Greig was using Unreal Engine to capture natural environments. For example, if we decide to shoot in that specific rocky area, we’ll capture the whole area with drones to recreate the terrain in the computer. If I said, ‘I want to shoot in that valley on November 3rd and have the sun behind the actors,’ at what time is that? You have to be there at 9:45 am. We built the whole schedule like a puzzle to maximize the power of natural light, but that came through those studies, which were made with software usually used for video games.” Technology is essentially a tool that keeps evolving. Villeneuve adds, “Sometimes, I don’t know if I feel like a dinosaur or if my last movie will be done in this house behind the computer, alone. It would be much less tiring to do that, but seriously, the beauty of cinema is the idea of bringing many artists together to create poetry.”
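    Underneath the Unreal Engine tooling, the “be there at 9:45 am” answer is a sun-position lookup against the captured terrain. A small sketch of just the solar part of that calculation, using the Python astral library – the coordinates, time zone and camera heading below are placeholders, not the actual Dune: Part Two location data:

```python
from datetime import datetime, timedelta, timezone
from astral import Observer
from astral.sun import azimuth, elevation

# Placeholder desert location and shoot date (not the real unit location).
observer = Observer(latitude=24.5, longitude=54.4)
t = datetime(2021, 11, 3, 5, 0, tzinfo=timezone(timedelta(hours=4)))

camera_heading = 120.0  # degrees; direction the camera points (hypothetical)
# Backlight means the sun sits beyond the actors, roughly on the camera axis.
want_sun_at = camera_heading

while t.hour < 19:
    az = azimuth(observer, t)
    el = elevation(observer, t)
    diff = abs(az - want_sun_at)
    # Flag times when the sun is up and within 10 degrees of the backlight axis.
    if el > 0 and min(diff, 360 - diff) < 10:
        print(f"{t:%H:%M}  azimuth {az:5.1f}  elevation {el:4.1f}")
    t += timedelta(minutes=15)
```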
    #set #pixels #cinematic #artists #come
    FROM SET TO PIXELS: CINEMATIC ARTISTS COME TOGETHER TO CREATE POETRY
    By TREVOR HOGG Denis Villeneuvefinds the difficulty of working with visual effects are sometimes the intermediaries between him and the artists and therefore the need to be precise with directions to keep things on track.If post-production has any chance of going smoothly, there must be a solid on-set relationship between the director, cinematographer and visual effects supervisor. “It’s my job to have a vision and to bring it to the screen,” notes Denis Villeneuve, director of Dune: Part Two. “That’s why working with visual effects requires a lot of discipline. It’s not like you work with a keyboard and can change your mind all the time. When I work with a camera, I commit to a mise-en-scène. I’m trying to take the risk, move forward in one direction and enhance it with visual effects. I push it until it looks perfect. It takes a tremendous amount of time and preparation.Paul Lambert is a perfectionist, and I love that about him. We will never put a shot on the screen that we don’t feel has a certain level of quality. It needs to look as real as the face of my actor.” A legendary cinematographer had a significant influence on how Villeneuve approaches digital augmentation. “Someone I have learned a lot from about visual effects isRoger Deakins. I remember that at the beginning, when I was doing Blade Runner 2049, some artwork was not defined enough, and I was like, ‘I will correct that later.’ Roger said, ‘No. Don’t do that. You have to make sure right at the start.’ I’ve learned the hard way that you need to be as precise as you can, otherwise it goes in a lot of directions.” Motion capture is visually jarring because your eye is always drawn to the performer in the mocap suit, but it worked out well on Better Man because the same thing happens when he gets replaced by a CG monkey.Visual effects enabled the atmospherics on Wolfs to be art directed, which is not always possible with practical snow.One of the most complex musical numbers in Better Man is “Rock DJ,” which required LiDAR scans of Regent Street and doing full 3D motion capture with the dancers dancing down the whole length of the street to work out how best to shoot it.Cinematographer Dan Mindel favors on-set practical effects because the reactions from the cast come across as being more genuine, which was the case for Twisters.Storyboards are an essential part of the planning process. “When I finish a screenplay, the first thing I do is to storyboard, not just to define the visual element of the movie, but also to rewrite the movie through images,” Villeneuve explains. “Those storyboards inform my crew about the design, costumes, accessories and vehicles, andcreate a visual inner rhythm of the film. This is the first step towards visual effects where there will be a conversation that will start from the boards. That will be translated into previs to help the animators know where we are going because the movie has to be made in a certain timeframe and needs choreography to make sure everybody is moving in the same direction.” The approach towards filmmaking has not changed over the years. “You have a camera and a couple of actors in front of you, and it’s about finding the right angle; the rest is noise. I try to protect the intimacy around the camera as much as possible and focus on that because if you don’t believe the actor, then you won’t believe anything.” Before transforming singer Robbie Williams into a CG primate, Michael Gracey started as a visual effects artist. 
“I feel so fortu- nate to have come from a visual effects background early on in my career,” recalls Michael Gracey, director of Better Man. “I would sit down and do all the post myself because I didn’t trust anyone to care as much as I did. Fortunately, over the years I’ve met people who do. It’s a huge part of how I even scrapbook ideas together. Early on, I was constantly throwing stuff up in Flame, doing a video test and asking, ‘Is this going to work?’ Jumping into 3D was something I felt comfortable doing. I’ve been able to plan out or previs ideas. It’s an amazing tool to be armed with if you are a director and have big ideas and you’re trying to convey them to a lot of people.” Previs was pivotal in getting Better Man financed. “Off the page, people were like, ‘Is this monkey even going to work?’ Then they were worried that it wouldn’t work in a musical number. We showed them the previs for Feel, the first musical number, and My Way at the end of the film. I would say, ‘If you get any kind of emotion watching these musical numbers, just imagine what it’s going to be like when it’s filmed and is photoreal.” Several shots had to be stitched together to create a ‘oner’ that features numerous costume changes and 500 dancers. “For Rock DJ, we were doing LiDAR scans of Regent Street and full 3D motion capture with the dancers dancing down the whole length of the street to work out all of the transition points and how best to shoot it,” Gracey states. “That process involved Erik Wilson, the Cinematographer; Luke Millar, the Visual Effects Supervisor; Ashley Wallen, the Choreographer; and Patrick Correll, Co-Producer. Patrick would sit on set and, in DaVinci Resolve, take the feed from the camera and check every take against the blueprint that we had already previs.” Motion capture is visually jarring to shoot. “Everything that is in-camera looks perfect, then a guy walks in wearing a mocap suit and your eye zooms onto him. But the truth is, your eye does that the moment you replace him with a monkey as well. It worked out quite well because that idea is true to what it is to be famous. A famous person walks into the room and your eye immediately goes to them.” Digital effects have had a significant impact on a particular area of filmmaking. “Physical effects were a much higher art form than it is now, or it was allowed to be then than it is now,” notes Dan Mindel, Cinematographer on Twisters. “People will decline a real pyrotechnic explosion and do a digital one. But you get a much bigger reaction when there’s actual noise and flash.” It is all about collaboration. Mindel explains, “The principle that I work with is that the visual effects department will make us look great, and we have to give them the raw materials in the best possible form so they can work with it instinctually. Sometimes, as a DP, you might want to do something different, but the bottom line is, you’ve got to listen to these guys, because they know what they want. It gets a bit dogmatic, but most of the time, my relationship with visual effects is good, and especially the guys who have had a foot in the analog world at one point or another and have transitioned into the digital world. When we made Twister, it was an analog movie with digital effects, and it worked great. That’s because everyone on set doing the technical work understood both formats, and we were able to use them well.” Digital filmmaking has caused a generational gap. “The younger directors don’t think holistically,” Mindel notes. 
“It’s much more post-driven because they want to manipulate on the Avid or whatever platform it is going to be. What has happened is that the overreaching nature of these tools has left very little to the imagination. A movie that is heavy visual effects is mostly conceptualized on paper using computer-generated graphics and color; that insidiously sneaks into the look and feel of the movie before you know it. You see concept art blasted all over production offices. People could get used to looking at those images, and before you know it, that’s how the movie looks. That’s a very dangerous place to be, not to have the imagination to work around an issue that perhaps doesn’t manifest itself until you’re shooting.” There has to be a sense of purpose. Mindel remarks, “The ability to shoot in a way that doesn’t allow any manipulation in post is the only way to guarantee that there’s just one direction the look can go in. But that could be a little dangerous for some people. Generally, the crowd I’m working with is part of a team, and there’s little thought of taking the movie to a different place than what was shot. I work in the DI with the visual effects supervisor, and we look at our work together so we’re all in agreement that it fits into the movie.” “All of the advances in technology are a push for greater control,” notes Larkin Seiple, Cinematographer on Everything Everywhere All at Once. “There are still a lot of things that we do with visual effects that we could do practically, but a lot of times it’s more efficient, or we have more attempts at it later in post, than if we had tried to do it practically. I find today, there’s still a debate about what we do on set and what we do later digitally. Many directors have been trying to do more on set, and the best visual effects supervisors I work with push to do everything in-camera as much as possible to make it as realistic as possible.” Storytelling is about figuring out where to invest your time and effort. Seiple states, “I like the adventure of filmmaking. I prefer to go to a mountain top and shoot some of the scenes, get there and be inspired, as opposed to recreate it. Now, if it’s a five-second cutaway, I don’t want production to go to a mountain top and do that. For car work, we’ll shoot the real streets, figure out the time of day and even light the plates for it. Then, I’ll project those on LED walls with actors in a car on a stage. I love doing that because then I get to control how that looks.” Visual effects have freed Fallout Cinematographer Stuart Dryburgh to shoot quicker and in places that in the past would have been deemed imperfect because of power lines, out-of-period buildings or the sky.Visual effects assist in achieving the desired atmospherics. Seiple says, “On Wolfs, we tried to bring in our own snow for every scene. We would shoot one take, the snow would blow left, and the next take would blow right. Janek Sirrs is probably the best visual effects supervisor I’ve worked with, and he was like, ‘Please turn off the snow. It’ll be a nightmare trying to remove the snow from all these shots then add our own snow back for continuity because you can’t have the snow changing direction every other cut.’ Or we’d have to ‘snow’ a street, which would take ages. Janek would say, ‘Let’s put enough snow on the ground to see the lighting on it and where the actors walk. 
We’ll do the rest of the street later because we have a perfect reference of what it should look like.” Certain photographic principles have to be carried over into post-production to make shots believable to the eye. Seiple explains, “When you make all these amazing details that should be out of focus sharper, then the image feels like a visual effect because it doesn’t work the way a lens would work.” Familiarity with the visual effects process is an asset in being able to achieve the best result. “I inadvertently come from a lot of visual effect-heavy shoots and shows, so I’m quick to have an opinion about it. Many directors love to reference the way David Fincher uses visual effects because there is such great behind-the-scenes imagery that showcases how they were able to do simple things. Also, I like to shoot tests even on an iPhone to see if this comp will work or if this idea is a good one.” Cinematographer Fabian Wagner and VFX Supervisor John Moffatt spent a lot of time in pre-production for Venom: The Last Dance discussing how to bring out the texture of the symbiote through lighting and camera angles.Game of Thrones Director of Photography Fabian Wagner had to make key decisions while prepping and breaking down the script so visual effects had enough time to meet deadline.Twisters was an analog movie with digital effects that worked well because everyone on set doing the technical work understood both formats.For Cinematographer Larkin Seiple, storytelling is about figuring out where to invest your time and effort. Scene from the Netflix series Beef.Cinematographer Larkin Seiple believes that all of the advances in technology are a push for greater control, which occurred on Everything Everywhere All at Once.Nothing beats reality when it comes to realism. “Every project I do I talk more about the real elements to bring into the shoot than the visual effect element because the more practical stuff that you can do on set, the more it will embed the visual effects into the image, and, therefore, they’re more real,” observes Fabian Wagner, Cinematographer on Venom: The Last Dance. “It also depends on the job you’re doing in terms of how real or unreal you want it to be. Game of Thrones was a good example because it was a visual effects-heavy show, but they were keen on pushing the reality of things as much as possible. We were doing interactive lighting and practical on-set things to embed the visual effects. It was successful.” Television has a significantly compressed schedule compared to feature films. “There are fewer times to iterate. You have to be much more precise. On Game of Thrones, we knew that certain decisions had to be made early on while we were still prepping and breaking down the script. Because of their due dates, to be ready in time, they had to start the visual effects process for certain dragon scenes months before we even started shooting.” “Like everything else, it’s always about communication,” Wagner notes. “I’ve been fortunate to work with extremely talented and collaborative visual effects supervisors, visual effects producers and directors. I have become friends with most of those visual effects departments throughout the shoot, so it’s easy to stay in touch. Even when Venom: The Last Dance was posting, I would be talking to John Moffatt, who was our talented visual effects supervisor. We would exchange emails, text messages or phone calls once a week, and he would send me updates, which we would talk about it. 
If I gave any notes or thoughts, John would listen, and if it were possible to do anything about, he would. In the end, it’s about those personal relationships, and if you have those, that can go a long way.” Wagner has had to deal with dragons, superheroes and symbiotes. “They’re all the same to me! For the symbiote, we had two previous films to see what they had done, where they had succeeded and where we could improve it slightly. While prepping, John and I spent a lot of time talking about how to bring out the texture of the symbiote and help it with the lighting and camera angles. One of the earliest tests was to see what would happen if we backlit or side lit it as well as trying different textures for reflections. We came up with something we all were happy with, and that’s what we did on set. It was down to trying to speak the same language and aiming for the same thing, which in this case was, ‘How could we make the symbiote look the coolest?’” Visual effects has become a crucial department throughout the filmmaking process. “The relationship with the visual effects supervisor is new,” states Stuart Dryburgh, Cinematographer on Fallout. “We didn’t really have that. On The Piano, the extent of the visual effects was having somebody scribbling in a lightning strike over a stormy sky and a little flash of an animated puppet. Runaway Bride had a two-camera setup where one of the cameras pushed into the frame, and that was digitally removed, but we weren’t using it the way we’re using it now. ForEast of Eden, we’re recreating 19th and early 20th century Connecticut, Boston and Salinas, California in New Zealand. While we have some great sets built and historical buildings that we can use, there is a lot of set extension and modification, and some complete bluescreen scenes, which allow us to more realistically portray a historical environment than we could have done back in the day.” The presence of a visual effects supervisor simplified principal photography. Dryburgh adds, “In many ways, using visual effects frees you to shoot quicker and in places that might otherwise be deemed imperfect because of one little thing, whether it’s power lines or out-of-period buildings or sky. All of those can be easily fixed. Most of us have been doing it for long enough that we have a good idea of what can and can’t be done and how it’s done so that the visual effects supervisor isn’t the arbiter.” Lighting cannot be arbitrarily altered in post as it never looks right. “Whether you set the lighting on the set and the background artist has to match that, or you have an existing background and you, as a DP, have to match that – that is the lighting trick to the whole thing,” Dryburgh observes. “Everything has to be the same, a soft or hard light, the direction and color. Those things all need to line up in a composited shot; that is crucial.” Every director has his or her own approach to filmmaking. “Harold Ramis told me, ‘I’ll deal with the acting and the words. You just make it look nice, alright?’ That’s the conversation we had about shots, and it worked out well.Garth Davis, who I’m working with now, is a terrific photographer in his own right and has a great visual sense, so he’s much more involved in anything visual, whether it be the designs of the sets, creation of the visual effects, my lighting or choice of lenses. It becomes much more collaborative. And that applies to the visual effects department as well.” Recreating vintage lenses digitally is an important part of the visual aesthetic. 
“As digital photography has become crisper, better and sharper, people have chosen to use fewer perfect optics, such as lenses that are softer on the edges or give a flare characteristic. Before production, we have the camera department shoot all of these lens grids of different packages and ranges, and visual effects takes that information so they can model every lens. If they’re doing a fully CG background, they can apply that lens characteristic,” remarks Dryburgh. Television schedules for productions like House of the Dragon do not allow a lot of time to iterate, so decisions have to be precise.Bluescreen and stunt doubles on Twisters.“The principle that I work with is that the visual effects department will make us look great, and we have to give them the raw materials in the best possible form so they can work with it instinctually. Sometimes, as a DP, you might want to do something different, but the bottom line is, you’ve got to listen to these guys because they know what they want. It gets a bit dogmatic, but most of the time, my relationship with visual effects is good, and especially the guys who have had a foot in the analog world at one point or another and have transitioned into the digital world.” —Dan Mindel, Cinematographer, Twisters Cinematographers like Greig Fraser have adopted Unreal Engine. “Greig has an incredible curiosity about new technology, and that helped us specifically with Dune: Part Two,” Villeneuve explains. “Greig was using Unreal Engine to capture natural environments. For example, if we decide to shoot in that specific rocky area, we’ll capture the whole area with drones to recreate the terrain in the computer. If I said, ‘I want to shoot in that valley on November 3rd and have the sun behind the actors. At what time is it? You have to be there at 9:45 am.’ We built the whole schedule like a puzzle to maximize the power of natural light, but that came through those studies, which were made with the software usually used for video games.” Technology is essentially a tool that keeps evolving. Villeneuve adds, “Sometimes, I don’t know if I feel like a dinosaur or if my last movie will be done in this house behind the computer alone. It would be much less tiring to do that, but seriously, the beauty of cinema is the idea of bringing many artists together to create poetry.” #set #pixels #cinematic #artists #come
    WWW.VFXVOICE.COM
    FROM SET TO PIXELS: CINEMATIC ARTISTS COME TOGETHER TO CREATE POETRY
    By TREVOR HOGG Denis Villeneuve (Dune: Part Two) finds the difficulty of working with visual effects are sometimes the intermediaries between him and the artists and therefore the need to be precise with directions to keep things on track. (Image courtesy of Warner Bros. Pictures) If post-production has any chance of going smoothly, there must be a solid on-set relationship between the director, cinematographer and visual effects supervisor. “It’s my job to have a vision and to bring it to the screen,” notes Denis Villeneuve, director of Dune: Part Two. “That’s why working with visual effects requires a lot of discipline. It’s not like you work with a keyboard and can change your mind all the time. When I work with a camera, I commit to a mise-en-scène. I’m trying to take the risk, move forward in one direction and enhance it with visual effects. I push it until it looks perfect. It takes a tremendous amount of time and preparation. [VFX Supervisor] Paul Lambert is a perfectionist, and I love that about him. We will never put a shot on the screen that we don’t feel has a certain level of quality. It needs to look as real as the face of my actor.” A legendary cinematographer had a significant influence on how Villeneuve approaches digital augmentation. “Someone I have learned a lot from about visual effects is [Cinematographer] Roger Deakins. I remember that at the beginning, when I was doing Blade Runner 2049, some artwork was not defined enough, and I was like, ‘I will correct that later.’ Roger said, ‘No. Don’t do that. You have to make sure right at the start.’ I’ve learned the hard way that you need to be as precise as you can, otherwise it goes in a lot of directions.” Motion capture is visually jarring because your eye is always drawn to the performer in the mocap suit, but it worked out well on Better Man because the same thing happens when he gets replaced by a CG monkey. (Image courtesy of Paramount Pictures) Visual effects enabled the atmospherics on Wolfs to be art directed, which is not always possible with practical snow. (Image courtesy of Apple Studios) One of the most complex musical numbers in Better Man is “Rock DJ,” which required LiDAR scans of Regent Street and doing full 3D motion capture with the dancers dancing down the whole length of the street to work out how best to shoot it. (Image courtesy of Paramount Pictures) Cinematographer Dan Mindel favors on-set practical effects because the reactions from the cast come across as being more genuine, which was the case for Twisters. (Image courtesy of Universal Pictures) Storyboards are an essential part of the planning process. “When I finish a screenplay, the first thing I do is to storyboard, not just to define the visual element of the movie, but also to rewrite the movie through images,” Villeneuve explains. “Those storyboards inform my crew about the design, costumes, accessories and vehicles, and [they] create a visual inner rhythm of the film. This is the first step towards visual effects where there will be a conversation that will start from the boards. That will be translated into previs to help the animators know where we are going because the movie has to be made in a certain timeframe and needs choreography to make sure everybody is moving in the same direction.” The approach towards filmmaking has not changed over the years. “You have a camera and a couple of actors in front of you, and it’s about finding the right angle; the rest is noise. 
I try to protect the intimacy around the camera as much as possible and focus on that because if you don’t believe the actor, then you won’t believe anything.” Before transforming singer Robbie Williams into a CG primate, Michael Gracey started as a visual effects artist. “I feel so fortu- nate to have come from a visual effects background early on in my career,” recalls Michael Gracey, director of Better Man. “I would sit down and do all the post myself because I didn’t trust anyone to care as much as I did. Fortunately, over the years I’ve met people who do. It’s a huge part of how I even scrapbook ideas together. Early on, I was constantly throwing stuff up in Flame, doing a video test and asking, ‘Is this going to work?’ Jumping into 3D was something I felt comfortable doing. I’ve been able to plan out or previs ideas. It’s an amazing tool to be armed with if you are a director and have big ideas and you’re trying to convey them to a lot of people.” Previs was pivotal in getting Better Man financed. “Off the page, people were like, ‘Is this monkey even going to work?’ Then they were worried that it wouldn’t work in a musical number. We showed them the previs for Feel, the first musical number, and My Way at the end of the film. I would say, ‘If you get any kind of emotion watching these musical numbers, just imagine what it’s going to be like when it’s filmed and is photoreal.” Several shots had to be stitched together to create a ‘oner’ that features numerous costume changes and 500 dancers. “For Rock DJ, we were doing LiDAR scans of Regent Street and full 3D motion capture with the dancers dancing down the whole length of the street to work out all of the transition points and how best to shoot it,” Gracey states. “That process involved Erik Wilson, the Cinematographer; Luke Millar, the Visual Effects Supervisor; Ashley Wallen, the Choreographer; and Patrick Correll, Co-Producer. Patrick would sit on set and, in DaVinci Resolve, take the feed from the camera and check every take against the blueprint that we had already previs.” Motion capture is visually jarring to shoot. “Everything that is in-camera looks perfect, then a guy walks in wearing a mocap suit and your eye zooms onto him. But the truth is, your eye does that the moment you replace him with a monkey as well. It worked out quite well because that idea is true to what it is to be famous. A famous person walks into the room and your eye immediately goes to them.” Digital effects have had a significant impact on a particular area of filmmaking. “Physical effects were a much higher art form than it is now, or it was allowed to be then than it is now,” notes Dan Mindel, Cinematographer on Twisters. “People will decline a real pyrotechnic explosion and do a digital one. But you get a much bigger reaction when there’s actual noise and flash.” It is all about collaboration. Mindel explains, “The principle that I work with is that the visual effects department will make us look great, and we have to give them the raw materials in the best possible form so they can work with it instinctually. Sometimes, as a DP, you might want to do something different, but the bottom line is, you’ve got to listen to these guys, because they know what they want. It gets a bit dogmatic, but most of the time, my relationship with visual effects is good, and especially the guys who have had a foot in the analog world at one point or another and have transitioned into the digital world. 
When we made Twister, it was an analog movie with digital effects, and it worked great. That’s because everyone on set doing the technical work understood both formats, and we were able to use them well.” Digital filmmaking has caused a generational gap. “The younger directors don’t think holistically,” Mindel notes. “It’s much more post-driven because they want to manipulate on the Avid or whatever platform it is going to be. What has happened is that the overreaching nature of these tools has left very little to the imagination. A movie that is heavy visual effects is mostly conceptualized on paper using computer-generated graphics and color; that insidiously sneaks into the look and feel of the movie before you know it. You see concept art blasted all over production offices. People could get used to looking at those images, and before you know it, that’s how the movie looks. That’s a very dangerous place to be, not to have the imagination to work around an issue that perhaps doesn’t manifest itself until you’re shooting.” There has to be a sense of purpose. Mindel remarks, “The ability to shoot in a way that doesn’t allow any manipulation in post is the only way to guarantee that there’s just one direction the look can go in. But that could be a little dangerous for some people. Generally, the crowd I’m working with is part of a team, and there’s little thought of taking the movie to a different place than what was shot. I work in the DI with the visual effects supervisor, and we look at our work together so we’re all in agreement that it fits into the movie.” “All of the advances in technology are a push for greater control,” notes Larkin Seiple, Cinematographer on Everything Everywhere All at Once. “There are still a lot of things that we do with visual effects that we could do practically, but a lot of times it’s more efficient, or we have more attempts at it later in post, than if we had tried to do it practically. I find today, there’s still a debate about what we do on set and what we do later digitally. Many directors have been trying to do more on set, and the best visual effects supervisors I work with push to do everything in-camera as much as possible to make it as realistic as possible.” Storytelling is about figuring out where to invest your time and effort. Seiple states, “I like the adventure of filmmaking. I prefer to go to a mountain top and shoot some of the scenes, get there and be inspired, as opposed to recreate it. Now, if it’s a five-second cutaway, I don’t want production to go to a mountain top and do that. For car work, we’ll shoot the real streets, figure out the time of day and even light the plates for it. Then, I’ll project those on LED walls with actors in a car on a stage. I love doing that because then I get to control how that looks.” Visual effects have freed Fallout Cinematographer Stuart Dryburgh to shoot quicker and in places that in the past would have been deemed imperfect because of power lines, out-of-period buildings or the sky. (Image courtesy of Prime Video) Visual effects assist in achieving the desired atmospherics. Seiple says, “On Wolfs, we tried to bring in our own snow for every scene. We would shoot one take, the snow would blow left, and the next take would blow right. Janek Sirrs is probably the best visual effects supervisor I’ve worked with, and he was like, ‘Please turn off the snow. 
It’ll be a nightmare trying to remove the snow from all these shots then add our own snow back for continuity because you can’t have the snow changing direction every other cut.’ Or we’d have to ‘snow’ a street, which would take ages. Janek would say, ‘Let’s put enough snow on the ground to see the lighting on it and where the actors walk. We’ll do the rest of the street later because we have a perfect reference of what it should look like.” Certain photographic principles have to be carried over into post-production to make shots believable to the eye. Seiple explains, “When you make all these amazing details that should be out of focus sharper, then the image feels like a visual effect because it doesn’t work the way a lens would work.” Familiarity with the visual effects process is an asset in being able to achieve the best result. “I inadvertently come from a lot of visual effect-heavy shoots and shows, so I’m quick to have an opinion about it. Many directors love to reference the way David Fincher uses visual effects because there is such great behind-the-scenes imagery that showcases how they were able to do simple things. Also, I like to shoot tests even on an iPhone to see if this comp will work or if this idea is a good one.” Cinematographer Fabian Wagner and VFX Supervisor John Moffatt spent a lot of time in pre-production for Venom: The Last Dance discussing how to bring out the texture of the symbiote through lighting and camera angles. (Image courtesy of Columbia Pictures) Game of Thrones Director of Photography Fabian Wagner had to make key decisions while prepping and breaking down the script so visual effects had enough time to meet deadline. (Image courtesy of HBO) Twisters was an analog movie with digital effects that worked well because everyone on set doing the technical work understood both formats. (Image courtesy of Universal Pictures) For Cinematographer Larkin Seiple, storytelling is about figuring out where to invest your time and effort. Scene from the Netflix series Beef. (Image courtesy of Netflix) Cinematographer Larkin Seiple believes that all of the advances in technology are a push for greater control, which occurred on Everything Everywhere All at Once. (Image courtesy of A24) Nothing beats reality when it comes to realism. “Every project I do I talk more about the real elements to bring into the shoot than the visual effect element because the more practical stuff that you can do on set, the more it will embed the visual effects into the image, and, therefore, they’re more real,” observes Fabian Wagner, Cinematographer on Venom: The Last Dance. “It also depends on the job you’re doing in terms of how real or unreal you want it to be. Game of Thrones was a good example because it was a visual effects-heavy show, but they were keen on pushing the reality of things as much as possible. We were doing interactive lighting and practical on-set things to embed the visual effects. It was successful.” Television has a significantly compressed schedule compared to feature films. “There are fewer times to iterate. You have to be much more precise. On Game of Thrones, we knew that certain decisions had to be made early on while we were still prepping and breaking down the script. Because of their due dates, to be ready in time, they had to start the visual effects process for certain dragon scenes months before we even started shooting.” “Like everything else, it’s always about communication,” Wagner notes. 
“I’ve been fortunate to work with extremely talented and collaborative visual effects supervisors, visual effects producers and directors. I have become friends with most of those visual effects departments throughout the shoot, so it’s easy to stay in touch. Even when Venom: The Last Dance was posting, I would be talking to John Moffatt, who was our talented visual effects supervisor. We would exchange emails, text messages or phone calls once a week, and he would send me updates, which we would talk about it. If I gave any notes or thoughts, John would listen, and if it were possible to do anything about, he would. In the end, it’s about those personal relationships, and if you have those, that can go a long way.” Wagner has had to deal with dragons, superheroes and symbiotes. “They’re all the same to me! For the symbiote, we had two previous films to see what they had done, where they had succeeded and where we could improve it slightly. While prepping, John and I spent a lot of time talking about how to bring out the texture of the symbiote and help it with the lighting and camera angles. One of the earliest tests was to see what would happen if we backlit or side lit it as well as trying different textures for reflections. We came up with something we all were happy with, and that’s what we did on set. It was down to trying to speak the same language and aiming for the same thing, which in this case was, ‘How could we make the symbiote look the coolest?’” Visual effects has become a crucial department throughout the filmmaking process. “The relationship with the visual effects supervisor is new,” states Stuart Dryburgh, Cinematographer on Fallout. “We didn’t really have that. On The Piano, the extent of the visual effects was having somebody scribbling in a lightning strike over a stormy sky and a little flash of an animated puppet. Runaway Bride had a two-camera setup where one of the cameras pushed into the frame, and that was digitally removed, but we weren’t using it the way we’re using it now. For [the 2026 Netflix limited series] East of Eden, we’re recreating 19th and early 20th century Connecticut, Boston and Salinas, California in New Zealand. While we have some great sets built and historical buildings that we can use, there is a lot of set extension and modification, and some complete bluescreen scenes, which allow us to more realistically portray a historical environment than we could have done back in the day.” The presence of a visual effects supervisor simplified principal photography. Dryburgh adds, “In many ways, using visual effects frees you to shoot quicker and in places that might otherwise be deemed imperfect because of one little thing, whether it’s power lines or out-of-period buildings or sky. All of those can be easily fixed. Most of us have been doing it for long enough that we have a good idea of what can and can’t be done and how it’s done so that the visual effects supervisor isn’t the arbiter.” Lighting cannot be arbitrarily altered in post as it never looks right. “Whether you set the lighting on the set and the background artist has to match that, or you have an existing background and you, as a DP, have to match that – that is the lighting trick to the whole thing,” Dryburgh observes. “Everything has to be the same, a soft or hard light, the direction and color. Those things all need to line up in a composited shot; that is crucial.” Every director has his or her own approach to filmmaking. 
“Harold Ramis told me, ‘I’ll deal with the acting and the words. You just make it look nice, alright?’ That’s the conversation we had about shots, and it worked out well. [Director] Garth Davis, who I’m working with now, is a terrific photographer in his own right and has a great visual sense, so he’s much more involved in anything visual, whether it be the designs of the sets, creation of the visual effects, my lighting or choice of lenses. It becomes much more collaborative. And that applies to the visual effects department as well.”

Recreating vintage lenses digitally is an important part of the visual aesthetic. “As digital photography has become crisper, better and sharper, people have chosen to use less-than-perfect optics, such as lenses that are softer on the edges or give a flare characteristic. Before production, we have the camera department shoot lens grids of all the different packages and ranges, and visual effects takes that information so they can model every lens. If they’re doing a fully CG background, they can apply that lens characteristic,” remarks Dryburgh.

Television schedules for productions like House of the Dragon do not allow a lot of time to iterate, so decisions have to be precise. (Image courtesy of HBO)

Bluescreen and stunt doubles on Twisters. (Image courtesy of Universal Pictures)

“The principle that I work with is that the visual effects department will make us look great, and we have to give them the raw materials in the best possible form so they can work with it instinctually. Sometimes, as a DP, you might want to do something different, but the bottom line is, you’ve got to listen to these guys because they know what they want. It gets a bit dogmatic, but most of the time, my relationship with visual effects is good, especially with the guys who have had a foot in the analog world at one point or another and have transitioned into the digital world.” —Dan Mindel, Cinematographer, Twisters

Cinematographers like Greig Fraser have adopted Unreal Engine. “Greig has an incredible curiosity about new technology, and that helped us specifically with Dune: Part Two,” Villeneuve explains. “Greig was using Unreal Engine to capture natural environments. For example, if we decided to shoot in a specific rocky area, we would capture the whole area with drones to recreate the terrain in the computer. If I said, ‘I want to shoot in that valley on November 3rd and have the sun behind the actors,’ the answer would come back: ‘You have to be there at 9:45 am.’ We built the whole schedule like a puzzle to maximize the power of natural light, but that came through those studies, which were made with software usually used for video games.”

Technology is essentially a tool that keeps evolving. Villeneuve adds, “Sometimes, I don’t know if I feel like a dinosaur or if my last movie will be done in this house behind the computer alone. It would be much less tiring to do that, but seriously, the beauty of cinema is the idea of bringing many artists together to create poetry.”
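Fraser’s sun studies come down to solar geometry: given a latitude, longitude and date, compute where the sun sits at each moment and find when it lands behind the actors. The sketch below is a minimal stand-in for that arithmetic, not the production’s Unreal Engine workflow. It assumes the third-party astral library, and the coordinates, date and camera bearing are invented for illustration.

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

from astral import LocationInfo          # third-party: pip install astral
from astral.sun import azimuth, elevation

# Hypothetical stand-ins for a scouted valley and a shoot date (year is arbitrary).
loc = LocationInfo(name="Valley", region="", timezone="Asia/Amman",
                   latitude=29.57, longitude=35.42)
shoot_day = datetime(2023, 11, 3, 6, 0, tzinfo=ZoneInfo(loc.timezone))
camera_bearing = 250.0  # compass direction the camera points, toward the actors

def backlit_times(tolerance=25.0, min_elevation=5.0, step_minutes=15):
    """Scan the day for times when the sun sits beyond the actors (roughly
    along the camera's view direction) and is usefully above the horizon."""
    t = shoot_day
    while t.hour < 19:
        az, el = azimuth(loc.observer, t), elevation(loc.observer, t)
        # Signed angular difference between sun azimuth and camera bearing.
        diff = (az - camera_bearing + 180.0) % 360.0 - 180.0
        if abs(diff) <= tolerance and el >= min_elevation:
            print(f"{t:%H:%M}  sun azimuth {az:5.1f} deg, elevation {el:4.1f} deg")
        t += timedelta(minutes=step_minutes)

backlit_times()
```

Run against a given valley and date, a printout like this answers the “at what time?” question with something like the 9:45 am Villeneuve describes.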
  • The Last of Us – Season 2: Alex Wang (Production VFX Supervisor) & Fiona Campbell Westgate (Production VFX Producer)

    After detailing the VFX work on The Last of Us Season 1 in 2023, Alex Wang returns to reflect on how the scope and complexity have evolved in Season 2.
    With close to 30 years of experience in the visual effects industry, Fiona Campbell Westgate has contributed to major productions such as Ghost in the Shell, Avatar: The Way of Water, Ant-Man and the Wasp: Quantumania, and Nyad. Her work on Nyad earned her a VES Award for Outstanding Supporting Visual Effects in a Photoreal Feature.
    Collaboration with Craig Mazin and Neil Druckmann is key to shaping the visual universe of The Last of Us. Can you share with us how you work with them and how they influence the visual direction of the series?
    Alex Wang // Craig visualizes the shot or scene before putting words on the page. His writing is always exceptionally detailed and descriptive, ultimately helping us to imagine the shot. Of course, no one understands The Last of Us better than Neil, who knows all aspects of the lore very well. He’s done much research and design work with the Naughty Dog team, so he gives us good guidance regarding creature and environment designs. I always try to begin with concept art to get the ball rolling with Craig and Neil’s ideas. This season, we collaborated with Chromatic Studios for concept art. They also contributed to the games, so I felt that continuity was beneficial for our show.
    Fiona Campbell Westgate // From the outset, it was clear that collaborating with Craig would be an exceptional experience. Early meetings revealed just how personable and invested Craig is. He works closely with every department to ensure that each episode is done to the highest level. Craig places unwavering trust in our VFX Supervisor, Alex Wang. They have an understanding between them that makes for an exceptional partnership. As the VFX Producer, I know how vital the dynamic between the Showrunner and VFX Supervisor is; working with these two has made for one of the best professional experiences of my career.
    Photograph by Liane Hentscher/HBO
    How has your collaboration with Craig evolved between the first and second seasons? Were there any adjustments in the visual approach or narrative techniques you made this season?
    Alex Wang // Since everything was new in Season 1, we dedicated a lot of time and effort to exploring the show’s visual language, and we all learned a great deal about what worked and what didn’t for the show. In my initial conversations with Craig about Season 2, it was clear that he wanted to expand the show’s scope by utilizing what we established and learned in Season 1. He felt significantly more at ease fully committing to using VFX to help tell the story this season.
    The first season involved multiple VFX studios to handle the complexity of the effects. How did you divide the work among different studios for the second season?
    Alex Wang // Most of the vendors this season were also in Season 1, so we already had a shorthand. The VFX Producer, Fiona Campbell Westgate, and I work closely together to decide how to divide the work among our vendors. The type of work needs to be well-suited for the vendor and fit into our budget and schedule. We were extremely fortunate to have the vendors we did this season. I want to take this opportunity to thank Weta FX, DNEG, RISE, Distillery VFX, Storm Studios, Important Looking Pirates, Blackbird, Wylie Co., RVX, and VDK. We also had ILM for concept art and Digital Domain for previs.
    Fiona Campbell Westgate // Alex Wang and I were very aware of the tight delivery schedule, which added to the challenge of distributing the workload. We planned the work based on each studio’s capabilities, and tried not to burden anyone with back-to-back episodes wherever possible. Fortunately, there was a shorthand with vendors from Season 1, who were well-acquainted with the process and the quality of work the show required.

    The town of Jackson is a key location in The Last of Us. Could you explain how you approached creating and expanding this environment for the second season?
    Alex Wang // Since Season 1, this show has created incredible sets. However, the Jackson town set build is by far the most impressive in terms of scope. They constructed an 822 ft x 400 ft set in Minaty Bay that resembled a real town! I had early discussions with Production Designer Don MacAulay and his team about where they should concentrate their efforts and where VFX would make the most sense to take over. They focused on developing the town’s main street, where we believed most scenes would occur. There is a big reveal of Jackson in the first episode after Ellie comes out of the barn. Distillery VFX was responsible for the town’s extension, which appears seamless because the team took great pride in researching and ensuring the architecture aligned with the set while staying true to the tone of Jackson, Wyoming.
    Fiona Campbell Westgate // An impressive set was constructed in Minaty Bay, which served as the foundation for VFX to build upon. There is a beautiful establishing shot of Jackson in Episode 1 that was completed by Distillery, showing a safe and almost normal setting as Season Two starts. Across the episodes, Jackson set extensions were completed by our partners at RISE and Weta. Each had a different phase of Jackson to create, from almost idyllic to a town immersed in Battle. 
    What challenges did you face filming Jackson on both real and virtual sets? Was there a particular fusion between visual effects and live-action shots to make it feel realistic?
    Alex Wang // I always advocate for building exterior sets outdoors to take advantage of natural light. However, the drawback is that we cannot control the weather and lighting when filming over several days across two units. In Episode 2, there’s supposed to be a winter storm in Jackson, so maintaining consistency within the episode was essential. On sunny and rainy days, we used cranes to lift large 30x60ft screens to block the sun or rain. It was impossible to shield the entire set from the rain or sun, so we prioritized protecting the actors from sunlight or rain. Thus, you can imagine there was extensive weather cleanup for the episode to ensure consistency within the sequences.
    Fiona Campbell Westgate // We were fortunate that production built a large-scale Jackson set. It provided a base for the full CG Jackson aerial shots and CG set extensions. The weather conditions at Minaty Bay presented a challenge during the filming of the end of the Battle sequence in Episode 2: there were periods of bright sunshine, but rainfall as well. In addition to the obvious visual effects work, it became necessary to replace the ground cover.
    Photograph by Liane Hentscher/HBO
    The attack on Jackson by the horde of infected in season 2 is a very intense moment. How did you approach the visual effects for this sequence? What techniques did you use to make the scale of the attack feel as impressive as it did?
    Alex Wang // We knew this would be a very complex sequence to shoot, and for it to be successful, we needed to start planning with the HODs from the very beginning. We began previs during prep with Weta FX and the episode’s director, Mark Mylod. The previs helped us understand Mark and the showrunner’s vision. This then served as a blueprint for all departments to follow, and in many instances, we filmed the previs.
    Fiona Campbell Westgate // The sheer size of the CG Infected Horde sets the tone for the scale of the Battle. It’s an intimidating moment when they are revealed through the blowing snow. The addition of CG explosions and atmospheric effects contributed to the sense of scale in the sequence.

    Can you give us an insight into the technical challenges of capturing the infected horde? How much of the effect was done using CGI, and how much was achieved with practical effects?
    Alex Wang // Starting with a detailed previs that Mark and Craig approved was essential for planning the horde. We understood that we would never have enough stunt performers to fill a horde, nor could they carry out some stunts that would be too dangerous. I reviewed the previs with Stunt Coordinator Marny Eng numerous times to decide the best placements for her team’s stunt performers. We also collaborated with Barrie Gower from the Prosthetics team to determine the most effective allocation of his team’s efforts. Stunt performers positioned closest to the camera would receive the full prosthetic treatment, which can take hours.
    Weta FX was responsible for the incredible CG Infected horde work in the Jackson Battle. They have been a creative partner with HBO’s The Last of Us since Season 1, so they were brought on early for Season 2. I began discussions with Weta’s VFX supervisor, Nick Epstein, about how we could tackle these complex horde shots very early during the shoot.
    Typically, repetition in CG crowd scenes can be acceptable, such as armies with soldiers dressed in the same uniform or armour. However, for our Infected horde, Craig wanted to convey that the Infected didn’t come off an assembly line or all shop at the same clothing department store. Any repetition would feel artificial. These Infected were once civilians with families, or they were groups of raiders. We needed complex variations in height, body size, age, clothing, and hair. We built our base library of Infected, and then Nick and the Weta FX team developed a “mix and match” system, allowing the Infected to wear any costume and hair groom. A procedural texturing system was also developed for costumes, providing even greater variation.
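    As an illustration of the combinatorics behind that kind of “mix and match” approach, here is a minimal sketch; it is not Weta FX’s actual system, and the asset names and counts are invented. The point is that even a modest library of bodies, costumes and grooms multiplies into thousands of unique agents, with a per-agent seed left over to drive procedural texture variation.

```python
import random
from dataclasses import dataclass

@dataclass(frozen=True)
class InfectedVariant:
    body: str          # height / body size / age variant
    costume: str
    groom: str         # hair groom
    texture_seed: int  # drives procedural costume texturing

# Invented asset lists standing in for a base Infected library.
BODIES   = [f"body_{i:02d}" for i in range(12)]
COSTUMES = [f"costume_{i:02d}" for i in range(20)]
GROOMS   = [f"groom_{i:02d}" for i in range(10)]

def build_horde(count, seed=7):
    """Assemble a horde in which no two agents share the exact same
    body/costume/groom combination."""
    assert count <= len(BODIES) * len(COSTUMES) * len(GROOMS)
    rng = random.Random(seed)
    horde, seen = [], set()
    while len(horde) < count:
        key = (rng.choice(BODIES), rng.choice(COSTUMES), rng.choice(GROOMS))
        if key in seen:
            continue  # reject exact repeats; texture seeds vary the rest
        seen.add(key)
        horde.append(InfectedVariant(*key, texture_seed=rng.randrange(10**6)))
    return horde

horde = build_horde(300)  # 12 x 20 x 10 = 2,400 combos available before texturing
```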
    The most crucial aspect of the Infected horde was their motion. We had numerous shots cutting back-to-back with practical Infected, as well as shots where our CG Infected ran right alongside a stunt horde. It was incredibly unforgiving! Weta FX’s animation supervisor from Season 1, Dennis Yoo, returned for Season 2 to meet the challenge. Having been part of the first season, Dennis understood the expectations of Craig and Neil. As with model repetition, motion repetition within a horde was relatively easy to perceive, especially when the Infected were all running toward the same target. It was essential to enhance the details of their performances with nuances such as tripping and falling, getting back up, and trampling over each other. There also needed to be differences in the Infected’s running speed. To ensure we had enough complexity within the horde, Dennis motion-captured almost 600 unique motion cycles.
    We had over a hundred shots in Episode 2 that required the CG Infected horde.
    Fiona Campbell Westgate // Nick Epstein, Weta FX VFX Supervisor, and Dennis Yoo, Weta FX Animation Supervisor, were faced with adding hero, close-up Horde that had to integrate with practical stunt performers. They achieved this through over 60 motion capture sessions, running the results through a deformation system they developed. Every detail was applied to allow for a seamless blend with our practical stunt performances. The Weta FX team created a custom costume and hair system that provided individual looks for the CG Infected Horde. Thanks to these efforts, we were able to avoid the repetitive look of a CG crowd.
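    To make the motion-variety idea concrete, here is a toy version of how a crowd tool might hand out cycles and speeds, with a neighbor check keeping identical runs from landing side by side on camera. It is an assumption-laden sketch, not the Weta FX pipeline; the cycle names, counts and radius are invented.

```python
import random

# Invented stand-ins for the ~600 captured run cycles.
CYCLES = [f"run_cycle_{i:03d}" for i in range(600)]

def assign_motion(positions, neighbor_radius=3.0, seed=42):
    """positions: list of (x, y) ground placements for crowd agents.
    Returns (position, clip, speed) per agent, rejecting clips already
    used within neighbor_radius so adjacent repeats don't read on camera."""
    rng = random.Random(seed)
    assigned = []
    for x, y in positions:
        nearby = {clip for (px, py), clip, _ in assigned
                  if (px - x) ** 2 + (py - y) ** 2 < neighbor_radius ** 2}
        clip = rng.choice([c for c in CYCLES if c not in nearby])
        # A spread of running speeds breaks up the horde's wave front.
        speed = rng.uniform(0.85, 1.15)
        assigned.append(((x, y), clip, speed))
    return assigned

agents = assign_motion([(i * 1.5, (i % 7) * 1.2) for i in range(100)])
```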

    The movement of the infected horde is crucial for the intensity of the scene. How did you manage the animation and simulation of the infected to ensure smooth and realistic interaction with the environment?
    Fiona Campbell Westgate // We worked closely with the Stunt department to plan out positioning and where VFX would be adding the CG Horde. Craig Mazin wanted the Infected Horde to move in a way that humans cannot. The deformation system kept the body shape anatomically correct while allowing us to push the limits of how a human physically moves.
    The Bloater makes a terrifying return this season. What were the key challenges in designing and animating this creature? How did you work on the Bloater’s interaction with the environment and other characters?
    Alex Wang // In Season 1, the Kansas City cul-de-sac sequence featured only a handful of Bloater shots. This season, however, nearly forty shots showcase the Bloater in broad daylight during the Battle of Jackson. We needed to redesign the Bloater asset to ensure it looked good in close-up shots from head to toe. Weta FX designed the Bloater for Season 1 and revamped the design for this season. Starting with the Bloater’s silhouette, it had to appear large, intimidating, and menacing. We explored enlarging the cordyceps head shape to make it feel almost like a crown, enhancing the Bloater’s impressive and strong presence.
    During filming, a stunt double stood in for the Bloater. This was mainly for scale reference and composition. It also helped the Infected stunt performers understand the Bloater’s spatial position, allowing them to avoid running through his space. Once we had an edit, Dennis mocapped the Bloater’s performances with his team. It is always challenging to get the motion right for a creature that weighs 600 pounds. We don’t want the mocap to be overly exaggerated, but it does break the character if the Bloater feels too “light.” The brilliant animation team at Weta FX brought the Bloater character to life and nailed it!
    When Tommy goes head-to-head with the Bloater, Craig was quite specific during the prep days about how the Bloater would bubble, melt, and burn as Tommy torches him with the flamethrower. Important Looking Pirates took on the “Burning Bloater” sequence, led by VFX Supervisor Philip Engström. They began with extensive R&D to ensure the Bloater’s skin would start to bubble and burn. ILP took the final Bloater asset from Weta FX and had to resculpt and texture the asset for the Bloater’s final burn state. Craig felt it was important for the Bloater to appear maimed at the end. The layers of FX were so complex that the R&D continued almost to the end of the delivery schedule.

    Fiona Campbell Westgate // This season the Bloater had to be bigger and more intimidating. The CG asset was recreated to withstand the scrutiny of close-ups and daylight. Both Craig Mazin and Neil Druckmann worked closely with us during the build. We referenced the game and applied elements of that version to ours. You’ll notice that his head is in the shape of a crown; this is to convey that he’s a powerful force.
    During the Burning Bloater sequence in Episode 2, we brainstormed with Philip Engström, ILP VFX Supervisor, on how this creature would react to the flamethrower and how it would affect the ground as it burns. When the Bloater finally falls to the ground and dies, the extraordinary detail of the embers burning, fluid draining and melting the surrounding snow really sells that the CG creature was in the terrain. 

    Given the Bloater’s imposing size, how did you approach its integration into scenes with the actors? What techniques did you use to create such a realistic and menacing appearance?
    Fiona Campbell Westgate // For the Bloater, a stunt performer wearing a motion capture suit was filmed on set. This provided interaction with the actors and the environment. VFX enhanced the intensity of his movements, incorporating simulations to the CG Bloater’s skin and muscles that would reflect the weight and force as this terrifying creature moves. 

    Seattle in The Last of Us is a completely devastated city. Can you talk about how you recreated this destruction? What were the most difficult visual aspects to realize for this post-apocalyptic city?
    Fiona Campbell Westgate // We were meticulous in blending the CG destruction with the practical environment. The flora’s ability to overtake the environment had to be believable, and we adhered to the principle of form follows function. Due to the vastness of the CG devastation, it was crucial to avoid repetitive effects. Consequently, our vendors were tasked with creating bespoke designs that evoked a sense of awe and beauty.
    Was Seattle’s architecture a key element in how you designed the visual effects? How did you adapt the city’s real-life urban landscape to meet the needs of the story while maintaining a coherent aesthetic?
    Alex Wang // It’s always important to Craig and Neil that we remain true to the cities our characters are in. DNEG was one of our primary vendors for Boston in Season 1, so it was natural for them to return for Season 2, this time focusing on Seattle. DNEG’s VFX Supervisor, Stephen James, who played a crucial role in developing the visual language of Boston for Season 1, also returns for this season. Stephen and Melaina Mace (DFX Supervisor) led a team to Seattle to shoot plates and perform lidar scans of parts of the city. We identified the buildings unique to Seattle that would have existed in 2003, so we ensured these buildings were always included in our establishing shots.
    Overgrowth and destruction have significantly influenced the environments in The Last of Us. The environment functions almost as a character in both Season 1 and Season 2. In the last season, the building destruction in Boston was primarily caused by military bombings. During this season, destruction mainly arises from dilapidation. Living in the Pacific Northwest, I understand how damp it can get for most of the year. I imagined that, over 20 years, the integrity of the buildings would be compromised by natural forces. This abundant moisture creates an exceptionally lush and vibrant landscape for much of the year. Therefore, when designing Seattle, we ensured that the destruction and overgrowth appeared intentional and aesthetically distinct from those of Boston.
    Fiona Campbell Westgate // Led by Stephen James, DNEG VFX Supervisor, and Melaina Mace, DNEG DFX Supervisor, the team captured photography and drone footage, while the Clear Angle team captured LiDAR data over a three-day period in Seattle. It was crucial to include recognizable Seattle landmarks that would resonate with people familiar with the game.

    The devastated city almost becomes a character in itself this season. What aspects of the visual effects did you have to enhance to increase the immersion of the viewer into this hostile and deteriorated environment?
    Fiona Campbell Westgate // It is indeed a character. Craig wanted it to be deteriorated but to have moments where it’s also beautiful in its devastation. For instance, in the Music Store in Episode 4 where Ellie is playing guitar for Dina, the deteriorated interior provides a beautiful backdrop to this intimate moment. The Set Decorating team dressed a specific section of the set, while VFX extended the destruction and overgrowth to encompass the entire environment, immersing the viewer in strange yet familiar surroundings.
    Photograph by Liane Hentscher/HBO
    The sequence where Ellie navigates a boat through a violent storm is stunning. What were the key challenges in creating this scene, especially with water simulation and the storm’s effects?
    Alex Wang // In the concluding episode of Season 2, Ellie is deep in Seattle, searching for Abby. The episode draws us closer to the Aquarium, where this area of Seattle is heavily flooded. Naturally, this brings challenges with CG water. In the scene where Ellie encounters Isaac and the W.L.F. soldiers by the dock, we had a complex shoot involving multiple locations, including a water tank and a boat gimbal. There were also several full CG shots. For Isaac’s riverine boat, which was in a stormy ocean, I felt it was essential that the boat and the actors were given the appropriate motion. Weta FX assisted with tech-vis for all the boat gimbal work. We began with different ocean wave sizes caused by the storm, and once the filmmakers selected one, the boat’s motion in the tech-vis fed the special FX gimbal.
    When Ellie gets into the Jon boat, I didn’t want it on the same gimbal because I felt it would be too mechanical. Ellie’s weight needed to affect the boat as she got in, and that wouldn’t have happened with a mechanical gimbal. So, we opted to have her boat in a water tank for this scene. Special FX had wave makers that provided the boat with the appropriate movement.
    Instead of guessing what the ocean sim for the riverine boat should be, the tech-vis data enabled DNEG to get a head start on the water simulations in post-production. Craig wanted this sequence to appear convincingly dark, much like it looks out on the ocean at night. This allowed us to create dramatic visuals, using lightning strikes at moments to reveal depth.
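    For a feel of what tech-vis boat-motion data can look like, the toy below superposes a few deep-water sine waves and reads off heave and pitch curves of the kind a special effects gimbal could be programmed against. The wave parameters are invented and the model is far simpler than a production ocean simulation.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def boat_motion(t, waves):
    """Reduced tech-vis stand-in: superpose deep-water sine waves and read
    off heave (m) and pitch (deg) at the boat's position.
    waves: list of (amplitude_m, wavelength_m, phase) tuples."""
    heave, slope = 0.0, 0.0
    for a, lam, phase in waves:
        k = 2.0 * math.pi / lam   # wavenumber
        omega = math.sqrt(G * k)  # deep-water dispersion relation
        heave += a * math.sin(omega * t + phase)
        slope += a * k * math.cos(omega * t + phase)  # surface slope at the boat
    pitch = math.degrees(math.atan(slope))
    return heave, pitch

# Invented storm swell plus chop, sampled at 24 fps for a gimbal program.
storm = [(1.2, 40.0, 0.0), (0.4, 12.0, 1.3), (0.15, 5.0, 2.1)]
for frame in range(5):
    h, p = boat_motion(frame / 24.0, storm)
    print(f"frame {frame}: heave {h:+.2f} m, pitch {p:+.1f} deg")
```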
    Were there any memorable moments or scenes from the series that you found particularly rewarding or challenging to work on from a visual effects standpoint?
    Alex Wang // The Last of Us tells the story of our characters’ journey. If you look at how Season 2 begins in Jackson, it differs significantly from how we conclude the season in Seattle. We seldom return to the same location from episode to episode, meaning every episode presents a unique challenge. The scope of work this season has been incredibly rewarding. We burned a Bloater, and we also introduced spores this season!
    Photograph by Liane Hentscher/HBO
    Looking back on the project, what aspects of the visual effects are you most proud of?
    Alex Wang // The Jackson Battle was incredibly complex, involving a grueling and lengthy shoot in quite challenging conditions, along with over 600 VFX shots in episode 2. It was truly inspiring to witness the determination of every department and vendor to give their all and create something remarkable.
    Fiona Campbell Westgate // I am immensely proud of the exceptional work accomplished by all of our vendors. During the VFX reviews, I found myself clapping with delight when the final shots were displayed; it was exciting to see remarkable results of the artists’ efforts come to light. 
    How long have you worked on this show?
    Alex Wang // I’ve been on this season for nearly two years.
    Fiona Campbell Westgate // A little over one year; I joined the show in April 2024.
    What’s the VFX shots count?
    Alex Wang // We had just over 2,500 shots this Season.
    Fiona Campbell Westgate // In Season 2, there were a total of 2,656 visual effects shots.
    What is your next project?
    Fiona Campbell Westgate // Stay tuned…
    A big thanks for your time.
    WANT TO KNOW MORE?
    Blackbird: Dedicated page about The Last of Us – Season 2 on Blackbird website.
    DNEG: Dedicated page about The Last of Us – Season 2 on DNEG website.
    Important Looking Pirates: Dedicated page about The Last of Us – Season 2 on ILP website.
    RISE: Dedicated page about The Last of Us – Season 2 on RISE website.
    Weta FX: Dedicated page about The Last of Us – Season 2 on Weta FX website.
    © Vincent Frei – The Art of VFX – 2025
    #last #season #alex #wang #production
    The Last of Us – Season 2: Alex Wang (Production VFX Supervisor) & Fiona Campbell Westgate (Production VFX Producer)
    After detailing the VFX work on The Last of Us Season 1 in 2023, Alex Wang returns to reflect on how the scope and complexity have evolved in Season 2. With close to 30 years of experience in the visual effects industry, Fiona Campbell Westgate has contributed to major productions such as Ghost in the Shell, Avatar: The Way of Water, Ant-Man and the Wasp: Quantumania, and Nyad. Her work on Nyad earned her a VES Award for Outstanding Supporting Visual Effects in a Photoreal Feature. Collaboration with Craig Mazin and Neil Druckmann is key to shaping the visual universe of The Last of Us. Can you share with us how you work with them and how they influence the visual direction of the series? Alex Wang // Craig visualizes the shot or scene before putting words on the page. His writing is always exceptionally detailed and descriptive, ultimately helping us to imagine the shot. Of course, no one understands The Last of Us better than Neil, who knows all aspects of the lore very well. He’s done much research and design work with the Naughty Dog team, so he gives us good guidance regarding creature and environment designs. I always try to begin with concept art to get the ball rolling with Craig and Neil’s ideas. This season, we collaborated with Chromatic Studios for concept art. They also contributed to the games, so I felt that continuity was beneficial for our show. Fiona Campbell Westgate // From the outset, it was clear that collaborating with Craig would be an exceptional experience. Early meetings revealed just how personable and invested Craig is. He works closely with every department to ensure that each episode is done to the highest level. Craig places unwavering trust in our VFX Supervisor, Alex Wang. They have an understanding between them that lends to an exceptional partnership. As the VFX Producer, I know how vital the dynamic between the Showrunner and VFX Supervisor is; working with these two has made for one of the best professional experiences of my career.  Photograph by Liane Hentscher/HBO How has your collaboration with Craig evolved between the first and second seasons? Were there any adjustments in the visual approach or narrative techniques you made this season? Alex Wang // Since everything was new in Season 1, we dedicated a lot of time and effort to exploring the show’s visual language, and we all learned a great deal about what worked and what didn’t for the show. In my initial conversations with Craig about Season 2, it was clear that he wanted to expand the show’s scope by utilizing what we established and learned in Season 1. He felt significantly more at ease fully committing to using VFX to help tell the story this season. The first season involved multiple VFX studios to handle the complexity of the effects. How did you divide the work among different studios for the second season? Alex Wang // Most of the vendors this season were also in Season 1, so we already had a shorthand. The VFX Producer, Fiona Campbell Westgate, and I work closely together to decide how to divide the work among our vendors. The type of work needs to be well-suited for the vendor and fit into our budget and schedule. We were extremely fortunate to have the vendors we did this season. I want to take this opportunity to thank Weta FX, DNEG, RISE, Distillery VFX, Storm Studios, Important Looking Pirates, Blackbird, Wylie Co., RVX, and VDK. We also had ILM for concept art and Digital Domain for previs. 
Fiona Campbell Westgate // Alex Wang and I were very aware of the tight delivery schedule, which added to the challenge of distributing the workload. We planned the work based on the individual studio’s capabilities, and tried not to burden them with back to back episodes wherever possible. Fortunately, there was shorthand with vendors from Season One, who were well-acquainted with the process and the quality of work the show required. The town of Jackson is a key location in The Last of Us. Could you explain how you approached creating and expanding this environment for the second season? Alex Wang // Since Season 1, this show has created incredible sets. However, the Jackson town set build is by far the most impressive in terms of scope. They constructed an 822 ft x 400 ft set in Minaty Bay that resembled a real town! I had early discussions with Production Designer Don MacAulay and his team about where they should concentrate their efforts and where VFX would make the most sense to take over. They focused on developing the town’s main street, where we believed most scenes would occur. There is a big reveal of Jackson in the first episode after Ellie comes out of the barn. Distillery VFX was responsible for the town’s extension, which appears seamless because the team took great pride in researching and ensuring the architecture aligned with the set while staying true to the tone of Jackson, Wyoming. Fiona Campbell Westgate // An impressive set was constructed in Minaty Bay, which served as the foundation for VFX to build upon. There is a beautiful establishing shot of Jackson in Episode 1 that was completed by Distillery, showing a safe and almost normal setting as Season Two starts. Across the episodes, Jackson set extensions were completed by our partners at RISE and Weta. Each had a different phase of Jackson to create, from almost idyllic to a town immersed in Battle.  What challenges did you face filming Jackson on both real and virtual sets? Was there a particular fusion between visual effects and live-action shots to make it feel realistic? Alex Wang // I always advocate for building exterior sets outdoors to take advantage of natural light. However, the drawback is that we cannot control the weather and lighting when filming over several days across two units. In Episode 2, there’s supposed to be a winter storm in Jackson, so maintaining consistency within the episode was essential. On sunny and rainy days, we used cranes to lift large 30x60ft screens to block the sun or rain. It was impossible to shield the entire set from the rain or sun, so we prioritized protecting the actors from sunlight or rain. Thus, you can imagine there was extensive weather cleanup for the episode to ensure consistency within the sequences. Fiona Campbell Westgate // We were fortunate that production built a large scale Jackson set. It provided a base for the full CG Jackson aerial shots and CG Set Extensions. The weather conditions at Minaty Bay presented a challenge during the filming of the end of the Battle sequence in Episode 2. While there were periods of bright sunshine, rainfall occurred during the filming of the end of the Battle sequence in Episode 2. In addition to the obvious visual effects work, it became necessary to replace the ground cover. Photograph by Liane Hentscher/HBO The attack on Jackson by the horde of infected in season 2 is a very intense moment. How did you approach the visual effects for this sequence? 
What techniques did you use to make the scale of the attack feel as impressive as it did? Alex Wang // We knew this would be a very complex sequence to shoot, and for it to be successful, we needed to start planning with the HODs from the very beginning. We began previs during prep with Weta FX and the episode’s director, Mark Mylod. The previs helped us understand Mark and the showrunner’s vision. This then served as a blueprint for all departments to follow, and in many instances, we filmed the previs. Fiona Campbell Westgate // The sheer size of the CG Infected Horde sets the tone for the scale of the Battle. It’s an intimidating moment when they are revealed through the blowing snow. The addition of CG explosions and atmospheric effects contributed in adding scale to the sequence.  Can you give us an insight into the technical challenges of capturing the infected horde? How much of the effect was done using CGI, and how much was achieved with practical effects? Alex Wang // Starting with a detailed previs that Mark and Craig approved was essential for planning the horde. We understood that we would never have enough stunt performers to fill a horde, nor could they carry out some stunts that would be too dangerous. I reviewed the previs with Stunt Coordinator Marny Eng numerous times to decide the best placements for her team’s stunt performers. We also collaborated with Barrie Gower from the Prosthetics team to determine the most effective allocation of his team’s efforts. Stunt performers positioned closest to the camera would receive the full prosthetic treatment, which can take hours. Weta FX was responsible for the incredible CG Infected horde work in the Jackson Battle. They have been a creative partner with HBO’s The Last of Us since Season 1, so they were brought on early for Season 2. I began discussions with Weta’s VFX supervisor, Nick Epstein, about how we could tackle these complex horde shots very early during the shoot. Typically, repetition in CG crowd scenes can be acceptable, such as armies with soldiers dressed in the same uniform or armour. However, for our Infected horde, Craig wanted to convey that the Infected didn’t come off an assembly line or all shop at the same clothing department store. Any repetition would feel artificial. These Infected were once civilians with families, or they were groups of raiders. We needed complex variations in height, body size, age, clothing, and hair. We built our base library of Infected, and then Nick and the Weta FX team developed a “mix and match” system, allowing the Infected to wear any costume and hair groom. A procedural texturing system was also developed for costumes, providing even greater variation. The most crucial aspect of the Infected horde was their motion. We had numerous shots cutting back-to-back with practical Infected, as well as shots where our CG Infected ran right alongside a stunt horde. It was incredibly unforgiving! Weta FX’s animation supervisor from Season 1, Dennis Yoo, returned for Season 2 to meet the challenge. Having been part of the first season, Dennis understood the expectations of Craig and Neil. Similar to issues of model repetition within a horde, it was relatively easy to perceive repetition, especially if they were running toward the same target. It was essential to enhance the details of their performances with nuances such as tripping and falling, getting back up, and trampling over each other. There also needed to be a difference in the Infected’s running speed. 
To ensure we had enough complexity within the horde, Dennis motion-captured almost 600 unique motion cycles. We had over a hundred shots in episode 2 that required CG Infected horde. Fiona Campbell Westgate // Nick Epstein, Weta VFX Supervisor, and Dennis Yoo, Weta Animation Supervisor, were faced with having to add hero, close-up Horde that had to integrate with practical Stunt performers. They achieved this through over 60 motion capture sessions and running it through a deformation system they developed. Every detail was applied to allow for a seamless blend with our practical Stunt performances. The Weta team created a custom costume and hair system that provided individual looks to the CG Infected Horde. We were able to avoid the repetitive look of a CG crowd due to these efforts. The movement of the infected horde is crucial for the intensity of the scene. How did you manage the animation and simulation of the infected to ensure smooth and realistic interaction with the environment? Fiona Campbell Westgate // We worked closely with the Stunt department to plan out positioning and where VFX would be adding the CG Horde. Craig Mazin wanted the Infected Horde to move in a way that humans cannot. The deformation system kept the body shape anatomically correct and allowed us to push the limits from how a human physically moves.  The Bloater makes a terrifying return this season. What were the key challenges in designing and animating this creature? How did you work on the Bloater’s interaction with the environment and other characters? Alex Wang // In Season 1, the Kansas City cul-de-sac sequence featured only a handful of Bloater shots. This season, however, nearly forty shots showcase the Bloater in broad daylight during the Battle of Jackson. We needed to redesign the Bloater asset to ensure it looked good in close-up shots from head to toe. Weta FX designed the Bloater for Season 1 and revamped the design for this season. Starting with the Bloater’s silhouette, it had to appear large, intimidating, and menacing. We explored enlarging the cordyceps head shape to make it feel almost like a crown, enhancing the Bloater’s impressive and strong presence. During filming, a stunt double stood in for the Bloater. This was mainly for scale reference and composition. It also helped the Infected stunt performers understand the Bloater’s spatial position, allowing them to avoid running through his space. Once we had an edit, Dennis mocapped the Bloater’s performances with his team. It is always challenging to get the motion right for a creature that weighs 600 pounds. We don’t want the mocap to be overly exaggerated, but it does break the character if the Bloater feels too “light.” The brilliant animation team at Weta FX brought the Bloater character to life and nailed it! When Tommy goes head-to-head with the Bloater, Craig was quite specific during the prep days about how the Bloater would bubble, melt, and burn as Tommy torches him with the flamethrower. Important Looking Pirates took on the “Burning Bloater” sequence, led by VFX Supervisor Philip Engstrom. They began with extensive R&D to ensure the Bloater’s skin would start to bubble and burn. ILP took the final Bloater asset from Weta FX and had to resculpt and texture the asset for the Bloater’s final burn state. Craig felt it was important for the Bloater to appear maimed at the end. The layers of FX were so complex that the R&D continued almost to the end of the delivery schedule. 
Fiona Campbell Westgate // This season the Bloater had to be bigger, more intimidating. The CG Asset was recreated to withstand the scrutiny of close ups and in daylight. Both Craig Mazin and Neil Druckmann worked closely with us during the process of the build. We referenced the game and applied elements of that version with ours. You’ll notice that his head is in the shape of crown, this is to convey he’s a powerful force.  During the Burning Bloater sequence in Episode 2, we brainstormed with Philip Engström, ILP VFX Supervisor, on how this creature would react to the flamethrower and how it would affect the ground as it burns. When the Bloater finally falls to the ground and dies, the extraordinary detail of the embers burning, fluid draining and melting the surrounding snow really sells that the CG creature was in the terrain.  Given the Bloater’s imposing size, how did you approach its integration into scenes with the actors? What techniques did you use to create such a realistic and menacing appearance? Fiona Campbell Westgate // For the Bloater, a stunt performer wearing a motion capture suit was filmed on set. This provided interaction with the actors and the environment. VFX enhanced the intensity of his movements, incorporating simulations to the CG Bloater’s skin and muscles that would reflect the weight and force as this terrifying creature moves.  Seattle in The Last of Us is a completely devastated city. Can you talk about how you recreated this destruction? What were the most difficult visual aspects to realize for this post-apocalyptic city? Fiona Campbell Westgate // We were meticulous in blending the CG destruction with the practical environment. The flora’s ability to overtake the environment had to be believable, and we adhered to the principle of form follows function. Due to the vastness of the CG devastation it was crucial to avoid repetitive effects. Consequently, our vendors were tasked with creating bespoke designs that evoked a sense of awe and beauty. Was Seattle’s architecture a key element in how you designed the visual effects? How did you adapt the city’s real-life urban landscape to meet the needs of the story while maintaining a coherent aesthetic? Alex Wang // It’s always important to Craig and Neil that we remain true to the cities our characters are in. DNEG was one of our primary vendors for Boston in Season 1, so it was natural for them to return for Season 2, this time focusing on Seattle. DNEG’s VFX Supervisor, Stephen James, who played a crucial role in developing the visual language of Boston for Season 1, also returns for this season. Stephen and Melaina Maceled a team to Seattle to shoot plates and perform lidar scans of parts of the city. We identified the buildings unique to Seattle that would have existed in 2003, so we ensured these buildings were always included in our establishing shots. Overgrowth and destruction have significantly influenced the environments in The Last of Us. The environment functions almost as a character in both Season 1 and Season 2. In the last season, the building destruction in Boston was primarily caused by military bombings. During this season, destruction mainly arises from dilapidation. Living in the Pacific Northwest, I understand how damp it can get for most of the year. I imagined that, over 20 years, the integrity of the buildings would be compromised by natural forces. This abundant moisture creates an exceptionally lush and vibrant landscape for much of the year. 
Therefore, when designing Seattle, we ensured that the destruction and overgrowth appeared intentional and aesthetically distinct from those of Boston. Fiona Campbell Westgate // Led by Stephen James, DNEG VFX Supervisor, and Melaina Mace, DNEG DFX Supervisor, the team captured photography, drone footage and the Clear Angle team captured LiDAR data over a three-day period in Seattle. It was crucial to include recognizable Seattle landmarks that would resonate with people familiar with the game.  The devastated city almost becomes a character in itself this season. What aspects of the visual effects did you have to enhance to increase the immersion of the viewer into this hostile and deteriorated environment? Fiona Campbell Westgate // It is indeed a character. Craig wanted it to be deteriorated but to have moments where it’s also beautiful in its devastation. For instance, in the Music Store in Episode 4 where Ellie is playing guitar for Dina, the deteriorated interior provides a beautiful backdrop to this intimate moment. The Set Decorating team dressed a specific section of the set, while VFX extended the destruction and overgrowth to encompass the entire environment, immersing the viewer in strange yet familiar surroundings. Photograph by Liane Hentscher/HBO The sequence where Ellie navigates a boat through a violent storm is stunning. What were the key challenges in creating this scene, especially with water simulation and the storm’s effects? Alex Wang // In the concluding episode of Season 2, Ellie is deep in Seattle, searching for Abby. The episode draws us closer to the Aquarium, where this area of Seattle is heavily flooded. Naturally, this brings challenges with CG water. In the scene where Ellie encounters Isaac and the W.L.F soldiers by the dock, we had a complex shoot involving multiple locations, including a water tank and a boat gimbal. There were also several full CG shots. For Isaac’s riverine boat, which was in a stormy ocean, I felt it was essential that the boat and the actors were given the appropriate motion. Weta FX assisted with tech-vis for all the boat gimbal work. We began with different ocean wave sizes caused by the storm, and once the filmmakers selected one, the boat’s motion in the tech-vis fed the special FX gimbal. When Ellie gets into the Jon boat, I didn’t want it on the same gimbal because I felt it would be too mechanical. Ellie’s weight needed to affect the boat as she got in, and that wouldn’t have happened with a mechanical gimbal. So, we opted to have her boat in a water tank for this scene. Special FX had wave makers that provided the boat with the appropriate movement. Instead of guessing what the ocean sim for the riverine boat should be, the tech- vis data enabled DNEG to get a head start on the water simulations in post-production. Craig wanted this sequence to appear convincingly dark, much like it looks out on the ocean at night. This allowed us to create dramatic visuals, using lightning strikes at moments to reveal depth. Were there any memorable moments or scenes from the series that you found particularly rewarding or challenging to work on from a visual effects standpoint? Alex Wang // The Last of Us tells the story of our characters’ journey. If you look at how season 2 begins in Jackson, it differs significantly from how we conclude the season in Seattle. We seldom return to the exact location in each episode, meaning every episode presents a unique challenge. The scope of work this season has been incredibly rewarding. 
We burned a Bloater, and we also introduced spores this season! Photograph by Liane Hentscher/HBO Looking back on the project, what aspects of the visual effects are you most proud of? Alex Wang // The Jackson Battle was incredibly complex, involving a grueling and lengthy shoot in quite challenging conditions, along with over 600 VFX shots in episode 2. It was truly inspiring to witness the determination of every department and vendor to give their all and create something remarkable. Fiona Campbell Westgate // I am immensely proud of the exceptional work accomplished by all of our vendors. During the VFX reviews, I found myself clapping with delight when the final shots were displayed; it was exciting to see remarkable results of the artists’ efforts come to light.  How long have you worked on this show? Alex Wang // I’ve been on this season for nearly two years. Fiona Campbell Westgate // A little over one year; I joined the show in April 2024. What’s the VFX shots count? Alex Wang // We had just over 2,500 shots this Season. Fiona Campbell Westgate // In Season 2, there were a total of 2656 visual effects shots. What is your next project? Fiona Campbell Westgate // Stay tuned… A big thanks for your time. WANT TO KNOW MORE?Blackbird: Dedicated page about The Last of Us – Season 2 website.DNEG: Dedicated page about The Last of Us – Season 2 on DNEG website.Important Looking Pirates: Dedicated page about The Last of Us – Season 2 website.RISE: Dedicated page about The Last of Us – Season 2 website.Weta FX: Dedicated page about The Last of Us – Season 2 website. © Vincent Frei – The Art of VFX – 2025 #last #season #alex #wang #production
    WWW.ARTOFVFX.COM
    The Last of Us – Season 2: Alex Wang (Production VFX Supervisor) & Fiona Campbell Westgate (Production VFX Producer)
    After detailing the VFX work on The Last of Us Season 1 in 2023, Alex Wang returns to reflect on how the scope and complexity have evolved in Season 2. With close to 30 years of experience in the visual effects industry, Fiona Campbell Westgate has contributed to major productions such as Ghost in the Shell, Avatar: The Way of Water, Ant-Man and the Wasp: Quantumania, and Nyad. Her work on Nyad earned her a VES Award for Outstanding Supporting Visual Effects in a Photoreal Feature. Collaboration with Craig Mazin and Neil Druckmann is key to shaping the visual universe of The Last of Us. Can you share with us how you work with them and how they influence the visual direction of the series? Alex Wang // Craig visualizes the shot or scene before putting words on the page. His writing is always exceptionally detailed and descriptive, ultimately helping us to imagine the shot. Of course, no one understands The Last of Us better than Neil, who knows all aspects of the lore very well. He’s done much research and design work with the Naughty Dog team, so he gives us good guidance regarding creature and environment designs. I always try to begin with concept art to get the ball rolling with Craig and Neil’s ideas. This season, we collaborated with Chromatic Studios for concept art. They also contributed to the games, so I felt that continuity was beneficial for our show. Fiona Campbell Westgate // From the outset, it was clear that collaborating with Craig would be an exceptional experience. Early meetings revealed just how personable and invested Craig is. He works closely with every department to ensure that each episode is done to the highest level. Craig places unwavering trust in our VFX Supervisor, Alex Wang. They have an understanding between them that lends to an exceptional partnership. As the VFX Producer, I know how vital the dynamic between the Showrunner and VFX Supervisor is; working with these two has made for one of the best professional experiences of my career.  Photograph by Liane Hentscher/HBO How has your collaboration with Craig evolved between the first and second seasons? Were there any adjustments in the visual approach or narrative techniques you made this season? Alex Wang // Since everything was new in Season 1, we dedicated a lot of time and effort to exploring the show’s visual language, and we all learned a great deal about what worked and what didn’t for the show. In my initial conversations with Craig about Season 2, it was clear that he wanted to expand the show’s scope by utilizing what we established and learned in Season 1. He felt significantly more at ease fully committing to using VFX to help tell the story this season. The first season involved multiple VFX studios to handle the complexity of the effects. How did you divide the work among different studios for the second season? Alex Wang // Most of the vendors this season were also in Season 1, so we already had a shorthand. The VFX Producer, Fiona Campbell Westgate, and I work closely together to decide how to divide the work among our vendors. The type of work needs to be well-suited for the vendor and fit into our budget and schedule. We were extremely fortunate to have the vendors we did this season. I want to take this opportunity to thank Weta FX, DNEG, RISE, Distillery VFX, Storm Studios, Important Looking Pirates, Blackbird, Wylie Co., RVX, and VDK. We also had ILM for concept art and Digital Domain for previs. 
Fiona Campbell Westgate // Alex Wang and I were very aware of the tight delivery schedule, which added to the challenge of distributing the workload. We planned the work based on the individual studio’s capabilities, and tried not to burden them with back to back episodes wherever possible. Fortunately, there was shorthand with vendors from Season One, who were well-acquainted with the process and the quality of work the show required. The town of Jackson is a key location in The Last of Us. Could you explain how you approached creating and expanding this environment for the second season? Alex Wang // Since Season 1, this show has created incredible sets. However, the Jackson town set build is by far the most impressive in terms of scope. They constructed an 822 ft x 400 ft set in Minaty Bay that resembled a real town! I had early discussions with Production Designer Don MacAulay and his team about where they should concentrate their efforts and where VFX would make the most sense to take over. They focused on developing the town’s main street, where we believed most scenes would occur. There is a big reveal of Jackson in the first episode after Ellie comes out of the barn. Distillery VFX was responsible for the town’s extension, which appears seamless because the team took great pride in researching and ensuring the architecture aligned with the set while staying true to the tone of Jackson, Wyoming. Fiona Campbell Westgate // An impressive set was constructed in Minaty Bay, which served as the foundation for VFX to build upon. There is a beautiful establishing shot of Jackson in Episode 1 that was completed by Distillery, showing a safe and almost normal setting as Season Two starts. Across the episodes, Jackson set extensions were completed by our partners at RISE and Weta. Each had a different phase of Jackson to create, from almost idyllic to a town immersed in Battle.  What challenges did you face filming Jackson on both real and virtual sets? Was there a particular fusion between visual effects and live-action shots to make it feel realistic? Alex Wang // I always advocate for building exterior sets outdoors to take advantage of natural light. However, the drawback is that we cannot control the weather and lighting when filming over several days across two units. In Episode 2, there’s supposed to be a winter storm in Jackson, so maintaining consistency within the episode was essential. On sunny and rainy days, we used cranes to lift large 30x60ft screens to block the sun or rain. It was impossible to shield the entire set from the rain or sun, so we prioritized protecting the actors from sunlight or rain. Thus, you can imagine there was extensive weather cleanup for the episode to ensure consistency within the sequences. Fiona Campbell Westgate // We were fortunate that production built a large scale Jackson set. It provided a base for the full CG Jackson aerial shots and CG Set Extensions. The weather conditions at Minaty Bay presented a challenge during the filming of the end of the Battle sequence in Episode 2. While there were periods of bright sunshine, rainfall occurred during the filming of the end of the Battle sequence in Episode 2. In addition to the obvious visual effects work, it became necessary to replace the ground cover. Photograph by Liane Hentscher/HBO The attack on Jackson by the horde of infected in season 2 is a very intense moment. How did you approach the visual effects for this sequence? 
What techniques did you use to make the scale of the attack feel as impressive as it did? Alex Wang // We knew this would be a very complex sequence to shoot, and for it to be successful, we needed to start planning with the HODs from the very beginning. We began previs during prep with Weta FX and the episode’s director, Mark Mylod. The previs helped us understand Mark and the showrunner’s vision. This then served as a blueprint for all departments to follow, and in many instances, we filmed the previs. Fiona Campbell Westgate // The sheer size of the CG Infected Horde sets the tone for the scale of the Battle. It’s an intimidating moment when they are revealed through the blowing snow. The addition of CG explosions and atmospheric effects contributed in adding scale to the sequence.  Can you give us an insight into the technical challenges of capturing the infected horde? How much of the effect was done using CGI, and how much was achieved with practical effects? Alex Wang // Starting with a detailed previs that Mark and Craig approved was essential for planning the horde. We understood that we would never have enough stunt performers to fill a horde, nor could they carry out some stunts that would be too dangerous. I reviewed the previs with Stunt Coordinator Marny Eng numerous times to decide the best placements for her team’s stunt performers. We also collaborated with Barrie Gower from the Prosthetics team to determine the most effective allocation of his team’s efforts. Stunt performers positioned closest to the camera would receive the full prosthetic treatment, which can take hours. Weta FX was responsible for the incredible CG Infected horde work in the Jackson Battle. They have been a creative partner with HBO’s The Last of Us since Season 1, so they were brought on early for Season 2. I began discussions with Weta’s VFX supervisor, Nick Epstein, about how we could tackle these complex horde shots very early during the shoot. Typically, repetition in CG crowd scenes can be acceptable, such as armies with soldiers dressed in the same uniform or armour. However, for our Infected horde, Craig wanted to convey that the Infected didn’t come off an assembly line or all shop at the same clothing department store. Any repetition would feel artificial. These Infected were once civilians with families, or they were groups of raiders. We needed complex variations in height, body size, age, clothing, and hair. We built our base library of Infected, and then Nick and the Weta FX team developed a “mix and match” system, allowing the Infected to wear any costume and hair groom. A procedural texturing system was also developed for costumes, providing even greater variation. The most crucial aspect of the Infected horde was their motion. We had numerous shots cutting back-to-back with practical Infected, as well as shots where our CG Infected ran right alongside a stunt horde. It was incredibly unforgiving! Weta FX’s animation supervisor from Season 1, Dennis Yoo, returned for Season 2 to meet the challenge. Having been part of the first season, Dennis understood the expectations of Craig and Neil. Similar to issues of model repetition within a horde, it was relatively easy to perceive repetition, especially if they were running toward the same target. It was essential to enhance the details of their performances with nuances such as tripping and falling, getting back up, and trampling over each other. There also needed to be a difference in the Infected’s running speed. 
Fiona Campbell Westgate // Nick Epstein, Weta VFX Supervisor, and Dennis Yoo, Weta Animation Supervisor, had to add hero, close-up horde that would integrate with practical stunt performers. They achieved this through over 60 motion capture sessions, run through a deformation system they developed. Every detail was applied to allow a seamless blend with our practical stunt performances. The Weta team also created a custom costume and hair system that gave individual looks to the CG Infected horde. Thanks to these efforts, we were able to avoid the repetitive look of a CG crowd.

The movement of the infected horde is crucial for the intensity of the scene. How did you manage the animation and simulation of the infected to ensure smooth and realistic interaction with the environment?

Fiona Campbell Westgate // We worked closely with the Stunt department to plan out positioning and where VFX would be adding the CG horde. Craig Mazin wanted the Infected horde to move in a way that humans cannot. The deformation system kept the body shape anatomically correct while allowing us to push the limits of how a human physically moves.
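Campbell Westgate describes the deformation system only in outline: it lets animators push past human motion while keeping bodies anatomically correct. One plausible ingredient of such a guard, sketched here purely as an assumption (this is not Weta’s implementation), is a pass that restores rest-pose bone lengths after an exaggerated pose edit:

```python
import numpy as np

# Skeleton as parent indices; joint 0 is the root. An 8-joint stand-in chain.
PARENTS = [-1, 0, 1, 2, 1, 4, 1, 6]
REST = np.random.rand(8, 3)                  # rest-pose joint positions (toy data)
BONE_LEN = {j: np.linalg.norm(REST[j] - REST[PARENTS[j]])
            for j in range(1, len(PARENTS))}

def enforce_bone_lengths(pose: np.ndarray) -> np.ndarray:
    """After an exaggerated pose edit, re-impose rest bone lengths so the
    body stays anatomically plausible however far animation pushed it."""
    fixed = pose.copy()
    for j in range(1, len(PARENTS)):         # parents are indexed before children
        p = fixed[PARENTS[j]]
        d = fixed[j] - p
        n = np.linalg.norm(d)
        if n > 1e-8:
            # Keep the joint's direction from its parent, restore its length.
            fixed[j] = p + d / n * BONE_LEN[j]
    return fixed

exaggerated = REST + np.random.normal(scale=0.3, size=REST.shape)
print(enforce_bone_lengths(exaggerated).round(3))
```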
The Bloater makes a terrifying return this season. What were the key challenges in designing and animating this creature? How did you work on the Bloater’s interaction with the environment and other characters?

Alex Wang // In Season 1, the Kansas City cul-de-sac sequence featured only a handful of Bloater shots. This season, however, nearly forty shots showcase the Bloater in broad daylight during the Battle of Jackson. We needed to redesign the Bloater asset so it would hold up in close-up shots from head to toe. Weta FX designed the Bloater for Season 1 and revamped the design for this season. Starting with the Bloater’s silhouette, it had to appear large, intimidating and menacing. We explored enlarging the cordyceps head shape to make it feel almost like a crown, enhancing the Bloater’s impressive and strong presence.

During filming, a stunt double stood in for the Bloater, mainly for scale reference and composition. It also helped the Infected stunt performers understand the Bloater’s spatial position, so they could avoid running through his space. Once we had an edit, Dennis mocapped the Bloater’s performances with his team. It is always challenging to get the motion right for a creature that weighs 600 pounds. We didn’t want the mocap to be overly exaggerated, but it breaks the character if the Bloater feels too “light.” The brilliant animation team at Weta FX brought the Bloater to life and nailed it!

When Tommy goes head-to-head with the Bloater, Craig was quite specific during prep about how the Bloater would bubble, melt and burn as Tommy torches him with the flamethrower. Important Looking Pirates took on the “Burning Bloater” sequence, led by VFX Supervisor Philip Engström. They began with extensive R&D to ensure the Bloater’s skin would bubble and burn convincingly. ILP took the final Bloater asset from Weta FX and resculpted and retextured it for the Bloater’s final burn state. Craig felt it was important for the Bloater to appear maimed at the end. The layers of FX were so complex that the R&D continued almost to the end of the delivery schedule.

Fiona Campbell Westgate // This season the Bloater had to be bigger and more intimidating. The CG asset was recreated to withstand the scrutiny of close-ups and daylight. Both Craig Mazin and Neil Druckmann worked closely with us during the build. We referenced the game and folded elements of that version into ours. You’ll notice that his head is in the shape of a crown; this conveys that he’s a powerful force.

During the Burning Bloater sequence in Episode 2, we brainstormed with Philip Engström, ILP VFX Supervisor, on how this creature would react to the flamethrower and how it would affect the ground as it burns. When the Bloater finally falls to the ground and dies, the extraordinary detail of the embers burning and the fluids draining and melting the surrounding snow really sells that the CG creature is in the terrain.

Given the Bloater’s imposing size, how did you approach its integration into scenes with the actors? What techniques did you use to create such a realistic and menacing appearance?

Fiona Campbell Westgate // For the Bloater, a stunt performer wearing a motion capture suit was filmed on set. This provided interaction with the actors and the environment. VFX enhanced the intensity of his movements, adding simulations to the CG Bloater’s skin and muscles to reflect the weight and force of this terrifying creature as it moves.

Seattle in The Last of Us is a completely devastated city. Can you talk about how you recreated this destruction? What were the most difficult visual aspects to realize for this post-apocalyptic city?

Fiona Campbell Westgate // We were meticulous in blending the CG destruction with the practical environment. The flora’s ability to overtake the environment had to be believable, and we adhered to the principle of form follows function. Given the vastness of the CG devastation, it was crucial to avoid repetitive effects. Consequently, our vendors were tasked with creating bespoke designs that evoked a sense of awe and beauty.

Was Seattle’s architecture a key element in how you designed the visual effects? How did you adapt the city’s real-life urban landscape to meet the needs of the story while maintaining a coherent aesthetic?

Alex Wang // It’s always important to Craig and Neil that we remain true to the cities our characters are in. DNEG was one of our primary vendors for Boston in Season 1, so it was natural for them to return for Season 2, this time focusing on Seattle. DNEG’s VFX Supervisor, Stephen James, who played a crucial role in developing the visual language of Boston for Season 1, also returned for this season. Stephen and Melaina Mace (DFX Supervisor) led a team to Seattle to shoot plates and perform lidar scans of parts of the city. We identified the buildings unique to Seattle that would have existed in 2003 and ensured they were always included in our establishing shots.

Overgrowth and destruction have significantly shaped the environments of The Last of Us; the environment functions almost as a character in both seasons. Last season, the building destruction in Boston was primarily caused by military bombings. This season, destruction mainly arises from dilapidation. Living in the Pacific Northwest, I understand how damp it can get for most of the year. I imagined that, over 20 years, the integrity of the buildings would be compromised by natural forces, while that abundant moisture creates an exceptionally lush and vibrant landscape for much of the year. Therefore, when designing Seattle, we ensured that the destruction and overgrowth appeared intentional and aesthetically distinct from Boston’s.
Fiona Campbell Westgate // Led by Stephen James, DNEG VFX Supervisor, and Melaina Mace, DNEG DFX Supervisor, the team captured photography and drone footage, and the Clear Angle team captured LiDAR data, over a three-day period in Seattle. It was crucial to include recognizable Seattle landmarks that would resonate with people familiar with the game.

The devastated city almost becomes a character in itself this season. What aspects of the visual effects did you have to enhance to increase the immersion of the viewer into this hostile and deteriorated environment?

Fiona Campbell Westgate // It is indeed a character. Craig wanted it to be deteriorated but to have moments where it is also beautiful in its devastation. For instance, in the music store in Episode 4, where Ellie plays guitar for Dina, the deteriorated interior provides a beautiful backdrop to this intimate moment. The set decorating team dressed a specific section of the set, while VFX extended the destruction and overgrowth to encompass the entire environment, immersing the viewer in strange yet familiar surroundings.

Photograph by Liane Hentscher/HBO

The sequence where Ellie navigates a boat through a violent storm is stunning. What were the key challenges in creating this scene, especially with water simulation and the storm’s effects?

Alex Wang // In the concluding episode of Season 2, Ellie is deep in Seattle, searching for Abby. The episode draws us closer to the Aquarium, in an area of Seattle that is heavily flooded, which naturally brings challenges with CG water. The scene where Ellie encounters Isaac and the W.L.F. soldiers by the dock was a complex shoot involving multiple locations, including a water tank and a boat gimbal. There were also several full CG shots.

For Isaac’s riverine boat, out on a stormy ocean, I felt it was essential that the boat and the actors were given the appropriate motion. Weta FX assisted with tech-vis for all the boat gimbal work. We began with different ocean wave sizes caused by the storm, and once the filmmakers selected one, the boat’s motion from the tech-vis fed the special FX gimbal. When Ellie gets into the jon boat, I didn’t want it on the same gimbal because I felt it would be too mechanical: Ellie’s weight needed to affect the boat as she got in, and that wouldn’t have happened with a mechanical gimbal. So we opted to float her boat in a water tank for this scene, where Special FX wave makers gave the boat the appropriate movement. Instead of guessing what the ocean sim for the riverine boat should be, the tech-vis data enabled DNEG to get a head start on the water simulations in post-production. Craig wanted this sequence to appear convincingly dark, much as it looks out on the ocean at night. This allowed us to create dramatic visuals, using lightning strikes at moments to reveal depth.
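Wang describes one approved tech-vis wave setup driving both the on-set gimbal and DNEG’s later ocean sims. As a purely illustrative sketch (the wave parameters and CSV schema below are invented, not the production pipeline), boat pitch, roll and heave could be sampled from a small sum of sinusoids and exported as a curve a gimbal controller might ingest:

```python
import csv
import math

# Assumed storm-swell components per channel: (amplitude, period s, phase rad).
HEAVE = [(0.6, 7.0, 0.0), (0.2, 3.1, 1.3)]   # metres
PITCH = [(4.0, 6.2, 0.4), (1.5, 2.7, 2.1)]   # degrees
ROLL  = [(6.0, 8.5, 0.9), (2.0, 3.6, 0.2)]   # degrees

def sample(waves, t):
    """Sum-of-sines sample of one motion channel at time t (seconds)."""
    return sum(a * math.sin(2 * math.pi * t / p + ph) for a, p, ph in waves)

FPS, SECONDS = 24, 10
with open("boat_gimbal_curve.csv", "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["frame", "heave_m", "pitch_deg", "roll_deg"])
    for frame in range(FPS * SECONDS):
        t = frame / FPS
        w.writerow([frame,
                    round(sample(HEAVE, t), 4),
                    round(sample(PITCH, t), 4),
                    round(sample(ROLL, t), 4)])
```

The point of exporting a shared curve is the one the interview makes: the same selected wave state can drive the physical gimbal on the day and seed the CG water simulation later, so the two never have to be reconciled by eye.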
Were there any memorable moments or scenes from the series that you found particularly rewarding or challenging to work on from a visual effects standpoint?

Alex Wang // The Last of Us tells the story of our characters’ journey. The way Season 2 begins in Jackson differs significantly from how we conclude the season in Seattle, and we seldom return to the same location in any episode, so every episode presents a unique challenge. The scope of work this season has been incredibly rewarding. We burned a Bloater, and we also introduced spores this season!

Photograph by Liane Hentscher/HBO

Looking back on the project, what aspects of the visual effects are you most proud of?

Alex Wang // The Jackson Battle was incredibly complex, involving a grueling and lengthy shoot in quite challenging conditions, along with over 600 VFX shots in Episode 2. It was truly inspiring to witness the determination of every department and vendor to give their all and create something remarkable.

Fiona Campbell Westgate // I am immensely proud of the exceptional work accomplished by all of our vendors. During the VFX reviews, I found myself clapping with delight when the final shots were displayed; it was exciting to see the remarkable results of the artists’ efforts come to light.

How long have you worked on this show?

Alex Wang // I’ve been on this season for nearly two years.

Fiona Campbell Westgate // A little over one year; I joined the show in April 2024.

What’s the VFX shot count?

Alex Wang // We had just over 2,500 shots this season.

Fiona Campbell Westgate // In Season 2, there were a total of 2,656 visual effects shots.

What is your next project?

Fiona Campbell Westgate // Stay tuned…

A big thanks for your time.

WANT TO KNOW MORE?
Blackbird: Dedicated page about The Last of Us – Season 2 on Blackbird website.
DNEG: Dedicated page about The Last of Us – Season 2 on DNEG website.
Important Looking Pirates: Dedicated page about The Last of Us – Season 2 on Important Looking Pirates website.
RISE: Dedicated page about The Last of Us – Season 2 on RISE website.
Weta FX: Dedicated page about The Last of Us – Season 2 on Weta FX website.

© Vincent Frei – The Art of VFX – 2025
  • ‘A Minecraft Movie’: Wētā FX Helps Adapt an Iconic Game One Block at a Time

Adapting the iconic, block-based design aesthetic of Mojang’s beloved Minecraft videogame into the hit feature film comedy adventure, A Minecraft Movie, posed an enormous number of hurdles for director Jared Hess and Oscar-winning Production VFX Supervisor Dan Lemmon. Tasked with helping translate the iconic pixelated world into something cinematically engaging, while remaining true to its visual DNA, was Wētā FX, which delivered 450 VFX shots on the film. Two of their key leads were VFX Supervisor Sheldon Stopsack and Animation Supervisor Kevin Estey.
But the shot count merely scratches the surface of the extensive work the studio performed. Wētā led the design and creation of The Overworld, 64 unique terrains spanning deserts, lush forests, oceans, and mountain ranges, all combined into one continuous environment; those assets were also shared with Digital Domain for their work on the third-act battle. Wētā also handled extensive work on the lava-filled hellscape of The Nether, using Unreal Engine for early representations in previs, scene scouting, and on set during principal photography, before refining the environment during post-production. They also dressed The Nether with lava, fire, and torches, along with atmospherics and particulates like smoke, ash, and embers.

    But wait… there’s more!
    The studio’s Art Department, working closely with Hess, co-created the look and feel of all digital characters in the film. For Malgosha’s henchmen, the Piglins, Wētā designed and created 12 different variants, all with individual characteristics and personalities. They also designed sheep, bees, pandas, zombies, skeletons, and lovable wolf Dennis. Many of these characters were provided to other vendors for their work on the film.
    Needless to say, the studio truly became a “Master Builder” on the show.

The film is based on the hugely popular game Minecraft, first released by Sweden’s Mojang Studios in 2011 and purchased by Microsoft for $2.5 billion in 2014, which immerses players in a low-res, pixelated “sandbox” simulation where they can use blocks to build entire worlds.

    In a far-ranging interview, Stopsack and Estey shared with AWN a peek into their creative process, from early design exploration to creation of an intricate practical cloak for Malgosha and the use of Unreal Engine for previs, postvis, and real-time onset visualization.
    Dan Sarto: The film is filled with distinct settings and characters sporting various “block” styled features. Can you share some of the work you did on the environments, character design, and character animation?
Sheldon Stopsack: There’s so much to talk about and, truth be told, if we were to touch on everything, we would probably need to spend the whole day together.
    Kevin Estey: Sheldon and I realized that when we talk about the film, either amongst ourselves or with someone else, we could just keep going, there are so many stories to tell.
    DS: Well, start with The Overworld and The Nether. How did the design process begin? What did you have to work with?
SS: Visual effects is a tricky business, you know. It's always difficult. Always challenging. However, Minecraft stood out to us as not your usual quote unquote standard visual effects project, even though as you know, there is no standard visual effects project because they're all somehow different. They all come with their own creative ideas, inspirations, and challenges. But Minecraft, right from the get-go, was different, simply by the fact that when you first consider the idea of making such a live-action movie, you instantly ask yourself, “How do we make this work? How do we combine these two inherently very, very different but unique worlds?” That was everyone’s number one question. How do we land this? Where do we land this? And I don't think that any of us really had an answer, including our clients, Dan Lemmon [Production VFX Supervisor] and Jared Hess [the film’s director]. Everyone was really open to this journey. That's compelling for us, to get out of our comfort zone. It makes you nervous because there are no real obvious answers.
    KE: Early on, we seemed to thrive off these kinds of scary creative challenges. There were lots of question marks. We had many moments when we were trying to figure out character designs. We had a template from the game, but it was an incredibly vague, low-resolution template. And there were so many ways that we could go. But that design discovery throughout the project was really satisfying. 

    DS: Game adaptations are never simple. There usually isn’t much in the way of story. But with Minecraft, from a visual standpoint, how did you translate low res, block-styled characters into something entertaining that could sustain a 100-minute feature film?
SS: Everything was a question mark. Using the lava that you see in The Nether as one example, we had beautiful concept art for all our environments, The Overworld and The Nether, but those concepts only really took you this far. They didn’t represent the block shapes or give you a clear answer as to how realistic some of those materials, shapes and structures would be. How organic would we go? All of this needed to be explored. For the lava, we had stylized concept pieces, with block-shaped viscosity as it flowed down. But we spent months with our effects team, and Dan and Jared, just riffing on ideas. We came full circle, with the lava ending up being more realistic, a naturally viscous liquid based on real physics. And the same goes with the waterfall that you see in the Overworld. 
    The question is, how far do we take things into the true Minecraft representation of things? How much do we scale back a little bit and ground ourselves in reality, with effects we’re quite comfortable producing as a company? There's always a tradeoff to find that balance of how best to combine what’s been filmed, the practical sets and live-action performances, with effects. Where’s the sweet spot? What's the level of abstraction? What's honest to the game? As much as some call Minecraft a simple game, it isn't simple, right? It's incredibly complex. It's got a set of rules and logic to the world building process within the game that we had to learn, adapt, and honor in many ways.
When our misfits first arrive and we have these big vistas and establishing shots, when you really look at it, you recognize a lot of the things that we tried to adapt from the game. There are different biomes, like the Badlands, which is very sandstone-y; there's the Woodlands, which is a lush environment with cherry blossom trees; you’ve got the snow biome with big mountains in the background. Our intent was to honor the game.
KE: I took a big cue from a lot of the early designs, and particularly the approach that Jared liked for the characters and the design in general, which was maintaining the stylized, blocky aesthetic, but covering them in realistic flesh, fur, things that were going to make them appear as real as possible despite the absolutely unreal designs of their bodies. And so essentially, it was squared skeleton… squarish bones with flesh and realistic fur laid over top. We tried various things, all extremely stylized. The Creepers are a good example. We tried all kinds of ways for them to explode. Sheldon found a great reference for a cat coughing up a hairball. He was nice enough to censor the worst part of it, but those undulations in the chest and ribcage… Jared spoke of the Creepers being basically tragic characters that only wanted to be loved, to just be close to you. But sadly, whenever they did, they’d explode. So, we experimented with a lot of different motions of how they’d explode.

    DS: Talk about the process of determining how these characters would move. None seem to have remotely realistic proportions in their limbs, bodies, or head size.
KE: There were a couple things that Jared always seemed to be chasing. One was just something that would make him laugh. Of course, it had to sit within the bounds of how a zombie might move, or a skeleton might move, as we were interpreting the game. But the main thing was just, was it fun and funny? I still remember one of the earliest gags they came up with in mocap sessions, even before I joined the show, was how the zombies get up after they fall over. It was sort of like a tripod, where its face and feet were planted and its butt shot up in the air.
    After a lot of experimentation, we came up with basic personality types for each character. There were 12 different types of Piglins. The zombies were essentially like you're coming home from the pub after a few too many pints and you're just trying to get in the door, but you can't find your keys. Loose, slightly inebriated movement. The best movement we found for the skeletons was essentially like an old man with rigid limbs and lack of ligaments that was chasing kids off his lawn. And so, we created this kind of bible of performance types that really helped guide performers on the mocap stage and animators later on.
    SS: A lot of our exploration didn’t stick. But Jared was the expert in all of this. He always came up with some quirky last-minute idea. 
KE: My favorite from Jared came in the middle of one mocap shoot. He walked up to me and said he had this stupid idea. I said OK, go on. He said, what if Malgosha had these two little pigs next to her, like Catholic altar boys [thurifers], swinging incense [a thurible]. Can we do that? I talked to our stage manager, and we quickly put together a temporary prop for the incense burners. And we got two performers who just stood there. What are they going to do? Jared said, “Nothing. Just stand there and swing. I think it would look funny.” So, that’s what we did. We dubbed them the Priesty Boys. And they are there throughout the film. That was amazing about Jared. He was always like, let's just try it, see if it works. Otherwise ditch it.

    DS: Tell me about your work on Malgosha. And I also want to discuss your use of Unreal Engine and the previs and postvis work. 
SS: For Malgosha as a character, our art department did a phenomenal job finding the character design at the concept phase. But it was a collective effort. So many contributors were involved in her making. And I'm not just talking about the digital artists here on our side. It was a joint venture of different people having different explorations and experiments. It started off with the concept work as a foundation, which we mocked up with 3D sketches before building a model. But with Malgosha, we also had the costume department on the production side building this elaborate cloak. Remember, that cloak makes up 80, 85% of her appearance. It's almost like a character in itself, the way we utilized it. And the costume department built this beautiful, elaborate, incredibly intricate, practical version of it that we intended to use on set for the performer to wear. It ended up being too impractical because it was too heavy. But it was beautiful. So, while we didn't really use it on set, it gave us something physical to incorporate into our digital version.
    KE: Alan Henry is the motion performer who portrayed her on set and on the mocap stage. I've known him for close to 15 years. I started working with him on The Hobbit films. He was a stunt performer who eventually rolled into doing motion capture with us on The Hobbit. He’s an incredible actor and absolutely hilarious and can adapt to any sort of situation. He’s so improvisational. He came up with an approach to Malgosha very quickly. Added a limp so that she felt decrepit, leaning on the staff, adding her other arm as kind of like a gimp arm that she would point and gesture with.  
    Even though she’s a blocky character, her anatomy is very much a biped, with rounder limbs than the other Piglins. She's got hooves, is somewhat squarish, and her much more bulky mass in the middle was easier to manipulate and move around. Because she would have to battle with Steve in the end, she had to have a level of agility that even some of the Piglins didn't have.

    DS: Did Unreal Engine come into play with her? 
    SS: Unreal was used all the way through the project. Dan Lemmon and his team early on set up their own virtual art department to build representations of the Overworld and the Nether within the context of Unreal. We and Sony Imageworks tried to provide recreations of these environments that were then used within Unreal to previsualize what was happening on set during shooting of principal photography. And that's where our mocap and on-set teams were coming into play. Effects provided what we called the Nudge Cam. It was a system to do real-time tracking using a stereo pair of Basler computer vision cameras that were mounted onto the sides of the principal camera. We provided the live tracking that was then composited in real time with the Unreal Engine content that all the vendors had provided. It was a great way of utilizing Unreal to give the camera operators or DOP, even Jared, a good sense of what we would actually shoot. It gave everyone a little bit of context for the look and feel of what you could actually expect from these scenes. 
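Stopsack describes the Nudge Cam only at a high level: a calibrated stereo pair of Basler machine-vision cameras riding the principal camera, feeding live tracking into a real-time Unreal composite. The core geometric step such a rig relies on, lifting a matched 2D feature back into 3D, looks roughly like this in OpenCV (the intrinsics and 20 cm baseline below are stand-in values for illustration, not Wētā’s calibration):

```python
import cv2
import numpy as np

# Assumed, simplified calibration: identical pinhole intrinsics, right camera
# offset 20 cm along X (the stereo baseline). A real rig would use values
# from a proper stereo calibration, not these stand-ins.
K = np.array([[1400.0, 0.0, 960.0],
              [0.0, 1400.0, 540.0],
              [0.0, 0.0, 1.0]])
P_left  = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_right = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])

def triangulate(pt_left, pt_right):
    """Lift one matched 2D feature (pixel coords in each view) to a 3D point
    in the left camera's space; run per frame for live tracking."""
    pl = np.array(pt_left, dtype=np.float64).reshape(2, 1)
    pr = np.array(pt_right, dtype=np.float64).reshape(2, 1)
    X_h = cv2.triangulatePoints(P_left, P_right, pl, pr)  # homogeneous 4x1
    return (X_h[:3] / X_h[3]).ravel()

# Toy matched feature with 25 px of disparity: lands ~11.2 m from the rig.
print(triangulate((980.0, 550.0), (955.0, 550.0)))
```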
Because we started this journey using Unreal with on-set work in mind, we internally decided, look, let's take this further. Let's take this into post-production as well. What would it take to utilize Unreal for shot creation? And it was really exclusively used on the Nether environment. I don’t want to say we used it for matte painting replacement. We used it more for say, let's build this extended environment in Unreal. Not only use it as a render engine with this reasonably fast turnaround but also use it for what it's good at: authoring things, quickly changing things, moving columns around, manipulating things, dressing them, lighting them, and rendering them. It became sort of a tool that we used in place of a traditional matte painting for the extended environments.
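The authoring workflow Stopsack describes, moving columns around and dressing the Nether inside the engine, is the kind of thing Unreal exposes through its editor Python scripting. As a hedged sketch only (the asset path and layout are invented, and this is not Wētā’s pipeline), dressing a run of columns from the editor might look like:

```python
# Editor-side Unreal Python (runs inside the Unreal Editor, not standalone).
import unreal

# Hypothetical asset path; stands in for whatever column mesh a project has.
COLUMN_MESH = "/Game/Nether/SM_BasaltColumn.SM_BasaltColumn"

mesh = unreal.EditorAssetLibrary.load_asset(COLUMN_MESH)
for i in range(8):
    # Dress a colonnade: spawn, place, and orient each column in the level.
    actor = unreal.EditorLevelLibrary.spawn_actor_from_class(
        unreal.StaticMeshActor,
        unreal.Vector(i * 600.0, 0.0, 0.0),   # 6 m spacing (engine units are cm)
        unreal.Rotator(0.0, 0.0, i * 45.0),   # vary yaw so the row isn't uniform
    )
    actor.static_mesh_component.set_static_mesh(mesh)
    actor.set_actor_label(f"NetherColumn_{i:02d}")
```

Being able to re-run and tweak a layout script like this, then render in-engine, is what makes Unreal viable as the replacement for a static matte painting that the interview describes.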
    KE: Another thing worth mentioning is we were able to utilize it on our mocap stage as well during the two-week shoot with Jared and crew. When we shoot on the mocap stage, we get a very simple sort of gray shaded diagnostic grid. You have your single-color characters that sometimes are textured, but they’re fairly simple without any context of environment. Our special projects team was able to port what we usually see in Giant, the software we use on the mocap stage, into Unreal, which gave us these beautifully lit environments with interactive fire and atmosphere. And Jared and the team could see their movie for the first time in a rough, but still very beautiful rough state. That was invaluable.

DS: If you had to key on anything, what would you say were the biggest challenges for your teams on the film? You're laughing. I can hear you thinking, “Do we have an hour?” 
    KE: Where do you begin? 
SS: Exactly. It's so hard to really single one out. I struggle with that question every time I'm asked it.
KE: I’ll start. I've got a very simple practical answer and then a larger one, something that was new to us, kind of similar to what we were just talking about. The simple practical one is the Piglins' square feet with no ankles. It was very tough to make them walk realistically. Think of the leg of a chair. How do you make that roll and bank and bend when there is no joint? There are a lot of Piglins walking on surfaces, and it was a very difficult conundrum to solve. It took a lot of hard work from our motion edit team and our animation team to get those things walking realistically. You know, it’s doing that simple thing that you don't usually pay attention to. So that was one reasonably big challenge that is often literally buried in the shadows. The bigger one was something that was new to me. We often do a lot of our previs and postvis in-house and then finish the shots ourselves. Just because of circumstances and capacity, we did the postvis for the entire final battle but ended up sharing the sequence with Digital Domain, who did an amazing job completing some of the battlefield work we had done postvis on. For me personally, I've never experienced not finishing what I started. But it was also really rewarding to see how well the work we had put in was honored by DD when they took it over.
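The ankle-less foot Estey describes is a classic rigging conundrum: a rigid block has no joint to roll through. A common trick, assumed here purely for illustration since Wētā has not published its solution, is to rotate the whole block about a moving pivot on its toe edge rather than about an ankle:

```python
import numpy as np

def rotate_about_edge(verts, pivot, axis, angle_rad):
    """Rotate a block foot's vertices about an arbitrary edge (pivot point +
    axis direction) instead of an ankle joint: Rodrigues' rotation formula."""
    k = np.asarray(axis, float)
    k /= np.linalg.norm(k)
    p = np.asarray(verts, float) - pivot
    cos, sin = np.cos(angle_rad), np.sin(angle_rad)
    rotated = (p * cos
               + np.cross(k, p) * sin
               + np.outer(p @ k, k) * (1.0 - cos))
    return rotated + pivot

# Unit-cube foot resting on the ground plane (y up).
foot = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
toe_edge_point = np.array([1.0, 0.0, 0.0])   # front-bottom edge of the block
toe_edge_axis = np.array([0.0, 0.0, 1.0])    # the edge runs along Z
# Negative angle rolls the heel up and over the toe edge.
rolled = rotate_about_edge(foot, toe_edge_point, toe_edge_axis, np.radians(-20))
print(rolled.round(3))
```

Shifting that pivot continuously from heel to toe over a stride gives the roll and bank of a foot without ever adding a visible joint to the block.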
    SS: I think the biggest challenge and the biggest achievement that I'm most proud of is really ending up with something that was well received by the wider audience. Of creating these two worlds, this sort of abstract adaptation of the Minecraft game and combining it with live-action. That was the achievement for me. That was the biggest challenge. We were all nervous from day one. And we continued to be nervous up until the day the movie came out. None of us really knew how it ultimately would be received. The fact that it came together and was so well received is a testament to everyone doing a fantastic job. And that's what I'm incredibly proud of.

    Dan Sarto is Publisher and Editor-in-Chief of Animation World Network.
    #minecraft #movie #wētā #helps #adapt
    ‘A Minecraft Movie’: Wētā FX Helps Adapt an Iconic Game One Block at a Time
    Adapting the iconic, block-based design aesthetic of Mojang’s beloved Minecraft videogame into the hit feature film comedy adventure, The Minecraft Movie, posed an enormous number of hurdles for director Jared Hess and Oscar-winning Production VFX Supervisor Dan Lemmon. Tasked with helping translate the iconic pixelated world into something cinematically engaging, while remaining true to its visual DNA, was Wētā FX, who delivered 450 VFX shots on the film. And two of their key leads on the film were VFX Supervisor Sheldon Stopsack and Animation Supervisor Kevin Estey.  But the shot count merely scratches the surface of the extensive work the studio performed. Wētā led the design and creation of The Overworld, 64 unique terrains spanning deserts, lush forests, oceans, and mountain ranges, all combined into one continuous environment, assets that were also shared with Digital Domain for their work on the 3rd act battle. Wētā also handled extensive work on the lava-filled hellscape of The Nether that involved Unreal Engine for early representations used in previs, scene scouting, and onset during principal photography, before refining the environment during post-production. They also dressed The Nether with lava, fire, and torches, along with atmospherics and particulate like smoke, ash, and embers. But wait… there’s more! The studio’s Art Department, working closely with Hess, co-created the look and feel of all digital characters in the film. For Malgosha’s henchmen, the Piglins, Wētā designed and created 12 different variants, all with individual characteristics and personalities. They also designed sheep, bees, pandas, zombies, skeletons, and lovable wolf Dennis. Many of these characters were provided to other vendors for their work on the film. Needless to say, the studio truly became a “Master Builder” on the show. The film is based on the hugely popular game Minecraft, first released by Sweden’s Mojang Studios in 2011 and purchased by Microsoft for billion in 2014, which immerses players in a low-res, pixelated “sandbox” simulation where they can use blocks to build entire worlds.  Here's the final trailer: In a far-ranging interview, Stopsack and Estey shared with AWN a peek into their creative process, from early design exploration to creation of an intricate practical cloak for Malgosha and the use of Unreal Engine for previs, postvis, and real-time onset visualization. Dan Sarto: The film is filled with distinct settings and characters sporting various “block” styled features. Can you share some of the work you did on the environments, character design, and character animation? Sheldon Stopsack: There's, there's so much to talk about and truth to be told, if you were to touch on everything, we would probably need to spend the whole day together.  Kevin Estey: Sheldon and I realized that when we talk about the film, either amongst ourselves or with someone else, we could just keep going, there are so many stories to tell. DS: Well, start with The Overworld and The Nether. How did the design process begin? What did you have to work with? SS: Visual effects is a tricky business, you know. It's always difficult. Always challenging. However, Minecraft stood out to us as not your usual quote unquote standard visual effects project, even though as you know, there is no standard visual effects project because they're all somehow different. They all come with their own creative ideas, inspirations, and challenges. 
But Minecraft, right from the get-go, was different, simply by the fact that when you first consider the idea of making such a live-action movie, you instantly ask yourself, “How do we make this work? How do we combine these two inherently very, very different but unique worlds?” That was everyone’s number one question. How do we land this? Where do we land this? And I don't think that any of us really had an answer, including our clients, Dan Lemmonand Jared Hess. Everyone was really open for this journey. That's compelling for us, to get out of our comfort zone. It makes you nervous because there are no real obvious answers. KE: Early on, we seemed to thrive off these kinds of scary creative challenges. There were lots of question marks. We had many moments when we were trying to figure out character designs. We had a template from the game, but it was an incredibly vague, low-resolution template. And there were so many ways that we could go. But that design discovery throughout the project was really satisfying.  DS: Game adaptations are never simple. There usually isn’t much in the way of story. But with Minecraft, from a visual standpoint, how did you translate low res, block-styled characters into something entertaining that could sustain a 100-minute feature film? SS: Everything was a question mark. Using the lava that you see in The Nether as one example, we had beautiful concept art for all our environments, The Overworld and The Nether, but those concepts only really took you this far. They didn’t represent the block shapes or give you a clear answer of like how realistic some of those materials, shapes and structures would be. How organic would we go? All of this needed to be explored. For the lava, we had stylized concept pieces, with block shaped viscosity as it flowed down. But we spent months with our effects team, and Dan and Jared, just riffing on ideas. We came full circle, with the lava ending up being more realistic, a naturally viscous liquid based on real physics. And the same goes with the waterfall that you see in the Overworld.  The question is, how far do we take things into the true Minecraft representation of things? How much do we scale back a little bit and ground ourselves in reality, with effects we’re quite comfortable producing as a company? There's always a tradeoff to find that balance of how best to combine what’s been filmed, the practical sets and live-action performances, with effects. Where’s the sweet spot? What's the level of abstraction? What's honest to the game? As much as some call Minecraft a simple game, it isn't simple, right? It's incredibly complex. It's got a set of rules and logic to the world building process within the game that we had to learn, adapt, and honor in many ways. When our misfits first arrive and we have these big vistas and establishing shots, when you really look at it, you, you recognize a lot of the things that we tried to adapt from the game. There are different biomes, like the Badlands, which is very sand stoney; there's the Woodlands, which is a lush environment with cherry blossom trees; you’ve got the snow biome with big mountains in the background. Our intent was to honor the game. 
KE: I took a big cue from a lot of the early designs, and particularly the approach that Jared liked for the characters and to the design in general, which was maintaining the stylized, blocky aesthetic, but covering them in realistic flesh, fur, things that were going to make them appear as real as possible despite the absolutely unreal designs of their bodies. And so essentially, it was squared skeleton… squarish bones with flesh and realistic fur laid over top. We tried various things, all extremely stylized. The Creepers are a good example. We tried all kinds of ways for them to explode. Sheldon found a great reference for a cat coughing up a hairball. He was nice to censor the worst part of it, but those undulations in the chest and ribcage… Jared spoke of the Creepers being basically tragic characters that only wanted to be loved, to just be close to you. But sadly, whenever they did, they’d explode. So, we experimented with a lot of different motions of how they’d explode. DS: Talk about the process of determining how these characters would move. None seem to have remotely realistic proportions in their limbs, bodies, or head size. KE: There were a couple things that Jared always seemed to be chasing. One was just something that would make him laugh. Of course, it had to sit within the bounds of how a zombie might move, or a skeleton might move, as we were interpreting the game. But the main thing was just, was it fun and funny? I still remember one of the earliest gags they came up with in mocap sessions, even before I even joined the show, was how the zombies get up after they fall over. It was sort of like a tripod, where its face and feet were planted and its butt shoots up in the air. After a lot of experimentation, we came up with basic personality types for each character. There were 12 different types of Piglins. The zombies were essentially like you're coming home from the pub after a few too many pints and you're just trying to get in the door, but you can't find your keys. Loose, slightly inebriated movement. The best movement we found for the skeletons was essentially like an old man with rigid limbs and lack of ligaments that was chasing kids off his lawn. And so, we created this kind of bible of performance types that really helped guide performers on the mocap stage and animators later on. SS: A lot of our exploration didn’t stick. But Jared was the expert in all of this. He always came up with some quirky last-minute idea.  KE: My favorite from Jared came in the middle of one mocap shoot. He walked up to me and said he had this stupid idea. I said OK, go on. He said, what if Malgosha had these two little pigs next to her, like Catholic alter boys, swinging incense. Can we do that? I talked to our stage manager, and we quickly put together a temporary prop for the incense burners. And we got two performers who just stood there. What are they going to do? Jared said, “Nothing. Just stand there and swing. I think it would look funny.” So, that’s what we did.  We dubbed them the Priesty Boys. And they are there throughout the film. That was amazing about Jared. He was always like, let's just try it, see if it works. Otherwise ditch it. DS: Tell me about your work on Malgosha. And I also want to discuss your use of Unreal Engine and the previs and postvis work.  SS: For Malgosha as a character, our art department did a phenomenal job finding the character design at the concept phase. But it was a collective effort. So many contributors were involved in her making. 
And I'm not just talking about the digital artists here on our side. It was a joint venture of different people having different explorations and experiments. It started off with the concept work as a foundation, which we mocked up with 3D sketches before building a model. But with Malgosha, we also had the costume department on the production side building this elaborate cloak. Remember, that cloak kind of makes 80, 85% of her appearance. It's almost like a character in itself, the way we utilized it. And the costume department built this beautiful, elaborate, incredibly intricate, practical version of it that we intended to use on set for the performer to wear. It ended up being too impractical because it was too heavy. But it was beautiful. So, while we didn't really use it on set, it gave us something physically to kind of incorporate into our digital version. KE: Alan Henry is the motion performer who portrayed her on set and on the mocap stage. I've known him for close to 15 years. I started working with him on The Hobbit films. He was a stunt performer who eventually rolled into doing motion capture with us on The Hobbit. He’s an incredible actor and absolutely hilarious and can adapt to any sort of situation. He’s so improvisational. He came up with an approach to Malgosha very quickly. Added a limp so that she felt decrepit, leaning on the staff, adding her other arm as kind of like a gimp arm that she would point and gesture with.   Even though she’s a blocky character, her anatomy is very much a biped, with rounder limbs than the other Piglins. She's got hooves, is somewhat squarish, and her much more bulky mass in the middle was easier to manipulate and move around. Because she would have to battle with Steve in the end, she had to have a level of agility that even some of the Piglins didn't have. DS: Did Unreal Engine come into play with her?  SS: Unreal was used all the way through the project. Dan Lemmon and his team early on set up their own virtual art department to build representations of the Overworld and the Nether within the context of Unreal. We and Sony Imageworks tried to provide recreations of these environments that were then used within Unreal to previsualize what was happening on set during shooting of principal photography. And that's where our mocap and on-set teams were coming into play. Effects provided what we called the Nudge Cam. It was a system to do real-time tracking using a stereo pair of Basler computer vision cameras that were mounted onto the sides of the principal camera. We provided the live tracking that was then composited in real time with the Unreal Engine content that all the vendors had provided. It was a great way of utilizing Unreal to give the camera operators or DOP, even Jared, a good sense of what we would actually shoot. It gave everyone a little bit of context for the look and feel of what you could actually expect from these scenes.  Because we started this journey with Unreal having onset in mind, we internally decided, look, let's take this further. Let's take this into post-production as well. What would it take to utilize Unreal for shot creation? And it was really exclusively used on the Nether environment. I don’t want to say we used it for matte painting replacement. We used it more for say, let's build this extended environment in Unreal. 
Not only use it as a render engine with this reasonably fast turnaround but also use it for what it's good at: authoring things, quickly changing things, moving columns around, manipulating things, dressing them, lighting them, and rendering them. It became sort of a tool that we used in place of a traditional matte painting for the extended environments. KE: Another thing worth mentioning is we were able to utilize it on our mocap stage as well during the two-week shoot with Jared and crew. When we shoot on the mocap stage, we get a very simple sort of gray shaded diagnostic grid. You have your single-color characters that sometimes are textured, but they’re fairly simple without any context of environment. Our special projects team was able to port what we usually see in Giant, the software we use on the mocap stage, into Unreal, which gave us these beautifully lit environments with interactive fire and atmosphere. And Jared and the team could see their movie for the first time in a rough, but still very beautiful rough state. That was invaluable. DS: If you had to key on anything, what would say with the biggest challenges for your teams on the film? You're laughing. I can hear you thinking, “Do we have an hour?”  KE: Where do you begin?  SS: Exactly. It's so hard to really single one out. And I struggle with that question every time I've been asked that question. KE: I’ll start.  I've got a very simple practical answer and then a larger one, something that was new to us, kind of similar to what we were just talking about. The simple practical one is the Piglins square feet with no ankles. It was very tough to make them walk realistically. Think of the leg of a chair. How do you make that roll and bank and bend because there is no joint? There are a lot of Piglins walking on surfaces and it was a very difficult conundrum to solve. It took a lot of hard work from our motion edit team and our animation team to get those things walking realistically. You know, it’s doing that simple thing that you don't usually pay attention to. So that was one reasonably big challenge that is often literally buried in the shadows. The bigger one was something that was new to me. We often do a lot of our previs and postvis in-house and then finish the shots. And just because of circumstances and capacity, we did the postvis for the entire final battle, but we ended up sharing the sequence with Digital Domain, who did an amazing job completing some of the stuff on the Battlefield we did post on. For me personally, I've never experienced not finishing what I started. But it was also really rewarding to see how well the work we had put in was honored by DD when they took it over.   SS: I think the biggest challenge and the biggest achievement that I'm most proud of is really ending up with something that was well received by the wider audience. Of creating these two worlds, this sort of abstract adaptation of the Minecraft game and combining it with live-action. That was the achievement for me. That was the biggest challenge. We were all nervous from day one. And we continued to be nervous up until the day the movie came out. None of us really knew how it ultimately would be received. The fact that it came together and was so well received is a testament to everyone doing a fantastic job. And that's what I'm incredibly proud of. Dan Sarto is Publisher and Editor-in-Chief of Animation World Network. #minecraft #movie #wētā #helps #adapt
    WWW.AWN.COM
    ‘A Minecraft Movie’: Wētā FX Helps Adapt an Iconic Game One Block at a Time
    Adapting the iconic, block-based design aesthetic of Mojang’s beloved Minecraft videogame into the hit feature film comedy adventure, The Minecraft Movie, posed an enormous number of hurdles for director Jared Hess and Oscar-winning Production VFX Supervisor Dan Lemmon. Tasked with helping translate the iconic pixelated world into something cinematically engaging, while remaining true to its visual DNA, was Wētā FX, who delivered 450 VFX shots on the film. And two of their key leads on the film were VFX Supervisor Sheldon Stopsack and Animation Supervisor Kevin Estey.  But the shot count merely scratches the surface of the extensive work the studio performed. Wētā led the design and creation of The Overworld, 64 unique terrains spanning deserts, lush forests, oceans, and mountain ranges, all combined into one continuous environment, assets that were also shared with Digital Domain for their work on the 3rd act battle. Wētā also handled extensive work on the lava-filled hellscape of The Nether that involved Unreal Engine for early representations used in previs, scene scouting, and onset during principal photography, before refining the environment during post-production. They also dressed The Nether with lava, fire, and torches, along with atmospherics and particulate like smoke, ash, and embers. But wait… there’s more! The studio’s Art Department, working closely with Hess, co-created the look and feel of all digital characters in the film. For Malgosha’s henchmen, the Piglins, Wētā designed and created 12 different variants, all with individual characteristics and personalities. They also designed sheep, bees, pandas, zombies, skeletons, and lovable wolf Dennis. Many of these characters were provided to other vendors for their work on the film. Needless to say, the studio truly became a “Master Builder” on the show. The film is based on the hugely popular game Minecraft, first released by Sweden’s Mojang Studios in 2011 and purchased by Microsoft for $2.5 billion in 2014, which immerses players in a low-res, pixelated “sandbox” simulation where they can use blocks to build entire worlds.  Here's the final trailer: In a far-ranging interview, Stopsack and Estey shared with AWN a peek into their creative process, from early design exploration to creation of an intricate practical cloak for Malgosha and the use of Unreal Engine for previs, postvis, and real-time onset visualization. Dan Sarto: The film is filled with distinct settings and characters sporting various “block” styled features. Can you share some of the work you did on the environments, character design, and character animation? Sheldon Stopsack: There's, there's so much to talk about and truth to be told, if you were to touch on everything, we would probably need to spend the whole day together.  Kevin Estey: Sheldon and I realized that when we talk about the film, either amongst ourselves or with someone else, we could just keep going, there are so many stories to tell. DS: Well, start with The Overworld and The Nether. How did the design process begin? What did you have to work with? SS: Visual effects is a tricky business, you know. It's always difficult. Always challenging. However, Minecraft stood out to us as not your usual quote unquote standard visual effects project, even though as you know, there is no standard visual effects project because they're all somehow different. They all come with their own creative ideas, inspirations, and challenges. 
But Minecraft, right from the get-go, was different, simply by the fact that when you first consider the idea of making such a live-action movie, you instantly ask yourself, “How do we make this work? How do we combine these two inherently very, very different but unique worlds?” That was everyone’s number one question. How do we land this? Where do we land this? And I don't think that any of us really had an answer, including our clients, Dan Lemmon [Production VFX Supervisor] and Jared Hess [the film’s director]. Everyone was really open for this journey. That's compelling for us, to get out of our comfort zone. It makes you nervous because there are no real obvious answers. KE: Early on, we seemed to thrive off these kinds of scary creative challenges. There were lots of question marks. We had many moments when we were trying to figure out character designs. We had a template from the game, but it was an incredibly vague, low-resolution template. And there were so many ways that we could go. But that design discovery throughout the project was really satisfying.  DS: Game adaptations are never simple. There usually isn’t much in the way of story. But with Minecraft, from a visual standpoint, how did you translate low res, block-styled characters into something entertaining that could sustain a 100-minute feature film? SS: Everything was a question mark. Using the lava that you see in The Nether as one example, we had beautiful concept art for all our environments, The Overworld and The Nether, but those concepts only really took you this far. They didn’t represent the block shapes or give you a clear answer of like how realistic some of those materials, shapes and structures would be. How organic would we go? All of this needed to be explored. For the lava, we had stylized concept pieces, with block shaped viscosity as it flowed down. But we spent months with our effects team, and Dan and Jared, just riffing on ideas. We came full circle, with the lava ending up being more realistic, a naturally viscous liquid based on real physics. And the same goes with the waterfall that you see in the Overworld.  The question is, how far do we take things into the true Minecraft representation of things? How much do we scale back a little bit and ground ourselves in reality, with effects we’re quite comfortable producing as a company? There's always a tradeoff to find that balance of how best to combine what’s been filmed, the practical sets and live-action performances, with effects. Where’s the sweet spot? What's the level of abstraction? What's honest to the game? As much as some call Minecraft a simple game, it isn't simple, right? It's incredibly complex. It's got a set of rules and logic to the world building process within the game that we had to learn, adapt, and honor in many ways. When our misfits first arrive and we have these big vistas and establishing shots, when you really look at it, you, you recognize a lot of the things that we tried to adapt from the game. There are different biomes, like the Badlands, which is very sand stoney; there's the Woodlands, which is a lush environment with cherry blossom trees; you’ve got the snow biome with big mountains in the background. Our intent was to honor the game. 
KE: I took a big cue from a lot of the early designs, and particularly the approach that Jared liked for the characters and to the design in general, which was maintaining the stylized, blocky aesthetic, but covering them in realistic flesh, fur, things that were going to make them appear as real as possible despite the absolutely unreal designs of their bodies. And so essentially, it was squared skeleton… squarish bones with flesh and realistic fur laid over top. We tried various things, all extremely stylized. The Creepers are a good example. We tried all kinds of ways for them to explode. Sheldon found a great reference for a cat coughing up a hairball. He was nice to censor the worst part of it, but those undulations in the chest and ribcage… Jared spoke of the Creepers being basically tragic characters that only wanted to be loved, to just be close to you. But sadly, whenever they did, they’d explode. So, we experimented with a lot of different motions of how they’d explode. DS: Talk about the process of determining how these characters would move. None seem to have remotely realistic proportions in their limbs, bodies, or head size. KE: There were a couple things that Jared always seemed to be chasing. One was just something that would make him laugh. Of course, it had to sit within the bounds of how a zombie might move, or a skeleton might move, as we were interpreting the game. But the main thing was just, was it fun and funny? I still remember one of the earliest gags they came up with in mocap sessions, even before I even joined the show, was how the zombies get up after they fall over. It was sort of like a tripod, where its face and feet were planted and its butt shoots up in the air. After a lot of experimentation, we came up with basic personality types for each character. There were 12 different types of Piglins. The zombies were essentially like you're coming home from the pub after a few too many pints and you're just trying to get in the door, but you can't find your keys. Loose, slightly inebriated movement. The best movement we found for the skeletons was essentially like an old man with rigid limbs and lack of ligaments that was chasing kids off his lawn. And so, we created this kind of bible of performance types that really helped guide performers on the mocap stage and animators later on. SS: A lot of our exploration didn’t stick. But Jared was the expert in all of this. He always came up with some quirky last-minute idea.  KE: My favorite from Jared came in the middle of one mocap shoot. He walked up to me and said he had this stupid idea. I said OK, go on. He said, what if Malgosha had these two little pigs next to her, like Catholic alter boys [the thurifers], swinging incense [a thurible]. Can we do that? I talked to our stage manager, and we quickly put together a temporary prop for the incense burners. And we got two performers who just stood there. What are they going to do? Jared said, “Nothing. Just stand there and swing. I think it would look funny.” So, that’s what we did.  We dubbed them the Priesty Boys. And they are there throughout the film. That was amazing about Jared. He was always like, let's just try it, see if it works. Otherwise ditch it. DS: Tell me about your work on Malgosha. And I also want to discuss your use of Unreal Engine and the previs and postvis work.  SS: For Malgosha as a character, our art department did a phenomenal job finding the character design at the concept phase. But it was a collective effort. 
So many contributors were involved in her making. And I’m not just talking about the digital artists here on our side. It was a joint venture of different people having different explorations and experiments. It started off with the concept work as a foundation, which we mocked up with 3D sketches before building a model. But with Malgosha, we also had the costume department on the production side building this elaborate cloak. Remember, that cloak makes up 80, 85% of her appearance. It’s almost like a character in itself, the way we utilized it. And the costume department built this beautiful, elaborate, incredibly intricate, practical version of it that we intended to use on set for the performer to wear. It ended up being impractical because it was too heavy. But it was beautiful. So, while we didn’t really use it on set, it gave us something physical to incorporate into our digital version.

KE: Alan Henry is the motion performer who portrayed her on set and on the mocap stage. I’ve known him for close to 15 years. I started working with him on The Hobbit films. He was a stunt performer who eventually rolled into doing motion capture with us on The Hobbit. He’s an incredible actor and absolutely hilarious and can adapt to any sort of situation. He’s so improvisational. He came up with an approach to Malgosha very quickly. He added a limp so that she felt decrepit, leaning on the staff, and played her other arm as kind of a gimp arm that she would point and gesture with. Even though she’s a blocky character, her anatomy is very much a biped, with rounder limbs than the other Piglins. She’s got hooves, is somewhat squarish, and her much bulkier mass in the middle was easier to manipulate and move around. Because she would have to battle Steve in the end, she had to have a level of agility that even some of the Piglins didn’t have.

DS: Did Unreal Engine come into play with her?

SS: Unreal was used all the way through the project. Dan Lemmon and his team early on set up their own virtual art department to build representations of the Overworld and the Nether within the context of Unreal. We and Sony Imageworks provided recreations of these environments that were then used within Unreal to previsualize what was happening on set during principal photography. And that’s where our mocap and on-set teams came into play. We provided what we called the Nudge Cam, a system for real-time tracking using a stereo pair of Basler computer vision cameras mounted onto the sides of the principal camera. We provided the live tracking that was then composited in real time with the Unreal Engine content that all the vendors had provided. It was a great way of utilizing Unreal to give the camera operators, the DOP, even Jared, a good sense of what we would actually shoot. It gave everyone a little bit of context for the look and feel of what you could actually expect from these scenes.

Because we started this journey with Unreal having on-set use in mind, we internally decided, look, let’s take this further. Let’s take this into post-production as well. What would it take to utilize Unreal for shot creation? And it was used exclusively on the Nether environment. I don’t want to say we used it for matte painting replacement. We used it more for, say, let’s build this extended environment in Unreal.
Not only could we use it as a render engine with a reasonably fast turnaround, but we could also use it for what it’s good at: authoring things, quickly changing things, moving columns around, manipulating things, dressing them, lighting them, and rendering them. It became a tool that we used in place of a traditional matte painting for the extended environments.

KE: Another thing worth mentioning is that we were able to utilize it on our mocap stage as well during the two-week shoot with Jared and crew. When we shoot on the mocap stage, we get a very simple sort of gray-shaded diagnostic grid. You have your single-color characters that sometimes are textured, but they’re fairly simple, without any context of environment. Our special projects team was able to port what we usually see in Giant, the software we use on the mocap stage, into Unreal, which gave us these beautifully lit environments with interactive fire and atmosphere. And Jared and the team could see their movie for the first time in a rough, but still very beautiful, state. That was invaluable.

DS: If you had to key on anything, what would you say were the biggest challenges for your teams on the film? You’re laughing. I can hear you thinking, “Do we have an hour?”

KE: Where do you begin?

SS: Exactly. It’s so hard to single one out. And I struggle with that question every time I’ve been asked it.

KE: I’ll start. I’ve got a very simple practical answer and then a larger one, something that was new to us, kind of similar to what we were just talking about. The simple practical one is the Piglins’ square feet with no ankles. It was very tough to make them walk realistically. Think of the leg of a chair. How do you make that roll and bank and bend when there is no joint? There are a lot of Piglins walking on surfaces, and it was a very difficult conundrum to solve. It took a lot of hard work from our motion edit team and our animation team to get those things walking realistically. You know, it’s doing that simple thing that you don’t usually pay attention to. So that was one reasonably big challenge that is often buried in the shadows. The bigger one was something that was new to me. We often do a lot of our previs and postvis in-house and then finish the shots. And just because of circumstances and capacity, we did the postvis for the entire final battle, but we ended up sharing the sequence with Digital Domain, who did an amazing job completing some of the battlefield work we had done postvis on. For me personally, I’ve never experienced not finishing what I started. But it was also really rewarding to see how well the work we had put in was honored by DD when they took it over.

SS: I think the biggest challenge, and the biggest achievement that I’m most proud of, is really ending up with something that was well received by the wider audience: creating these two worlds, this sort of abstract adaptation of the Minecraft game, and combining it with live-action. That was the achievement for me. That was the biggest challenge. We were all nervous from day one. And we continued to be nervous up until the day the movie came out. None of us really knew how it ultimately would be received. The fact that it came together and was so well received is a testament to everyone doing a fantastic job. And that’s what I’m incredibly proud of.

Dan Sarto is Publisher and Editor-in-Chief of Animation World Network.
  • Cascadeur + QuickMagic | Video Mocap Cleanup and Editing Timelapse

    This is a timelapse of how we made and edited the dancing animation from the reference. We used QuickMagic for the video mocap, then retargeted the animation in Cascadeur, used the Animation Unbaking and Fulcrum Motion Cleaning tools to set up a solid base, then used AutoPosing to fix and edit the poses, and finally a touch of AutoPhysics to add secondary motion and bring the whole animation together.
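    For readers unfamiliar with the retargeting step, here is a minimal, hypothetical Python sketch of what retargeting does conceptually. This is not Cascadeur's API; the joint names and data layout are invented purely for illustration. The core idea: joint rotations transfer from the captured skeleton to the target rig, bone lengths come from the target's rest pose, and root translation is rescaled to the target's proportions.

```python
# Hypothetical illustration of skeleton retargeting -- not Cascadeur's API.

# Map from source (video-mocap) joint names to target-rig joint names.
JOINT_MAP = {
    "hips": "pelvis",
    "spine_01": "spine",
    "l_upper_arm": "arm_upper_L",
    # ...one entry per joint the two skeletons share
}

def retarget_frame(source_pose, target_rest, hip_scale):
    """source_pose: {joint: (rotation_quat, local_translation)}
    target_rest:   {joint: rest local_translation} for the target rig
    hip_scale:     target hip height / source hip height, so a short rig
                   doesn't float when driven by a tall performer."""
    target_pose = {}
    for src, dst in JOINT_MAP.items():
        rot, _ = source_pose[src]
        # Rotations transfer directly; bone lengths come from the target rest pose.
        target_pose[dst] = (rot, target_rest[dst])
    # Root translation is the one channel that is rescaled rather than copied.
    root_rot, root_pos = source_pose["hips"]
    target_pose["pelvis"] = (root_rot, tuple(c * hip_scale for c in root_pos))
    return target_pose
```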

    Shoutout to @doridance1389 for the inspiration and amazing content!

    0:00 Stress Intro
    00:18 Import Animation
    00:46 Retargeting
    01:44 Animation Unbaking
    02:41 Editing Timelapse
    09:32 AutoPhysics
    10:36 Timelapse
    11:19 Final Result

    Learn more about Cascadeur license plans: https://cascadeur.com/plans

    Learn how to start using Cascadeur: https://cascadeur.com/learn

    Join our English-speaking community on Discord: https://discordapp.com/invite/Ymwjhpn


    Follow us on:
    Facebook: https://www.facebook.com/CascadeurEN/
    Twitter: https://twitter.com/Cascadeur_soft

  • Pro-Level Mocap Sync: Stream Vicon in iClone for Real-Time Animation

    Reallusion combines Vicon technology with industry-standard timecode support in iClone.
    Reallusion announces its official partnership with Vicon, enabling direct motion capture support for iClone. For the first time, Vicon systems can seamlessly connect with Motion LIVE, offering full integration of real-time, high-fidelity body capture, along with facial and hand mocap, all within one unified platform. Jeff Scheetz, founder of Monkey Chow Animation Studio, shares his experience with this powerful iClone-Vicon suite, which he believes delivers studio-quality results faster than ever before.

    Jeff Scheetz, Motion Capture Orlando
    Since leaving his role as co-founder of The Digital Animation & Visual Effects (DAVE) School, Jeff Scheetz, alongside his wife Anne, has been running Monkey Chow Animation Studio in Orlando, Florida. In 2021, they expanded into motion capture with the launch of Motion Capture Orlando, serving the theme park industry. Their work can be seen at Universal Studios and Walt Disney World, and they’ve also produced various motion capture packs for iClone and ActorCore, such as Run for Your Life, Bank Heist, and Avenging Warriors. Jeff’s team also collaborated with Actor Capture on the Netflix movie The Electric State. Key team members include senior mocap technicians Kaszmere Messbarger and Nelson Escobar.

    iClone and Vicon—The New Dynamic Duo
    One of the most exciting aspects of Jeff’s relationship with Reallusion is getting to test out cutting-edge technology before it’s officially released. When he was informed that iClone would soon support Vicon integration and timecode synchronization, it was a game-changer. Motion Capture Orlando has been using Vicon’s cameras and software since they started investing in mocap gear, but they were unable to integrate Vicon data directly into Motion LIVE until now.

    This breakthrough is especially significant for Jeff, as the previous workflow involved streaming Vicon data into MotionBuilder for retargeting, and then moving that data into Unreal Engine. The integration of facial data, hand motion (via data gloves), and the inherent latency in this multi-step process presented challenges. Now, with iClone’s Motion LIVE, Vicon streams directly into iClone, allowing for seamless integration of face, body, and hands, all within the platform. This streamlined approach reduces complexity, making it easier to execute live shows and set up production workflows.

    Record, Edit, and Render Previz in iClone
    When it comes to previz, speed and cost-effectiveness are essential. Traditional mocap workflows can be time-consuming due to the necessary exports and imports between different software tools. However, by recording directly into iClone, you can immediately start blocking shots, working out camera moves, and rendering previews. Jeff’s team recorded a few assassin chase scenes on their stage, using Motion LIVE to capture data, and the resulting iProject files served as the foundation for their productions.

    The quality of the capture was impressive due to starting with Vicon data. This allowed them to present high-quality previz to a director quickly. The renders included vital data on the edges, such as version info, lens used, and notes for feedback, helping the team stay organized throughout the process.

    From Previz to Polished Video: Cleaning and Syncing with Timecode
    After capturing mocap for the previz, the team performed proper data cleaning using Shogun Post (Vicon’s mocap cleaning software), and motion edits were carried out in iClone. They identified the best takes—approximately 20% of what was shot—and cleaned only those pieces. One major challenge was syncing the cleaned data, but the introduction of timecode has made this process much easier.

    Using the new Timecode Plugin, Jeff could instantly sync the cleaned motion data with the original timeline. By dropping the new motion (as an FBX) onto their character and selecting “Align Clip to Embedded Timecode”, iClone automatically aligned it with the original recording. This feature works not only for motion data but also for facial capture, hand data, audio, and video.
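    To make that concrete, here is a minimal sketch in plain Python of the arithmetic an "align to embedded timecode" feature has to perform. This is not iClone's actual API; the function names are illustrative, and non-drop-frame timecode is assumed for simplicity.

```python
# Illustrative timecode math -- not iClone's API. Non-drop-frame assumed.

def tc_to_frames(tc: str, fps: int) -> int:
    """'HH:MM:SS:FF' -> absolute frame count."""
    hh, mm, ss, ff = (int(p) for p in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def align_offset(clip_start_tc: str, timeline_start_tc: str, fps: int = 30) -> int:
    """Frame on the timeline where the cleaned clip should be dropped so that
    it lines back up with the original recording."""
    return tc_to_frames(clip_start_tc, fps) - tc_to_frames(timeline_start_tc, fps)

# A cleaned take stamped 01:02:10:15 against a timeline starting at
# 01:00:00:00 lands 2 min 10 s 15 frames in: frame 3915 at 30 fps.
assert align_offset("01:02:10:15", "01:00:00:00", fps=30) == 3915
```

    Because the same timecode is embedded in the body, face, audio, and video tracks, the identical offset computation lines all of them up at once.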

    Seamless Mocap Sync with Live Action
    This seamless integration was especially useful when creating a sitcom scene for Life with Bob! in which a live actress performed against a CGI character. Mocap actors, placed just out of frame, could improvise and interact with the CGI character.

    The use of a Tentacle Sync device transmitted timecode across all systems: Vicon, the Live Face app, the video camera, and sound recording devices. This made it incredibly easy to sync everything once the master shot was completed.

    Reshooting and Updating with Motion LIVE
    One of Jeff’s favorite features of this new workflow is the ability to quickly make adjustments, such as reshooting facial animation without the need for a full reshoot. Using his iPhone, he could capture just the parts that needed improvement, like the lips, while leaving other aspects of the face intact. The flexibility to isolate and reshoot specific elements is a major time-saver and enhances the overall efficiency of the production process.
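    The mechanics of such a partial update are simple to picture: keep the original take and overwrite only the channels that were re-captured. The sketch below is a hypothetical Python illustration; the blendshape channel names are generic ARKit-style examples, not a specific product's list.

```python
# Hypothetical sketch of a partial facial reshoot: swap in only the re-captured
# channels, leaving the rest of the original performance untouched.

LIP_CHANNELS = {"jawOpen", "mouthPucker", "mouthSmileLeft", "mouthSmileRight"}

def patch_face_take(original, reshoot, channels=LIP_CHANNELS):
    """original / reshoot: {channel_name: [value per frame]}, same frame count.
    Returns the original take with only the reshot channels replaced."""
    patched = dict(original)  # shallow copy; untouched channels stay shared
    for ch in channels:
        if ch in reshoot:
            patched[ch] = reshoot[ch]
    return patched
```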

    The New Workflow: Speed, Efficiency, and Creativity
    For creators using Reallusion products, the integration of Vicon with iClone opens up new possibilities for creating high-quality animated content in a fraction of the time previously required. For a more detailed insight into the workflow, please find the full article here. The new workflow allows for fast, professional results with minimal overhead. With the addition of the Timecode plugin, keeping everything aligned and organized is simple. Motion LIVE, combined with Vicon’s mocap system, empowers creators to focus on what matters most: creativity and storytelling. And after all, that’s why many of us got into animation in the first place.

    Brought to you by Reallusion:
    This article is part of the befores & afters VFX Insight series. If you’d like to promote your VFX/animation/CG tech or service, you can find out more about the VFX Insight series here.
  • Digital Domain Goes Retro-Futuristic with Robots on ‘The Electric State’ VFX

    In The Electric State, based on a graphic novel by Swedish artist Simon Stålenhag, after a robot uprising in an alternative version of the 1990s, an orphaned teenager goes on a quest across the American West, with a cartoon-inspired robot, a smuggler, and his sidekick, to find her long-lost brother. Adapting this sci-fi adventure for Netflix were Joe and Anthony Russo; their film stars Millie Bobby Brown, Chris Pratt, Stanley Tucci, Giancarlo Esposito and a cast of CG automatons voiced by the likes of Woody Harrelson, Alan Tudyk, Hank Azaria, and Anthony Mackie. Overseeing the visual effects, which surpassed what the Russos had to deal with during their halcyon MCU days, was Matthew Butler, who turned to the venerable Digital Domain.
    As the main vendor, the studio was responsible for producing 61 character builds, 480 assets, and over 850 shots. “It was one of the biggest projects that I’ve done in terms of sheer volumes of assets, shots and characters,” states Joel Behrens, VFX Supervisor, Digital Domain. “Our wonderful asset team did the 61 characters we were responsible for and had to ingest another 46 characters from other facilities. We didn’t do any major changes. It was pushing our pipeline to the limits of what it could handle, especially with other shows going on. We took up a lot of disk space and had the ability to expand and contract the render farm with cloud machines as well.”
    In researching the show, Digital Domain visited Boston Dynamics to better understand the technological advancements in robotics, and what structures, motions, and interactions were logical and physically plausible. “There is a certain amount of fake engineering that goes into some of these things,” notes Behrens. “We’re not actually building these robots to legitimately function in the real world, but they have to be visually believable, so it seems like they can actually pull some of this stuff off.” The starting point is always the reference material provided by the client. “Is there a voice that I need to match to?” notes Liz Bernard, Animation Supervisor, Digital Domain. “Is there any physical body reference, either from motion reference actors in the plate or motion capture? We had a big mix of that on the show. Some of our characters couldn’t be mocapped at all, while others could but we had to modify the performance considerably. We were also looking at the anatomy of each one of these robots to see what their physical capabilities are. Can they run or jump? Because that’s always going to tie tightly with the personality. Your body in some ways is your personality. We’re trying to figure out how to put the actor’s voice on top of all these physical limitations in a way that feels cohesive. It doesn’t happen overnight.”

    The character design of Cosmo was retained from the graphic novel despite not being feasible to engineer in reality.  “His feet are huge,” laughs Bernard.  “We had to figure out how to get him to walk in a way that felt normal and put the joints in the right spots.” Emoting was mainly achieved through physicality.  “He does have these audio clips from the Kid Cosmo cartoon that he can use to help express himself verbally, but most of it is pantomime,” observes Bernard.  “There is this great scene between Cosmo and Michelle that occurs right after she crashes the car, and Cosmo is still trying to convince her who he is and why she should go off on this great search for her brother across the country.   We were trying to get some tough nuanced acting into these shots with a subtle head tilt or a little bit of a slump in the shoulders.”  A green light was inserted into the eyes.  “Matthew Butler likes robotic stuff and anything that we could do to make Cosmo feel more grounded in reality was helpful,” observes Behrens.  “We also wanted to prevent anyone from panicking and giving Cosmo a more animated face or allowing him to speak dialogue. We started off with a constant light at the beginning and then added this twinkle and glimmer in his eye during certain moments. We liked that and ended up putting it in more places throughout the film. Everybody says that the eyes are the windows to the soul so giving Cosmo something rather than a dark black painted spot on his face assisted in connecting with that character.” 

    Coming in four different sizes that fit inside one another - like a Russian doll - is Herman. Digital Domain looked after the eight-inch, four-foot and 20-foot versions, while ILM was responsible for the 60-foot Herman that appears in the final battle. “They were scaled up to a certain extent, but consider that the joints on the 20-foot version of Herman versus the four-foot version need to be more robust and beefier because they’re carrying so much more weight,” remarks Bernard. “We were focusing on making sure that the impact of each step rippled through the body in a way that made it clear how heavy a 20-foot robot carrying a van across a desert would be. The smaller one can be nimbler and lighter on its feet. There were similar physical limitations, but that weight was the big deal.” Incorporated into the face of Herman is a retro-futuristic screen in the style of 1980s and early-1990s CRT panels. “It has these RGB pixels that live under a thick plate of glass like your old television set,” explains Behrens. “You have this beautiful reflective dome that goes over top of these cathode-ray-looking pixels, which allowed us to treat it as a modern-day LED with the ability to animate his expressions or, if we wanted to, put symbols up. You could pixelize any graphical element and put it on Herman’s face. We wanted to add a nonlinear decay to the pixels, so when he changed expressions or a shape altered drastically you would have a slow quadratic decay of the pixels fading off as he switched expressions. That added a nice touch.”
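    The “nonlinear decay” Behrens describes is essentially phosphor persistence. Below is a small, hedged Python/NumPy sketch of one way such a quadratic falloff could work; the parameter names and update rule are assumptions for illustration, not Digital Domain's implementation.

```python
import numpy as np

# Illustrative phosphor-style falloff -- not Digital Domain's implementation.
DECAY_FRAMES = 8  # frames an extinguished pixel takes to fade to black

def crt_face_step(target, hold, age):
    """Advance the face screen by one frame.
    target: (H, W) array, the expression currently driven, values 0..1
    hold:   (H, W) array, brightness each pixel had when it was last driven
    age:    (H, W) int array, frames since each pixel was last driven"""
    driven = target >= hold            # pixel is being driven at least as bright
    hold = np.where(driven, target, hold)
    age = np.where(driven, 0, age + 1)
    # Quadratic falloff: brightness scales with (1 - t/T)^2, clamped at zero.
    t = np.clip(age / DECAY_FRAMES, 0.0, 1.0)
    frame = hold * (1.0 - t) ** 2
    # Never show less than what the expression is actively displaying.
    return np.maximum(frame, target), hold, age
```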

    One member of the robot cast is an iconic Planters mascot. “Everybody knows who Mr. Peanut is and what he looks like, at least in North America,” observes Behrens. “We had to go through a lot of design iterations of how his face should animate. It was determined that, as a slightly older model of robot, he didn’t have a lot of dexterity in his face. We were modelling him after Chuck E. Cheese and ShowBiz Pizza animatronics, so it was like a latex shell over the top of a mechanical understructure that drove his limited expressions. It allowed him to open and close his mouth and do some slight contractions at the corners, leaving most of the acting to his eyes, which did not have as many restrictions. The eyes had the ability to move quickly, and dart and blink like a human.” The eyebrows were mounted on tracks that ran up and down a vertical slot on the front of the face. “We could move the eyebrows up and down, and tilt them, but couldn’t do anything else,” states Bernard. “It was trying to find a visual language that would get the acting across, with Woody Harrelson’s amazing performance backing it up. Then a lot of pantomime to go with that.” Mr. Peanut moves in a jerky rather than smooth manner. “Here is a funny little detail,” reveals Bernard. “If you think about a peanut shell, he doesn’t have a chest or hips that can move independently. We realized early on that in order to get him to walk without teeter-tottering everywhere, we were going to have to cut his butt off, reattach it and add a swivel control on the bottom. We always kept that peanut silhouette intact; however, he could swivel his hips enough to walk forward without looking silly!”

    Other notable robots are Pop Fly and Perplexo; the former is modelled on a baseball player, the latter on a magician. “We decided that Pop Fly would be the clunkiest of all the robots because he was meant to be the elder statesman,” states Behrens. “Pop Fly was partially falling apart; his eye would drift, the mouth would hang open, and sometimes he’d pass out for a second and wake back up. Pop Fly was the scavenger hunter of the group who has seen stuff in the battles of the wasteland. We came up with a fun pitching mechanism so he could actually shoot balls out of his mouth, and of course, there was his trusty baseball bat that he could bat things with.” An interesting task was figuring out how to rig his model. “We realized that there needed to be a lot of restrictions in his joints to make him look realistic, based on how he was modelled in the first place,” notes Bernard. “Pop Fly couldn’t rotate his head in every direction; he could turn it from side to side for the most part. Pop Fly was on this weird structure with four wheels on a scissor-lift situation, which meant that he always had to lean forward to get going and, when stopping, would rock backwards. It was fun to add all that detail in for him.” Serving as Perplexo’s upper body is a theatrical box that he pops in and out of. “Perplexo did not have a whole lot going on with his face,” remarks Bernard. “It was a simple mechanical structure to his jaw, eyes, and eyelids; that meant we could push the performance with pantomime and crazy big gestures with the arms.”
    A major adversary in the film is The Marshall, portrayed by Giancarlo Esposito, who remotely controls a drone that projects the face of its operator onto a video screen. “We started with a much smaller screen and had a cowboy motif for a while, but then they decided to have a unifying design for the drones that are operated by humans versus the robots,” remarks Behrens. “Since the artist Simon Stålenhag had done an interesting, cool design with the virtual reality helmets with that long duckbill that the humans wear in the real world, the decision was made to mimic that head style for the drones to match the drone operators. Then you could put a screen on the front; that’s how you see Ted or The Marshall or the commando operators. It worked out quite nicely.”

    There was not much differentiation in the movement of the drones. “The drones were meant to be in the vein of Stormtroopers, a horde of them being operated by people sitting in a comfortable room in Seattle,” observes Bernard. “So, they didn’t get as much effort and love as we put into the rest of the robots, which had their own personalities. But for The Marshall, we had great mocap from Adam Croasdell to start from. He played it a little bit cowboy, which was how Giancarlo Esposito was portraying the character as well, like a Western sheriff vibe. You could hear that in the voice. Listening to Giancarlo’s vocal performance gives you a lot of clues about what you should do when you’re moving that character around. We put all of that together in the performance of The Marshall.”
    Many environments had to be either created or augmented, such as the haunted amusement park known as Happyland. “The majority of the exterior of Happyland was a beautiful set that Dennis Gassner and his crew built in a parking lot of a waterslide park in Atlanta,” states Behrens. “We would go there at night and freeze our butts off shooting for a good two and a half weeks in the cold Atlanta winter. Most of our environmental work was doing distance extensions for that and adding atmospherics and fog. We made all the scavenger robots that inhabit Happyland, which are cannibalistic robots that upgrade and hot-rod themselves with random parts taken from the robots that they kill. Once we get into the haunted house and fall into the basement, that’s where Dr. Amherst has his lab, which was modelled off a 1930s Frankenstein set, with Tesla coils, beakers, and lab equipment. That was initially a set build we did onstage in Atlanta. But when we got into additional photography, they wanted to do this whole choreographed fight with The Marshall and Mr. Peanut. Because they didn’t know what actions we would need, we ended up building that entire lower level in CG.”

    At one point, all the exiled robots gather at the Mall within the Exclusion Zone.  “We were responsible for building a number of the background characters along with Storm Studios and ILM,” remarks Behrens.  “As for the mall, we didn’t have to do much to the environment.  There were some small things here and there that had to be modified.  We took over an abandoned mall in Atlanta and the art department dressed over half of it.” The background characters were not treated haphazardly. “We assigned two or three characters to each animator,” explains Bernard.  “I asked them to make a backstory and figure out who this guy is, what does he care about, and who is his mama?!  Put that into the performance so that each one feels unique and different because they have their own personalities.  There is a big central theme in the movie where the robots are almost more human than most of the humans you meet.  It was important to us that we put that humanity into their performances. As far as the Mall and choreography, Matthew, Joel and I knew that was going to be a huge challenge because this is not traditional crowd work where you can animate cycles and give it to a crowds department and say, ‘Have a bunch of people walking around.’  All these characters are different; they have to move differently and do their own thing.  We did a first pass on the big reveal in the Mall where you swing around and see the atrium where everybody is doing their thing.  We essentially took each character and moved them around like a chess piece to figure out if we had enough characters, if the color balanced nicely across all of them, and if it was okay for us to duplicate a couple of them.  We started to show that early to Matthew and Jeffrey Ford, and the directors to get buyoff on the density of the crowd.”   
    Considered one of the film’s signature sequences is the walk across the Exclusion Zone, where 20-foot Herman is carrying a Volkswagen van containing Michelle, Cosmo and Keats on his shoulder. “We did a little bit of everything,” notes Behrens. “We had plate-based shots because a splinter unit went out to Moab, Utah and shot a bunch of beautiful vistas for us. For environments, there were shots where we had to do projections of plate material onto 3D geometry that we built. We had some DMPs that went into deep background. We also had to build out some actual legitimate 3D terrain for foreground and midground, because a lot of the shots that had interaction with our hero characters rocking back and forth were shot on a bluescreen stage with a VW van on a large gimbal rig. Then Liz had the fun job of trying to tie that into a giant robot walking with them. We had to do some obvious tweaking to some of those motions. The establishing shots, where they are walking through this giant dead robot skeleton from who knows where, several of those were 100 percent CG. Once they get to the Mall, we had a big digital mall and a canyon area that had to look like they were once populated.” Modifications were kept subtle. “There were a couple of shots where we needed to move the plate VW van around a little bit,” states Bernard. “You can’t do a lot without it starting to fall apart and lose perspective.”
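    Projecting plate material onto 3D geometry, as Behrens mentions, has a compact core: each vertex gets a UV coordinate by being pushed through the shot camera. The snippet below is a minimal NumPy sketch under assumed conventions (pinhole camera looking down -Z, column-vector 4x4 world-to-camera matrix); it is not the studio's pipeline code.

```python
import numpy as np

# Minimal plate-projection sketch under assumed camera conventions;
# not production pipeline code.

def plate_projection_uvs(vertices, world_to_cam, focal_px, width, height):
    """vertices: (N, 3) world-space positions of the terrain mesh.
    world_to_cam: 4x4 matrix; camera looks down its local -Z axis.
    focal_px: focal length expressed in pixels.
    Returns (N, 2) normalized UVs into the filmed plate."""
    n = vertices.shape[0]
    homo = np.hstack([vertices, np.ones((n, 1))])
    cam = (world_to_cam @ homo.T).T            # to camera space
    x, y = cam[:, 0], cam[:, 1]
    depth = -cam[:, 2]                         # positive in front of the camera
    u = focal_px * x / depth + width / 2.0     # pinhole projection
    v = focal_px * y / depth + height / 2.0
    return np.stack([u / width, v / height], axis=1)
```

    Anything the camera never saw (depth at or behind the lens, or UVs outside 0..1) has no plate coverage, which is exactly where digital matte paintings and built 3D terrain take over.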

    “The biggest challenge was the scale and sheer number of characters needed that played a large role interacting with our human actors and creating a believable world for them to live in,” reflects Behrens.  “The sequence that I had the most fun with was the mine sequence with Herman and Keats, as far as their banter back and forth. Some of our most expansive work was the Mall and the walk across the Exclusion Zone.  Those had the most stunning visuals.”  Bernard agrees with her colleague.  “I’m going to sound like a broken record.  For me, it was the scale and the sheer number of characters that we had to deal with and keeping them feeling that they were all different, but from the same universe.  Having the animators working towards that same goal was a big challenge.  We had quite a large team on this one.  And I do love that mine sequence.  There is such good banter between Keats and Herman, especially early on in that sequence.  It has so much great action to it.  We got to drop a giant claw on top of The Marshall that he had to fight his way out of.  That was a hard shot.  And of course, the Mall is stunning.  You can see all the care that went into creating that environment and all those characters.  It’s beautiful.”     

    Trevor Hogg is a freelance video editor and writer best known for composing in-depth filmmaker and movie profiles for VFX Voice, Animation Magazine, and British Cinematographer.
    #digital #domain #goes #retrofuturistic #with
    Digital Domain Goes Retro-Futuristic with Robots on ‘The Electric State’ VFX
    In The Electric State, based on a graphic novel by Swedish artist Simon Stålenhag, after a robot uprising in an alternative version of the 1990s, an orphaned teenager goes on a quest across the American West, with a cartoon-inspired robot, a smuggler, and his sidekick, to find her long-lost brother. Adapting this sci-fi adventure for Netflix were Joe and Anthony Russo; their film stars Millie Bobbie Brown, Chris Pratt, Stanley Tucci, Giancarlo Esposito and a cast of CG automatons voiced by the likes of Woody Harrelson, Alan Tudyk, Hank Azaria, and Anthony Mackie.  Overseeing the visual effects, which surpassed what the Russos had to deal with during their halcyon MCU days, was Matthew Buttler, who turned to the venerable Digital Domain. As the main vendor, the studio was responsible for producing 61 character builds, 480 assets, and over 850 shots. “It was one of the biggest projects that I’ve done in terms of sheer volumes of assets, shots and characters,” states Joel Behrens, VFX Supervisor, Digital Domain.  “Our wonderful asset team did the 61 characters we were responsible for and had to ingest another 46 characters from other facilities.  We didn’t do any major changes. It was pushing our pipeline to the limits it could handle, especially with other shows going on. We took up a lot of disk space and had the ability to expand and contract the Renderfarm with cloud machines as well.” In researching for the show, Digital Domain visited Boston Dynamics to better understand the technological advancements in robotics, and what structures, motions, and interactions were logical and physically plausible.  “There is a certain amount of fake engineering that goes into some of these things,” notes Behrens.  “We’re not actually building these robots to legitimately function in the real world but have to be visibly believable that they can actually pull some of this stuff off.”  The starting point is always the reference material provided by the client.  “Is there a voice that I need to match to?” notes Liz Bernard, Animation Supervisor, Digital Domain.  “Is there any physical body reference either from motion reference actors in the plate or motion capture? We had a big mix of that on the show.  Some of our characters couldn’t be mocapped at all while others could but we had to modify the performance considerably.  We were also looking at the anatomy of each one of these robots to see what their physical capabilities are.  Can they run or jump?  Because that’s always going to tie tightly with the personality.  Your body in some ways is your personality.  We’re trying to figure out how do we put the actor’s voice on top of all these physical limitations in a way that feels cohesive.  It doesn’t happen overnight.”  The character design of Cosmo was retained from the graphic novel despite not being feasible to engineer in reality.  “His feet are huge,” laughs Bernard.  “We had to figure out how to get him to walk in a way that felt normal and put the joints in the right spots.” Emoting was mainly achieved through physicality.  “He does have these audio clips from the Kid Cosmo cartoon that he can use to help express himself verbally, but most of it is pantomime,” observes Bernard.  “There is this great scene between Cosmo and Michelle that occurs right after she crashes the car, and Cosmo is still trying to convince her who he is and why she should go off on this great search for her brother across the country.   
We were trying to get some tough nuanced acting into these shots with a subtle head tilt or a little bit of a slump in the shoulders.”  A green light was inserted into the eyes.  “Matthew Butler likes robotic stuff and anything that we could do to make Cosmo feel more grounded in reality was helpful,” observes Behrens.  “We also wanted to prevent anyone from panicking and giving Cosmo a more animated face or allowing him to speak dialogue. We started off with a constant light at the beginning and then added this twinkle and glimmer in his eye during certain moments. We liked that and ended up putting it in more places throughout the film. Everybody says that the eyes are the windows to the soul so giving Cosmo something rather than a dark black painted spot on his face assisted in connecting with that character.”  Coming in four different sizes that fit inside one another - like a Russian doll - is Herman. Digital Domain looked after the eight-inch, four-foot and 20-foot versions while ILM was responsible for the 60-foot Herman that appears in the final battle.   “They were scaled up to a certain extent but consider that the joints on the 20-foot version of Herman versus the four-foot version need to be more robust and beefier because they’re carrying so much more weight,” remarks Bernard.  “We were focusing on making sure that the impact of each step rippled through the body in a way that made it clear how heavy a 20-foot robot carrying a van across a desert would be.  The smaller one can be nimbler and lighter on its feet.  There were similar physical limitations, but that weight was the big deal.”  Incorporated into the face of Herman is a retro-futuristic screen in the style of the 1980s and early 1990s CRT panels. “It has these RGB pixels that live under a thick plate of glass like your old television set,” explains Behrens.  “You have this beautiful reflective dome that goes over top of these cathode-ray-looking pixels that allowed us to treat it as a modern-day LED with the ability to animate his expressions, or if we wanted to, put symbols up. You could pixelized any graphical element and put it on Herman’s face.  We wanted to add a nonlinear decay into the pixels so when he changed expressions or a shape altered drastically you would have a slow quadratic decay of the pixels fading off as he switched expressions. That contributed a nice touch.” One member of the robot cast is an iconic Planters mascot.  “Everybody knows who Mr. Peanut is and what he looks like, at least in North America,” observes Behrens.  “We had to go through a lot of design iterations of how his face should animate. It was determined that as a slightly older model of robot he didn’t have a lot of dexterity in his face. We were modelling him after Chuck E. Cheese and ShowBiz Pizza animatronics, so it was like a latex shell over the top of a mechanical under structure that drove his limited expressions. It allowed him to open and close his mouth and do some slight contractions at the corners, leaving most of the acting to his eyes, which did not have as many restrictions. The eyes had the ability to move quickly, and dart and blink like a human.”  The eyebrows were mounted tracks that ran up and down a vertical slot on the front of the face.  “We could move the eyebrows up and down, and tilt them, but couldn’t do anything else,” states Bernard.  “It was trying to find a visual language that would get the acting across with Woody Harrelson’s amazing performance backing it up.  
    WWW.AWN.COM
    Digital Domain Goes Retro-Futuristic with Robots on ‘The Electric State’ VFX
    In The Electric State, based on a graphic novel by Swedish artist Simon Stålenhag, after a robot uprising in an alternative version of the 1990s, an orphaned teenager goes on a quest across the American West, with a cartoon-inspired robot, a smuggler, and his sidekick, to find her long-lost brother. Adapting this sci-fi adventure for Netflix were Joe and Anthony Russo; their film stars Millie Bobby Brown, Chris Pratt, Stanley Tucci, Giancarlo Esposito and a cast of CG automatons voiced by the likes of Woody Harrelson, Alan Tudyk, Hank Azaria, and Anthony Mackie. Overseeing the visual effects, which surpassed what the Russos had to deal with during their halcyon MCU days, was Matthew Butler, who turned to the venerable Digital Domain. As the main vendor, the studio was responsible for producing 61 character builds, 480 assets, and over 850 shots.
    “It was one of the biggest projects that I’ve done in terms of sheer volumes of assets, shots and characters,” states Joel Behrens, VFX Supervisor, Digital Domain. “Our wonderful asset team did the 61 characters we were responsible for and had to ingest another 46 characters from other facilities. We didn’t do any major changes. It was pushing our pipeline to the limits of what it could handle, especially with other shows going on. We took up a lot of disk space and had the ability to expand and contract the render farm with cloud machines as well.”
    In researching the show, Digital Domain visited Boston Dynamics to better understand the technological advancements in robotics, and what structures, motions, and interactions were logical and physically plausible. “There is a certain amount of fake engineering that goes into some of these things,” notes Behrens. “We’re not actually building these robots to legitimately function in the real world, but it has to be visually believable that they can actually pull some of this stuff off.”
    The starting point is always the reference material provided by the client. “Is there a voice that I need to match to?” notes Liz Bernard, Animation Supervisor, Digital Domain. “Is there any physical body reference, either from motion reference actors in the plate or motion capture? We had a big mix of that on the show. Some of our characters couldn’t be mocapped at all, while others could but we had to modify the performance considerably. We were also looking at the anatomy of each one of these robots to see what their physical capabilities are. Can they run or jump? Because that’s always going to tie tightly with the personality. Your body in some ways is your personality. We’re trying to figure out how to put the actor’s voice on top of all these physical limitations in a way that feels cohesive. It doesn’t happen overnight.”
    The character design of Cosmo was retained from the graphic novel despite not being feasible to engineer in reality. “His feet are huge,” laughs Bernard. “We had to figure out how to get him to walk in a way that felt normal and put the joints in the right spots.” Emoting was mainly achieved through physicality. “He does have these audio clips from the Kid Cosmo cartoon that he can use to help express himself verbally, but most of it is pantomime,” observes Bernard.
    “There is this great scene between Cosmo and Michelle that occurs right after she crashes the car, and Cosmo is still trying to convince her who he is and why she should go off on this great search for her brother across the country. We were trying to get some tough nuanced acting into these shots with a subtle head tilt or a little bit of a slump in the shoulders.”
    A green light was inserted into the eyes. “Matthew Butler likes robotic stuff, and anything that we could do to make Cosmo feel more grounded in reality was helpful,” observes Behrens. “We also wanted to prevent anyone from panicking and giving Cosmo a more animated face or allowing him to speak dialogue. We started off with a constant light at the beginning and then added this twinkle and glimmer in his eye during certain moments. We liked that and ended up putting it in more places throughout the film. Everybody says that the eyes are the windows to the soul, so giving Cosmo something rather than a dark black painted spot on his face assisted in connecting with that character.”
    Coming in four different sizes that fit inside one another - like a Russian doll - is Herman. Digital Domain looked after the eight-inch, four-foot and 20-foot versions while ILM was responsible for the 60-foot Herman that appears in the final battle. “They were scaled up to a certain extent, but consider that the joints on the 20-foot version of Herman versus the four-foot version need to be more robust and beefier because they’re carrying so much more weight,” remarks Bernard. “We were focusing on making sure that the impact of each step rippled through the body in a way that made it clear how heavy a 20-foot robot carrying a van across a desert would be. The smaller one can be nimbler and lighter on its feet. There were similar physical limitations, but that weight was the big deal.”
    Incorporated into the face of Herman is a retro-futuristic screen in the style of 1980s and early 1990s CRT panels. “It has these RGB pixels that live under a thick plate of glass like your old television set,” explains Behrens. “You have this beautiful reflective dome that goes over top of these cathode-ray-looking pixels that allowed us to treat it as a modern-day LED with the ability to animate his expressions, or if we wanted to, put symbols up. You could pixelize any graphical element and put it on Herman’s face. We wanted to add a nonlinear decay into the pixels, so when he changed expressions or a shape altered drastically you would have a slow quadratic decay of the pixels fading off as he switched expressions. That added a nice touch.”
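    The “slow quadratic decay” Behrens describes maps to a simple per-pixel rule: pixels that brighten snap straight to the new expression, while pixels that dim roll off along a (1 - t/T)^2 curve. As a rough illustration only - a sketch of the idea in Python/NumPy, not Digital Domain’s actual face tool, with the eight-frame fade time being an assumed value - such a panel could be stepped like this:

    import numpy as np

    def step_face_panel(lit, age, target, fade_frames=8):
        """One frame of a CRT-style LED face panel (illustrative sketch only).

        lit:    (H, W) floats, brightness each pixel had when it last started fading
        age:    (H, W) ints, frames elapsed since that pixel began fading
        target: (H, W) floats, pixel values of the expression being displayed
        """
        rising = target >= lit
        lit = np.where(rising, target, lit)    # re-arm brightening pixels at the new peak
        age = np.where(rising, 0, age + 1)     # restart fade timers on lit pixels
        t = np.clip(age / float(fade_frames), 0.0, 1.0)
        # dimming pixels follow (1 - t/T)^2, trailing a brief phosphor-like afterglow
        frame = np.maximum(target, lit * (1.0 - t) ** 2)
        return frame, lit, age

    Feeding in a new target expression and calling this once per frame reproduces the effect described: fresh pixels pop on immediately, while abandoned pixels linger and fade off nonlinearly.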
    One member of the robot cast is an iconic Planters mascot. “Everybody knows who Mr. Peanut is and what he looks like, at least in North America,” observes Behrens. “We had to go through a lot of design iterations of how his face should animate. It was determined that as a slightly older model of robot he didn’t have a lot of dexterity in his face. We were modelling him after Chuck E. Cheese and ShowBiz Pizza animatronics, so it was like a latex shell over the top of a mechanical understructure that drove his limited expressions. It allowed him to open and close his mouth and do some slight contractions at the corners, leaving most of the acting to his eyes, which did not have as many restrictions. The eyes had the ability to move quickly, and dart and blink like a human.” The eyebrows were mounted on tracks that ran up and down a vertical slot on the front of the face. “We could move the eyebrows up and down, and tilt them, but couldn’t do anything else,” states Bernard. “It was trying to find a visual language that would get the acting across, with Woody Harrelson’s amazing performance backing it up. Then a lot of pantomime to go with that.” Mr. Peanut moves in a jerky rather than smooth manner. “Here is a funny little detail,” reveals Bernard. “If you think about a peanut shell, he doesn’t have a chest or hips that can move independently. We realized early on that in order to get him to walk without teeter-tottering everywhere, we were going to have to cut his butt off, reattach it and add a swivel control on the bottom. We always kept that peanut silhouette intact; however, he could swivel his hips enough to walk forward without looking silly!”
    Other notable robots are Pop Fly and Perplexo; the former is modelled on a baseball player, the latter on a magician. “We decided that Pop Fly would be the clunkiest of all robots because he was meant to be the elder statesman,” states Behrens. “Pop Fly was partially falling apart, like his eye would drift, the mouth would hang open and sometimes he’d pass out for a second and wake back up. Pop Fly was the scavenger hunter of the group who has seen stuff in the battles of the wasteland. We came up with a fun pitching mechanism so he could actually shoot the balls out of his mouth, and of course, there was his trusty baseball bat that he could bat things with.” An interesting task was figuring out how to rig his model. “We realized that there needed to be a lot of restrictions in his joints to make him look realistic based on how he was modelled in the first place,” notes Bernard. “Pop Fly couldn’t rotate his head in every direction; he could turn it from side to side for the most part. Pop Fly was on this weird structure with the four wheels on a scissor-lift situation, which meant that he always had to lean forward to get going and, when stopping, would rock backwards. It was fun to add all that detail in for him.” Serving as Perplexo’s upper body is a theatrical box that he pops in and out of. “Perplexo did not have a whole lot going on with his face,” remarks Bernard. “It was a simple mechanical structure to his jaw, eyes, and eyelids; that meant we could push the performance with pantomime and crazy big gestures with the arms.”
    A major adversary in the film is The Marshall, portrayed by Giancarlo Esposito, who remotely controls a drone that projects the face of its operator onto a video screen. “We started with a much smaller screen and had a cowboy motif for a while, but then they decided to have a unifying design for the drones that are operated by humans versus the robots,” remarks Behrens. “Since the artist Simon Stålenhag had done an interesting, cool design with the virtual reality helmets with that long duckbill that the humans wear in the real world, the decision was made to mimic that head style of the drones to match the drone operators. Then you could put a screen on the front; that’s how you see Ted [Jason Alexander] or The Marshall or the commando operators. It worked out quite nicely.” There was not much differentiation in the movement of the drones. “The drones were meant to be in the vein of Stormtroopers, a horde of them being operated by people sitting in a comfortable room in Seattle,” observes Bernard. “So, they didn’t get as much effort and love as we put into the rest of the robots, which had their own personalities. But for The Marshall, we had great mocap from Adam Croasdell to start from. He played it a little bit cowboy, which was how Giancarlo Esposito was portraying the character as well, like a Western sheriff style vibe. You could hear that in the voice. Listening to Giancarlo’s vocal performance gives you a lot of clues of what you should do when you’re moving that character around. We put all of that together in the performance of The Marshall.”
    Many environments had to either be created or augmented, such as the haunted amusement park known as Happyland. “The majority of the exterior of Happyland was a beautiful set that Dennis Gassner and his crew built in a parking lot of a waterslide park in Atlanta,” states Behrens. “We would go there at night and freeze our butts off shooting for a good two and a half weeks in the cold Atlanta winter. Most of our environmental work was doing distance extensions for that and adding atmospherics and fog. We made all the scavenger robots that inhabit Happyland, which are cannibalistic robots that upgrade and hot-rod themselves from random parts taken from the robots that they kill. Once we get into the haunted house and fall into the basement, that’s where Dr. Amherst has his lab, which was modelled off a 1930s Frankenstein set, with Tesla coils, beakers, and lab equipment. That was initially a set build we did onstage in Atlanta. But when we got into additional photography, they wanted to do this whole choreographed fight with The Marshall and Mr. Peanut. Because they didn’t know what actions we would need, we ended up building that entire lower level in CG.”
    At one point, all the exiled robots gather at the Mall within the Exclusion Zone. “We were responsible for building a number of the background characters along with Storm Studios and ILM,” remarks Behrens. “As for the mall, we didn’t have to do much to the environment. There were some small things here and there that had to be modified. We took over an abandoned mall in Atlanta and the art department dressed over half of it.” The background characters were not treated haphazardly. “We assigned two or three characters to each animator,” explains Bernard. “I asked them to make a backstory and figure out who this guy is, what does he care about, and who is his mama?! Put that into the performance so that each one feels unique and different because they have their own personalities. There is a big central theme in the movie where the robots are almost more human than most of the humans you meet. It was important to us that we put that humanity into their performances. As far as the Mall and choreography, Matthew, Joel and I knew that was going to be a huge challenge because this is not traditional crowd work where you can animate cycles and give it to a crowds department and say, ‘Have a bunch of people walking around.’ All these characters are different; they have to move differently and do their own thing. We did a first pass on the big reveal in the Mall where you swing around and see the atrium where everybody is doing their thing. We essentially took each character and moved them around like a chess piece to figure out if we had enough characters, if the color balanced nicely across all of them, and if it was okay for us to duplicate a couple of them. We started to show that early to Matthew and Jeffrey Ford [Editor, Executive Producer], and the directors to get buy-off on the density of the crowd.”
    One of the film’s signature sequences is the walk across the Exclusion Zone, where the 20-foot Herman carries a Volkswagen van containing Michelle, Cosmo and Keats on his shoulder. “We did a little bit of everything,” notes Behrens. “We had plate-based shots because a splinter unit went out to Moab, Utah and shot a bunch of beautiful vistas for us. For environments, there were shots where we had to do projections of plate material onto 3D geometry that we built. We had some DMPs that went into deep background. We also had to build out some actual legitimate 3D terrain for foreground and midground because a lot of the shots that had interaction with our hero characters rocking back and forth were shot on a bluescreen stage with a VW van on a large gimbal rig. Then Liz had the fun job of trying to tie that into a giant robot walking with them. We had to do some obvious tweaking to some of those motions. The establishing shots, where they are walking through this giant dead robot skeleton from who knows where, several of those were 100 percent CG. Once they get to the Mall, we had a big digital mall and a canyon area that had to look like they were once populated.” Modifications were kept subtle. “There were a couple of shots where we needed to move the plate VW van around a little bit,” states Bernard. “You can’t do a lot without it starting to fall apart and lose perspective.”
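    The plate projections Behrens mentions come down to standard camera projection: re-photographing filmed footage onto CG terrain through a virtual copy of the plate camera. A generic sketch of that mapping (illustrative only, not Digital Domain’s pipeline code):

    import numpy as np

    def project_to_plate(points_w, K, world_to_cam):
        """Return (N, 2) pixel coordinates telling each world-space vertex
        where to sample the filmed plate - the core of projection mapping.

        points_w:     (N, 3) vertex positions of the CG geometry
        K:            (3, 3) intrinsics of the matchmoved plate camera
        world_to_cam: (4, 4) extrinsics of the same camera
        """
        pts = np.hstack([points_w, np.ones((len(points_w), 1))])
        cam = (world_to_cam @ pts.T).T[:, :3]   # move geometry into camera space
        uvw = (K @ cam.T).T                     # pinhole projection
        return uvw[:, :2] / uvw[:, 2:3]         # perspective divide by depth

    Anything the plate camera never photographed has no valid projection, which is why the team also needed DMPs and fully built CG terrain once shots strayed from the original Moab vistas.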
    “The biggest challenge was the scale and sheer number of characters needed that played a large role interacting with our human actors and creating a believable world for them to live in,” reflects Behrens. “The sequence that I had the most fun with was the mine sequence with Herman and Keats, as far as their banter back and forth. Some of our most expansive work was the Mall and the walk across the Exclusion Zone. Those had the most stunning visuals.” Bernard agrees with her colleague. “I’m going to sound like a broken record. For me, it was the scale and the sheer number of characters that we had to deal with and keeping them feeling that they were all different, but from the same universe. Having the animators working towards that same goal was a big challenge. We had quite a large team on this one. And I do love that mine sequence. There is such good banter between Keats and Herman, especially early on in that sequence. It has so much great action to it. We got to drop a giant claw on top of The Marshall that he had to fight his way out of. That was a hard shot. And of course, the Mall is stunning. You can see all the care that went into creating that environment and all those characters. It’s beautiful.”
    Trevor Hogg is a freelance video editor and writer best known for composing in-depth filmmaker and movie profiles for VFX Voice, Animation Magazine, and British Cinematographer.
  • AU Deals: Score 3 Freebies Including Sifu, Hot Prices on RoadCraft, Doom Dark Ages, Red Deads, and More!

    WWW.IGN.COM
    AU Deals: Score 3 Freebies Including Sifu, Hot Prices on RoadCraft, Doom Dark Ages, Red Deads, and More!
    Thank your own personal deity--possibly The Outer Gods--it's Friday! Winter might be creeping in, but these weekend game deals are bringing the heat. Whether you’re a Switch slasher, a PlayStation purist, an Xbox adventurer or a PC power user, there’s something worth your clicks in this mix. Stock up for the weekend and I'll catch you Monday!
    This Day in Gaming 🎂
    In retro news, I’m celebrating the 19th birthday of Metroid Prime: Hunters while remembering the mild carpal tunnel it gave me. Developed for the newly minted DS, its standout innovation was the incorporation of stylus-driven aiming paired with six-player ad-hoc wireless multiplayer, allowing us hunters to engage in fast-paced, Phazon-fueled deathmatches on the go. By blending precision touchscreen controls with diverse character-specific abilities, it instantly sold me and my mates on the concept of portable, hyper-competitive first-person shooters. We had an absolute (morph) ball with it.
    Aussie bdays for notable games
    - Def Jam Vendetta (PS2) 2003. eBay
    - Rise of Nations (PC) 2003. eBay
    - Silent Hill 3 (PS2) 2003. eBay
    - Metroid Prime: Hunters (DS) 2006. eBay
    - Resident Evil Revelations (3DS) 2013. eBay
    Nice Savings for Nintendo Switch
    Nintendo fans should keep an eye on Dead Cells (A$18.70), the rogue-lite hit that started life as a failed Metroidvania MMORPG. Developer Motion Twin scrapped the original concept and pivoted mid-production. Meanwhile, Blasphemous (A$9.30) stabs its way onto the scene with a brutal blend of Catholic iconography and Dark Souls inspiration. Such a gem.
    - Lego Skywalker Saga (-80%) - A$17.90
    - Dead Cells (-50%) - A$18.70
    - Jurassic World Evolution: Comp. Ed. (-80%) - A$16.90
    - Namco Museum Archives Vol. 1 (-84%) - A$4.70
    - Blasphemous (-75%) - A$9.30
    - Expeditions: A MudRunner (-42%) - A$49
    Expiring Recent Deals
    - Hogwarts Legacy (-34%) - A$59
    - Crypt of the NecroDancer (-80%) - A$6
    - Yooka-Laylee (-80%) - A$6.50
    - Guacamelee! Super Turbo Champ. Ed. (-75%) - A$5
    - Lego Jurassic World (-90%) - A$5.90
    Or gift a Nintendo eShop Card.
    Exciting Bargains for Xbox
    Over on Xbox Series X, Halo Infinite (A$32.90) continues to carve out its redemption arc. UFC 5 (A$39.00), on the other hand, had EA scanning fighters in full 360 capture tech, making every punch and grimace feel uncomfortably lifelike.
    - Doom: The Dark Ages (-17%) - A$99
    - Roadcraft (-18%) - A$49
    - Hot Wheels Unleashed (-29%) - A$39
    - UFC 5 (-65%) - A$39
    - Halo Infinite (-67%) - A$32.90
    Xbox One
    - Red Dead Redemption 2 (-73%) - A$24
    - No Man's Sky (-60%) - A$35.90
    - Star Wars Jedi: Survivor (-60%) - A$29.90
    Expiring Recent Deals
    - Resident Evil 4 (-47%) - A$31.40
    - Star Wars Outlaws (-64%) - A$40.00
    - Mass Effect Leg. Ed. (-90%) - A$9.90
    - TB Stealth Pivot Controller (-33%) - A$168.40
    - Hogwarts Legacy (-55%) - A$49.00
    - TB VelocityOne Flightstick (-18%) - A$205.60
    - Grand Theft Auto V (-52%) - A$29.00
    - Tiny Tina's Wonderlands (-90%) - A$10.00
    Or just invest in an Xbox Card.
    Pure Scores for PlayStation
    PlayStation 5 players get a win with Tales of Arise (A$31.00), which features a physics system that ensures characters' hair and capes flutter just right. Pair that with NBA 2K25 (A$34.00), where cover athlete animations were mocapped using a team of real streetballers for added flair.
    - DualSense Chroma Indigo (-12%) - A$109.90
    - Doom: The Dark Ages (-17%) - A$99
    - Roadcraft (-18%) - A$49
    - Tales of Arise (-69%) - A$31
    - NBA 2K25 (-72%) - A$34
    - Expeditions: A MudRunner (-42%) - A$49
    - UFC 5 (-65%) - A$39
    PS4
    - Red Dead Redemption 2 (-73%) - A$24
    - Red Dead Redemption (-48%) - A$39
    - Octopath Traveler II (-32%) - A$57.70
    - The Yakuza Rem. Col. (-26%) - A$40.30
    Expiring Recent Deals
    - Elden Ring: Nightreign (-9%) - A$64.00
    - Kingdom Come Deliverance 2 (-19%) - A$89.00
    - Star Wars Outlaws (-64%) - A$40.00
    - Cyberpunk 2077: Ult. Ed. (-20%) - A$84.00
    - Monster Hunter Wilds (-26%) - A$89.00
    - Ace Combat 7: Skies Unknown (-73%) - A$26.80
    - Bayonetta (-75%) - A$9.40
    - Catherine: Full Body (-80%) - A$10.90
    PS+ Monthly Freebies
    Yours to keep from May 1 with this subscription:
    - Ark: Survival Ascended (PS5)
    - Balatro (PS5/PS4)
    - Warhammer 40,000: Boltgun (PS5/PS4)
    Or purchase a PS Store Card.
    Purchase Cheap for PC
    Today, PC players should snap up Sifu while it’s free. This martial arts revenge tale actually aged its protagonist in real-time for every failed attempt, a mechanic inspired by classic kung fu film tropes. And Hogwarts Legacy: Deluxe (A$21.20) hides an early design doc reference to a planned “wizarding gig economy” system that was mercifully scrapped.
    - Sifu (-100%) - FREE
    - Gigapocalypse (-100%) - FREE
    - Deliver At All Costs (-100%) - FREE
    - Hogwarts Legacy Del. (-79%) - A$21.20
    - Razer Huntsman Mini keyboard (-23%) - A$137
    - Metaphor: ReFantazio (-30%) - A$80.40
    Expiring Recent Deals
    - Red Dead Redemption 2 (-75%) - A$22.40
    - Hogwarts Legacy (-75%) - A$22.40
    - Prince of Persia: The Lost Crown (-50%) - A$29.90
    - Assassin's Creed Valhalla (-75%) - A$22.40
    - Maneater (-80%) - A$11.30
    Or just get a Steam Wallet Card.
    Laptop Deals
    - HP Envy x360 16" 2-in-1 (-39%) – A$1,399
    - HP Laptop 15.6" Ryzen (-34%) – A$1,049
    - ThinkPad E14 Gen 5 (-35%) – A$869
    - Lenovo Yoga 7i Gen 9 (-41%) – A$1,229
    - Apple 2024 MacBook Air 15-inch (-16%) – A$2,094
    - Lenovo ThinkPad E14 Gen 5 (-36%) - A$879
    - Lenovo ThinkBook 16 Gen7 (-27%) - A$1,018
    Desktop Deals
    - Lenovo neo 50q Gen 4 Tiny (-35%) – A$639
    - Lenovo neo 50t Gen 5 Desk (-20%) – A$871.20
    - Lenovo Legion Tower 5i (-29%) – A$1,899
    Monitor Deals
    - ARZOPA 16.1" 144Hz (-55%) – A$159.99
    - Z-Edge 27" 240Hz (-15%) – A$237.99
    - Gawfolk 34" WQHD (-28%) – A$359
    - LG 27" Ultragear (-42%) – A$349
    Component Deals
    - MSI PRO B650M-A WiFi Motherboard (-41%) – A$229
    - AMD Ryzen 7 7800X3D (-7%) – A$876
    - Corsair Vengeance 32GB (-35%) – A$82
    - Kingston FURY Beast 16GB (-30%) – A$48
    Storage Deals
    - Seagate One Touch Portable HDD (-24%) – A$228
    - Kingston 1TB USB 3.2 SSD (-17%) – A$115
    - SanDisk 128GB Extreme PRO (-63%) – A$29
    - SanDisk 32GB Ultra SDHC (-53%) – A$9.90
    Legit LEGO Deals
    - Mario Kart – Yoshi (-25%) - A$15.00
    - The Mighty Bowser (-14%) - A$345.00
    - Star Wars R2-D2 (-30%) - A$139.00
    - Star Wars Home One Starcruiser (-19%) - A$89.00
    Expiring Recent Deals
    - Mandalorian Moff Gideon Battle (-42%) - A$35.00
    - Space Construction Mech (-33%) - A$10.00
    - Fountain Garden Building (-28%) - A$129.90
    - Williams Racing & Haas F1 Race Cars (-27%) - A$22.00
    Hot Headphones Deals
    Audiophilia for less
    - Galaxy Buds2 Pro (-31%) – A$239
    - Technics Wireless NC (-33%) – A$365
    - SoundPEATS Space (-25%) – A$56.99
    - Sony MDR7506 Pro (-18%) – A$199
    Terrific TV Deals
    Do right by your console, upgrade your telly
    - LG 43" UT80 4K (-24%) – A$635
    - Kogan 65" QLED 4K (-50%) – A$699
    - Kogan 55" QLED 4K (-45%) – A$549
    - LG 55" UT80 4K (-28%) – A$866
    Adam Mathew is our Aussie deals wrangler. He plays practically everything, often on YouTube.
  • How an airfield in the UK was turned into the Iraqi city of Ramadi for Alex Garland’s ‘Warfare’

    Behind the film’s invisible visual effects by Cinesite, including environments, aerial surveillance footage and those stunning F-18 show of force shots. 
    Warfare, written and directed by Ray Mendoza and Alex Garland, is based on Mendoza’s own experiences as a US Navy SEAL in a deadly moment during the Iraq War. It follows the action as a Navy SEAL platoon takes over a suburban Ramadi street before they come under attack. When they attempt to flee and call in a Bradley Fighting Vehicle, an IED explosion results in severe casualties and a further rescue.
    Cinesite, led by visual effects supervisor Simon Stanley-Clamp, was responsible for the film’s visual effects. This ranged from taking original plates for the house and surrounding street areas shot at an airfield and fleshing out the environment to resemble the Iraqi city, to realizing gunfire and weapon hits, and some dramatic ‘show of force’ F-18 shots.

    Here’s how they did it.
    The shoot
    The film was shot at Bovingdon Airfield Studios in Hertfordshire, UK. The Ramadi street set was completely built there as an outdoor location in the airfield’s car park. “Initially,” recounts Stanley-Clamp, “the plan was to build just the one house and digitally do the rest. But then the plan went to six houses in close quarters around the hero house where the incident takes place. It grew finally to eight houses; one was a complete working house, with a working staircase, and then the houses off it are flattage, but good flattage with enough depth to work.”

    Surrounding the housing set were two ‘massive’ bluescreens, as Stanley-Clamp puts it. “They were 20 feet high by 120 feet wide. Then I had a couple of floating bluescreens on Manitous that we could drive in to plug a gap here and there.”
    Cinesite was then responsible for extending the street environment and completing some sky replacements. “Production designer Mark Digby’s set was so well-built,” says Stanley-Clamp. “Sometimes, with a set for a castle or something like that, when you get up close to the set, you can tell it’s plaster and wood and canvas. But the textures we sourced from Mark’s set are what we use to duplicate and replicate out the rest of the build in CG. Our build was completely based on their architecture.”

    The IED explosion
    As a Bradley Fighting Vehicle arrives at the house and members of the platoon leave to enter it, an IED fixed to a lamppost is detonated next to the tank. Special effects supervisor Ryan Conder orchestrated the explosion. “It was shot with a lot of dust and debris and with light bulbs inside so that it was very bright,” says Stanley-Clamp. “We had prepped visual effects simulations to add to the dust and debris, but Alex essentially said, ‘No, it works, that’s what I want.’ What we did add was some burning phosphorus that stays alight for around four minutes. There was also a small pick-up bluescreen shot for a soldier falling.”

    “After the main explosion,” continues Stanley-Clamp, “there’s the moment where there’s just a lot of smoke. We added about 45% more smoke and tiny particulate, so small you barely register it, but you ‘feel’ its presence. There’s a lot of subtle compositing work going on inside there. At one point, two of the soldiers are standing almost next to each other, but they don’t know that they’re standing next to each other. So we were having to roto each soldier off the plate and then layer smoke back over them and then reveal them and push them back. It was a lot of fine-tuning.”
    Prosthetics designer Tristan Versluis delivered a number of prosthetics and bloody make-up effects for the resulting IED explosion injuries. Cinesite’s contributions here were only minimal, advises Stanley-Clamp. “There’s one particular shot where we put in a fluid blood sim running as a character cuts open the trouser leg. Arterial veins and things should be pumping a little bit of blood, so we put some fluid blood in running off the wound and a couple of other little embellishments.”

    Show of force
    At three points in the film, platoon members call in a ‘show of force’ from an F-18, which involves a loud fly-by of the house designed to intimidate those surrounding it with an almighty sound and pulsating wave of dust and debris.

    “The show of force was going to happen only once,” notes Stanley-Clamp, “and it was one of those shots where we were told, ‘You won’t  see the jet, just hear it.’ Well, in the end, they wanted a trailer-type shot for this. Also, that first show of force is the only time we used a bit of fancy camera kit where we were on a long arm and dropped the camera down. Usually, we were right there all the time with the platoon. For that shot, Alex said, ‘Faster, faster, faster—what happens if you run it double speed?’ We ran it double speed and it worked.” 
    “We worked with the physics of the environment and we measured everything out,” adds Stanley-Clamp. “I mean, it’s traveling at something like 400 miles an hour. With the camera coming down, there was actually a weird optical illusion. It made it look like the plane was going up. So we had to do some tricky things to make that work.”
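    The arithmetic bears out why the pass was tricky. A back-of-envelope check in Python, using the roughly 400 mph quoted above and a 24 fps base rate:

    MPH_TO_FTPS = 5280 / 3600            # feet per second in one mph
    speed = 400 * MPH_TO_FTPS            # ~586.7 ft/s at the quoted airspeed
    print(speed / 24, 2 * speed / 24)    # ~24.4 ft between frames; ~48.9 at double speed

    At some 24 feet of travel per frame, even a small inconsistency in the descending camera move can read as the jet heading the wrong way, consistent with the optical illusion Stanley-Clamp describes.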
    For the resulting wave of dust and debris, Cinesite had Lidar scans of the set, and used a model of the houses and street to aid in simulations and extra backdrafts, utilizing Houdini. Says Stanley-Clamp: “We even went in and added moving palm trees, put more sand on the ground that could lift up, and then would scrape it back so you are left with patches of exposed ground.”
    Stanley-Clamp’s other main memory of those show of force moments was the sound. “So, the set was rigged for sound, meaning, the sound was built into our set. When that show of force happened, the first time it happened, I was looking for that fucking jet! Where did that come from?! It was absolutely deafening. Same goes for the call to prayer, the dogs barking, people chattering out in the street, it was all there.”
    Aerial surveillance
    Inside the house, the platoon has a computer with aerial maps and surveillance of their location, showing the house from above and movement around it. These screens were initially intended to only be featured briefly, but Stanley-Clamp took it upon himself to prepare some graphics that could be played back during the shoot. “In editorial,” he says, “they started to cut in the graphics that had been made, and they found it really helped with the exposition. So, they needed more.”

    Using the CG models constructed for the set extensions, Cinesite expanded the buildings out to a full grid of streets and residences. Adding in soldiers and other people was then necessary for the surveillance screens. At a pick-up shoot back at Bovingdon airfield, this time on the actual runway, a large 400-foot-long bluescreen was laid on the ground, and a drone was used to film action from 200 feet up in the air.
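    As a rough sanity check on those numbers (the article gives no lens spec, so the field of view below is purely an assumed value, typical of wide drone cameras):

    import math

    altitude = 200.0   # ft, from the article
    hfov = 73.0        # degrees; an assumption, no lens is specified
    ground = 2 * altitude * math.tan(math.radians(hfov / 2))
    print(round(ground))   # ~296 ft of ground visible across a static frame

    A single hovering framing would cover most, but not all, of the 400-foot bluescreen, which suggests why the strip needed to run longer than any one static camera footprint.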
    “We used this to film the equivalent of running down a street,” outlines Stanley-Clamp. “We had actually previs’d it with walk cycles that I had generated myself, but Alex said, ‘It’s got to be real people. Not mocap. You can tell they’re pixels, you can tell.’ For a day shoot, I got hours and hours and hours of footage, which I could never have generated in CG. Plus, I got whole platoons to walk down the street, not just individual people.” 
    The result was a collection of elements that brought some realistic-looking parallax to the surveillance screens, suggests Stanley-Clamp. “In fact, at one of the test screenings with some marines, the feedback from them was, ‘Where did you get hold of the footage? It’s so good.’ They thought it was real footage.”
    For the actual look of the footage, Cinesite consulted with Mendoza on whether it took on an infrared, ultraviolet or ‘heat seek’ look, the latter of which is what they settled on. “Right up until very close to the end,” notes Stanley-Clamp, “I thought, ‘It’s not quite right.’ So we grunged it down and raised it back up. It was looking too clean. We had to remember this was set in 2006 and effectively the tech then is a little different. You can buy night-vision goggles now or shoot night vision with drones and the quality’s ridiculous. But we had to go back to the reference, although we found that it can be hard to find that old reference.”
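    The “grunged it down and raised it back up” pass is, in spirit, a degrade-then-regrade: soften detail, crush the grey levels, add sensor noise, then lift contrast back. A toy version in Python/NumPy (parameter values are guesses for illustration, not Cinesite’s grade):

    import numpy as np

    def grunge(frame, block=3, levels=16, noise=0.03, gain=1.15, seed=7):
        """Degrade a too-clean luminance frame (floats in [0, 1]) toward mid-2000s drone imagery."""
        h, w = frame.shape
        h, w = h - h % block, w - w % block    # crop to a block multiple
        soft = frame[:h, :w].reshape(h // block, block, w // block, block).mean(axis=(1, 3))
        soft = soft.repeat(block, axis=0).repeat(block, axis=1)    # block-average blur
        crushed = np.round(soft * (levels - 1)) / (levels - 1)     # quantize the greys down
        crushed += np.random.default_rng(seed).normal(0, noise, crushed.shape)
        lifted = (crushed - 0.5) * gain + 0.5                      # raise it back up
        return np.clip(lifted, 0.0, 1.0)

    Dialing levels and noise against period reference is the whole game: too little and the render stays contemporary, too much and it stops reading at all.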
    Subtle effects
    Warfare’s use of subtle visual effects extended also to weaponry. For shots requiring the Bradleys to fire from their central gun barrels, Cinesite provided a large muzzle flash and resulting smoke, timed to practical explosions rigged to buildings. Gunshots and muzzle flashes were also added to soldier firearms, along with accompanying CG bullet, phosphor and masonry hits.

    Cinesite’s muzzle flashes related directly to the choice of camera. The film was shot largely on a DJI Ronin lightweight camera that allowed for fast set-ups and being able to maneuver in small spaces. “We did some experiments and found that shooting at 30 fps gave us the best retention of muzzle flashes,” explains Stanley-Clamp. “You would not always see a muzzle flash go off, so sometimes we’ve enhanced a muzzle flash that’s in there or put additional ones in.”
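    Why flashes go missing in the first place is easy to demonstrate: the shutter is only open for part of each frame interval, so a very brief flash can land entirely in the gap. A small Monte Carlo sketch (the flash duration and 180-degree shutter are assumptions for illustration, not production figures):

    import random

    def flash_capture_rate(fps=30, shutter_deg=180, flash_ms=1.5, trials=100_000):
        """Estimate the fraction of brief muzzle flashes that overlap an open shutter."""
        frame = 1.0 / fps
        open_w = frame * shutter_deg / 360.0      # exposure window within each frame
        flash = flash_ms / 1000.0
        hits = 0
        for _ in range(trials):
            t = random.uniform(0.0, frame)        # flash start within one frame interval
            # seen if it overlaps this frame's window, or spills into the next one's
            if t < open_w or t + flash > frame:
                hits += 1
        return hits / trials

    print(flash_capture_rate())   # ~0.55 under these assumptions

    With roughly half of all flashes falling in closed-shutter gaps under these assumptions, enhancing or adding flashes in post is less a stylistic choice than a statistical necessity.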
    “It was a bit different for something like tracer fire,” adds Stanley-Clamp. “You might think tracer fire is there the whole time. It’s not. It’s about every fifth shell that goes off, that’s where you will get a tracer fire. Alex would be counting them. ‘No…no…now!’ That was a good learning curve.”
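    That one-in-five spacing is straightforward to lay out as a timing track for the CG elements (the cyclic rate here is an assumed, illustrative figure; the article does not give one):

    CYCLIC_RPM = 600                              # assumed rounds per minute
    dt = 60.0 / CYCLIC_RPM                        # 0.1 s between shots
    shots = [(round(i * dt, 1), (i + 1) % 5 == 0) for i in range(20)]
    print([t for t, tracer in shots if tracer])   # [0.4, 0.9, 1.4, 1.9] - the beats to count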

    The post How an airfield in the UK was turned into the Iraqi city of Ramadi for Alex Garland’s ‘Warfare’ appeared first on befores & afters.
The post How an airfield in the UK was turned into the Iraqi city of Ramadi for Alex Garland’s ‘Warfare’ appeared first on befores & afters.
  • MetaHuman MOCAP + UE5 Camera Operator Simulator

    Breakdown of using Vicon Motion Capture (MOCAP) and MetaHumans in Unreal Engine 5.5 to make a real-time camera operator training simulator.

    The app is called "Cam Op Simulator" and it's available on Steam for PC and macOS (Apple Silicon). A rough sketch of what the retargeting step does follows the chapter list below.

    0:00 - Intro
    0:40 - MOCAP Studio
    2:27 - MOCAP Shoot
    5:33 - MetaHuman Retargeting
    9:11 - UE5 Sequencer Assemble
    14:03 - Cam Op Sim Live Demo
    17:49 - Conclusion
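    The MetaHuman Retargeting chapter (5:33) maps captured joint rotations from the mocap skeleton onto MetaHuman bone names. Below is a minimal, engine-agnostic sketch of that idea; the source joint names and the single frame of data are hypothetical, and a real UE5 IK Retargeter does far more (bone chains, IK goals, root-motion scaling).

```python
# Toy skeletal retargeting: copy per-joint rotations across a name mapping.
# Source joint names are assumed Vicon-style labels; targets are standard
# MetaHuman/UE mannequin bone names.

BONE_MAP = {
    "Hips": "pelvis",
    "Spine": "spine_01",
    "LeftUpLeg": "thigh_l",
    "RightUpLeg": "thigh_r",
    "LeftArm": "upperarm_l",
    "RightArm": "upperarm_r",
}

def retarget_frame(mocap_frame: dict) -> dict:
    """Remap one frame of joint rotations (e.g. quaternions) onto the target
    skeleton, dropping joints the target has no equivalent for."""
    return {BONE_MAP[j]: rot for j, rot in mocap_frame.items() if j in BONE_MAP}

# One hypothetical frame: joint -> quaternion (w, x, y, z)
frame = {"Hips": (1, 0, 0, 0), "LeftArm": (0.92, 0.38, 0, 0), "Tail": (1, 0, 0, 0)}
print(retarget_frame(frame))   # "Tail" has no MetaHuman target, so it is dropped
```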