Befores & Afters
A brand new visual effects and animation publication from Ian Failes.
Recent Updates
  • How the gorge was made in The Gorge
    beforesandafters.com
    Behind the visual effects by DNEG in the film.

    The premise of Scott Derrickson's The Gorge involves two characters, Levi (Miles Teller) and Drasa (Anya Taylor-Joy), each finding themselves responsible for the surveillance of a mysterious gorge, but on opposite sides of the chasm.

    For scenes of the characters in their watchtowers and around the top of the gorge, DNEG was tasked by production visual effects supervisor Erik Nordby with delivering the above-gorge environment. Production filmed the actors at Warner Bros. Studios, Leavesden. "There was a strip of that pathway where Levi walks on the western edge," outlines DNEG visual effects supervisor Anelia Asparuhova. "That pathway was practical, and the tops of the towers were practical, but everything else was CG."

    DNEG relied on LiDAR scans and texture photography from Norway to build the digital gorge asset. "There was also some scanning that Erik Nordby did in Scotland that we used for the detailed level of the rocks," says Asparuhova. "To build the gorge, we reasoned it would hypothetically be located in central Europe, and so we took ideas not only from Norwegian gorges but also ones in Greece, Turkey and Bulgaria. That's why you see that the forests were mostly coniferous, but also with a few deciduous trees sprinkled here and there to suggest that we're in that north central part of Europe."

    "We even hired a geologist in the first few weeks," adds Asparuhova. "We were really, really adamant to make sure that, even though we were driving it artistically, we didn't want to build things that couldn't exist in nature."

    DNEG also delivered digital environments for scenes when the characters are in the surrounding forest, including for an action scene involving drones. "We had some scenes, for example," notes DNEG visual effects supervisor Sebastian von Overheidt, "where Drasa is hiding behind a tree. She's running through the forest. There'd be maybe two real trees behind her, but the rest is all full-CG forest."
    "We had to figure out the forest ground, the exact species of the trees, the exact look and color of the leaves and all those things."

    In addition to the gorge itself and the surrounding foliage, DNEG was responsible for generating a thick layer of fog around the gorge's top. "For that," advises Asparuhova, "we did a few tests of what the fog was supposed to look like. The fog is almost like another character in the movie, and we went through a few different levels of density. We wanted to make sure that it wasn't too flat because obviously that is just not something very interesting to look at. But we did look at a lot of canyons with fog in them just to get some ideas of what that could look like. We wanted to make sure that it had enough motion in it, enough movement to keep it alive, but not to distract from the rest of the action."

    "We had to build the gorge walls deep enough for it to not end up with any issues and any edges," continues Asparuhova. "They were quite heavy renders because we had the fog, we had the waterfall and other elements."

    Close-up views of the fog, and moments that required fog interaction, had their own challenges. For instance, Levi ziplines over to the other side of the gorge. "For those shots of Levi going over the gorge," says von Overheidt, "you get quite unusual steep angles into the fog. You see the transition where the fog meets the rock, the fall-off. We would add extra simulations to make sure we got a nice fall-off and nice detail. Then later for the quadcopters that rise from the fog, we had a different kind of problem to solve, as we are looking flat across the fog and we are right on top of it."

    At one point, Drasa jumps down into the gorge (and through the fog) after Levi's zipline snaps. "For that," says von Overheidt, "we had the base fog volume and then added additional cloud simulation inside the fog as she's jumping through."

    For the jump itself, Anya Taylor-Joy performed the leap and then landed on a crash mat.
    DNEG took over Taylor-Joy's jump with a digital version of the character. "We followed the animation of her jumping off with a body track and very close shot sculpting," explains von Overheidt. "We then transition into the full-CG moment, which means we're decoupled from the matchmove and we're able to do our own camera move as the dive into the wispy foggy clouds takes place."

    Once out of the gorge, Levi and Drasa are pursued through the forest by quadcopter drones, which were crafted by DNEG. "We had a practical reference of a one-to-one size model of the drones that they really threw down the forest," says von Overheidt. "This was a good reference for lighting and how much dirt got kicked off. There was also some pyro going off that we could use as a reference. We then created a full clean plate of that scene, and a full CG forest behind the scene, and then had our drone assets tumble through with full FX simulation."

    Drasa and Levi's actions invoke the Stray Dog protocol that results in a nuclear blast, triggered by a series of smaller explosions. DNEG looked at various pieces of footage from nuclear tests and blasts. "We had to tailor-make an explosion to fit into the gorge," describes Asparuhova. "All the explosions we had seen had been happening on flat ground, but here we had to look into the physics of it happening in a gorge. The explosion actually happens below. Normally, you would get the explosion, then you would have the shock wave, and it would obliterate everything. Our challenge was, how do we do this several hundred meters under and still make it look realistic?"

    "It also included the destruction of the towers," adds Asparuhova, "which we had to break into pieces and then blow them away. There were trees we had to animate, too. We had some really interesting footage from real trees showing how they bend and how they start smoking with the explosion."
    "In the end, it was first a bunch of smoke that comes out and then the shockwave hits and literally bends and obliterates everything in its way."

    The explosion is a massive moment in the film, of course, but DNEG was also responsible for other, much more subtle moments. Asparuhova identifies the times when Levi and Drasa are peering at each other through their binoculars. "For those shots, there was this subtlety required in terms of, how do you not obstruct the story? We had to show that these two are falling in love with each other, but we still had to show it as them looking through binoculars so that the audience understands what's happening. Sometimes it's the subtle things that you have to spend a lot of time and a lot of thought on, just so that it's as seamless as possible."

    All images courtesy of DNEG. © 2025 Apple Inc.

    The post How the gorge was made in The Gorge appeared first on befores & afters.
  • Watch these Kraven the Hunter VFX reels
    beforesandafters.com
    From Image Engine and Rodeo FX.

    The post Watch these Kraven the Hunter VFX reels appeared first on befores & afters.
  • Important Looking Pirates breaks down Skeleton Crew VFX
    beforesandafters.com
    Watch their new video breakdown.

    The post Important Looking Pirates breaks down Skeleton Crew VFX appeared first on befores & afters.
  • Pitch & Produce: AccuFace as the Solution for Rapid Film Production
    beforesandafters.com
    Geoff Hecht on his film, Love Is A Championship, and how he used AccuFACE for animation. Check out the article and podcast.

    Geoff Hecht is a seasoned 3D and VFX artist based in San Francisco, California. A graduate of the Academy of Art University with a degree in Animation and Visual Effects, Geoff directed a team of 78 on the animated short Metro6, which was featured in nearly 70 international film festivals and ranked among FilmAffinity's Top 100 Best Animated Shorts of 2020. Building on this success, he is now directing his second animated film, Love is a Championship (LIAC).

    LIAC is a short-form animated romantic comedy that creatively parallels sports with dating. The project brings together over 30 artists from six countries and features a cast of 10 actors known from Netflix, Paramount Plus, Amazon Prime, and major films. Under Geoff's direction, the film humorously likens penalty calls in sports to the awkward missteps of relationships. By mid-2023, LIAC had made significant progress in storyboarding and production development.

    Surpassing Traditional Facial Animation

    Facial animation has traditionally been a time-intensive hurdle in filmmaking, especially with the challenges of precise lip-syncing. For LIAC, Geoff Hecht turned to Reallusion AccuFACE to streamline the process. This decision proved transformative, as AccuFACE enabled pre-recorded video to drive virtual character performances with a flexible workflow. By separating body and facial animations, the LIAC team achieved significant time and cost savings in production.

    AccuFACE: Transforming Facial Animation Workflow

    In December 2023, Geoff's team completed a motion capture shoot for body animation, postponing facial animation to February 2024 due to onsite limitations. AccuFACE's support for remote recordings proved invaluable, enabling actors to participate via Zoom when in-person sessions were not an option.
    By removing the need for restrictive equipment, AccuFACE allowed actors to deliver natural performances, enhancing the quality of the final animation.

    Efficient Facial Animation Production with Unyielding Quality

    Reallusion's iClone and AccuFACE enabled Geoff to mash up the best takes and minimize the need for reshoots on LIAC. Its compatibility with industry-standard tools like Maxon C4D, Maya, Unreal Engine, and Blender ensured seamless integration across the diverse production pipeline that Geoff was already used to. With Redshift rendering and a paint-over workflow, the film took full advantage of AccuFACE's adaptability and efficiency to overcome various technical hurdles.

    Aria Song recording her performances for AccuFACE.

    Revolutionizing Facial Animation for Modern Filmmaking

    For Geoff, AccuFACE wasn't just a technical upgrade; it represented a creative breakthrough. By cutting production costs, speeding up workflows, and preserving high-quality standards, AccuFACE became a vital tool. As Love is a Championship nears completion, its use of AccuFACE underscores how innovative technology can empower creators and transform the way stories are told in animation.

    Jenna Louie Lohouse performing real-time capture in AccuFACE.

    Additional Information

    Love is a Championship is a passion project directed by Geoff Hecht. With a team of 30 people, Geoff understands that a systemic approach is essential to manage both the team and production time with efficiency and precision. Since its announcement in November 2023, Geoff and the team have completed the script, the full animatic storyboard, body motion capture, and several animation scenes.

    Lead Animator Michael Grassi and the team are proud users of Reallusion Character Creator and iClone.
    Having released several tutorials online, they invite you to join them on their animated filmmaking journey.

    Editing visemes in iClone AccuLips.

    Outreach

    By subscribing to their creative channel and Patreon, you can gain exclusive access to behind-the-scenes footage that walks you through the tools and methodologies used to bring Love is a Championship to life. For instance, Geoff has worked closely with Noitom and proudly used their Perception Neuron body mocap for full-body movements. They put these professional voice actors into mocap suits, having them perform real actions from the script, and it all paid off.

    Aubrey Trujillo waiting for body mocap.

    Unlike traditional releases, where the audience must wait until the film's completion to see the final behind-the-scenes video, Geoff's team is sharing these insights progressively, leading up to the film's ultimate release. Geoff refers to this approach as "Before the Scenes."

    It's a wrap! All the motion from LIAC has been captured by the team.

    Brought to you by Reallusion: This article is part of the befores & afters VFX Insight series. If you'd like to promote your VFX/animation/CG tech or service, you can find out more about the VFX Insight series here.

    The post Pitch & Produce: AccuFace as the Solution for Rapid Film Production appeared first on befores & afters.
  • Brand new challenges for a familiar bear
    beforesandafters.com
    How Framestore tackled fresh shooting methodologies for Paddington in Peru, while drawing on its rich history in character animation. An excerpt from befores & afters magazine.

    When Framestore embarked on its third Paddington film, Dougal Wilson's Paddington in Peru, the visual effects studio did so with significant experience in crafting furry creature performances from the first two movies, and from a range of other projects such as the Guardians of the Galaxy films, His Dark Materials and Christopher Robin, among many others.

    Indeed, all the lessons Framestore had learnt over the years in creating such creatures would be drawn upon for Paddington in Peru, from animation, to fur grooming and simulation, to rendering. Meanwhile, a significant change this time around was the setting. The earlier Paddington films were predominantly set in London and, accordingly, shot on London locations. Paddington in Peru was set, of course, in Peru, where Paddington and the Brown family head in search of a missing Aunt Lucy. However, it was not principally filmed in that country, as Framestore and production visual effects supervisor Alexis Wajsbrot explains.

    "Some of the locations we wanted to go to were so remote that really only one guy could access the location after, say, six hours of crazy car and boat and train rides, that we decided very early on that we would shoot the principal photography with the actors in the UK at Sky Studios Elstree and at a couple of farms also in the UK. There were also plate shoots in Colombia and Peru, but the majority of the shoot was in the UK."

    This meant that, for exterior Peru scenes, the actors would largely be filmed on partial or bluescreen sets, and then composited into separately filmed background plates or digital environments. Looking to ensure the film remained as grounded as the first two Paddington movies, Wajsbrot says a concerted effort was made to integrate the foregrounds and backgrounds.
    "We wanted to feel that the backgrounds were as real as possible, so we wanted to shoot as much as we could for the background in order to not feel like it was a show with fully 3D backgrounds."

    Thus, a major location scout exercise was undertaken in pre-production, led by director Wilson and production designer Andy Kelly. "They went to Peru and scouted loads of locations," says Wajsbrot, who worked with visual effects producer Nick King on the film. "Another team scouted locations in Colombia. Then this fed prep work for storyboarding the whole movie and prevising and techvising, something Framestore Pre-Production Services (FPS) was a big part of."

    Wajsbrot identifies one particular sequence as a classic example of the shooting approach that Paddington in Peru took. This was for a scene of the Brown family on a boat on a river in the Peruvian jungle, as it encounters a series of rapids. The boat scenes were captured on a boat mock-up sitting on a motion base and filmed against bluescreen (the film's special effects supervisor was Mark Holt). "For that scene, not only did we do previs and techvis to know which speed the boat was going, we also used that to work out what height the camera would be when you are in the saloon or when you are above the deck. Then we'd go shoot with a camera array in Colombia to give us flexibility for all the angles and heights. It gave us hours and hours of plate photography of the river, of backgrounds, of the beach and the rainforest."

    Previs. Plate. Final.

    Similarly, for scenes that take place at an Incan fort, production filmed plates at Machu Picchu (the tourist-heavy location meant that people would need to be painted out of the background plates). Moments that involved the characters, who were not filmed on location, required the building of partial sets, set-pieces or bluescreen setups in the UK that matched the background plates that had been shot.

    Furthermore, the many background plates went through a meticulous selects process.
    "We came back from shooting in Colombia and Peru," recalls Wajsbrot. "It was literally three weeks before we started main photography. The DOP Eric Wilson, Dougal and I locked ourselves in a room and we watched hours and hours and hours of footage of backgrounds. This was actually very hard because when you see a background, it can be anything. But we had to select backgrounds because the process that we put in place with Eric was that we would pre-select all the backgrounds and then, when we were shooting principal photography, not only could we live-comp the foreground with the background for immediate feedback on the camera angles, but we could also make the lighting match."

    For those background plates, to which Framestore would commonly be adding a CG Paddington, in addition to other environmental or compositing work, a classic balls-and-charts reference was also shot at the Peru or Colombia location. This was then replicated to some degree during principal photography, with Wajsbrot encouraging Wilson to shoot with a stand-in also holding a gray ball. It allowed the cinematographer to match the lighting in the real-time comp, and of course benefited the VFX teams later in post-production. "Eric and I were able to look at the live-comp and make adjustments right away, perhaps tilting the camera up or down," says Wajsbrot. "In the end, that was the gag, that it needed to look like we shot everyone over there, and I think we really achieved it."

    "Dougal also has his very unique way of directing where he rehearsed everything with anyone," adds Wajsbrot of the way the filming progressed. "He shoots everything with what he calls his 'crap-o-matic', basically on an iPhone. So, you have a version of the movie somewhere out there that must be very funny where everyone on set is playing a role or different roles.
    It's actually very funny."

    Wajsbrot is particularly proud of a moment where background and foreground plates were able to be matched across two different drone shoots for a camera-crane-like pullback on the family, who are on a beach. One shoot was a background plate in Colombia shot with a DJI drone, and the other was a foreground element shot in the UK, also with a DJI drone. It was, effectively, a motion control multi-pass done with the drones, relying on the GPS coordinates recorded by the drone to provide accurate positioning measurement. "What we wanted to do was record a plate in Colombia with the drone and then, in the backlot of Elstree Studios, where we had a set, repeat that same motion," explains Wajsbrot. "We worked with our drone team, The Helicopter Girls, to research how to translate all of the coordinates and offset them to where we were in London."

    The shoot at Elstree was not without some complications. Here, a beach set was built, with the Brown family also needing to emerge from the river, so a pool of water was also employed. The drone team rehearsed the drone pullback on the weekend and were able to show through live-comps that the two drone plates, from Colombia and from Elstree, matched up. "Then," shares Wajsbrot, "for the real shoot, they pressed action and the drone went down instead of up. Suddenly, the drone is underwater in the little pool! Unfortunately the drone was destroyed and they had to bring in a different one. We re-did the shot and it worked great. I think it's a really good step forward in the world of how to do motion control in a simpler way."

    issue #28 Paddington in Peru

    A motion control shoot of a different, and more traditional, kind was orchestrated for scenes featuring Antonio Banderas' Hunter Cabot character and Hunter's ancestors, also played by Banderas. In the film, these ancestors appear to be talking to Hunter, and to be controlling him and driving him in search of gold.
    Explains Wajsbrot: "Some of the techniques we used were real motion control, some were 'faux' motion control, meaning that we had the camera operator redo a scene just by hand. In comp, we could stabilize it enough that it just worked. The most complex scenes were where Cabot is talking to himself, very closely. In that case we'd use a double for the shot and then replace the face. We also used a different technique for where he'd need to shadow himself, but he's the same person. Sometimes the performance was too long, so we had to morph Antonio to make his performance shorter and then readjust the eyeline."

    Shooting the multiple Antonios, up to seven characters, was a challenge simply from a costume and make-up point of view. "Each costume change was about four hours," says Wajsbrot. "Sometimes you have all of their ancestors together in the same place, which meant we couldn't shoot them all on the same day. In that case, we had to split it over two days and ensure the motion control camera setup was in place. It was definitely a bit of a puzzle for our first AD, Nick Laurence, to put all of that together."

    The post Brand new challenges for a familiar bear appeared first on befores & afters.
  • And the VFX Oscar went to…Dune: Part Two
    beforesandafters.com
    Congratulations to all the nominees!

    The Oscar for Best Visual Effects at the 97th Academy Awards was awarded to Dune: Part Two (Paul Lambert, Stephen James, Rhys Salcombe and Gerd Nefzer).

    Here was the acceptance speech:

    Paul Lambert: "Thank you once again to the Academy. Thanks to Brice Parker, Jason Garber, our amazing cast and crew, all my fellow maître d's, producers, production, Legendary, Warner Brothers, DNEG, Wylie Co., Rodeo, Territory, the incredible MPC, and of course our visionary director Denis Villeneuve. To my wonderful wife Mags and my two boys Boston and Jackson, who are again sat up there in the roof. Thank you."

    Stephen James: "Thank you to everyone at DNEG and all of their families, and this is dedicated to Linda. Thank you."

    Gerd Nefzer: [Speaks German] "Denis, thank you. Big thank you to my wife and my family, my daughter Janna, my son Luca, my mother Elke, my father-in-law Karl, who brought me into this business 38 years ago. What a great evening."

    Here are the other nominees:

    Alien: Romulus (Eric Barba, Nelson Sepulveda-Fauser, Daniel Macarin and Shane Mahan)
    Better Man (Luke Millar, David Clayton, Keith Herft and Peter Stubbs)
    Kingdom of the Planet of the Apes (Erik Winquist, Stephen Unterfranz, Paul Story and Rodney Burke)
    Wicked (Pablo Helman, Jonathan Fawkner, David Shirk and Paul Corbould)

    The post And the VFX Oscar went to…Dune: Part Two appeared first on befores & afters.
  • How to animate a rat being smashed…and do it backwards
    beforesandafters.com
    The intricate stop-motion and puppeteering crafted by Tippett Studio for Alien: Romulus. An excerpt from befores & afters magazine in print.

    A key story point in Fede Álvarez's Alien: Romulus involves time-lapse monitor footage of a rat being crushed in an experiment, and then shown to somehow regenerate itself. That footage included stop-motion animation and puppeteering orchestrated by Tippett Studio at their facility in Berkeley, California.

    Here, members of the team tell befores & afters about the build involved, and the complex nature of the animation project, since it required going from a crushed rat to a regenerated one, and a number of steps in between. Plus, they discuss lining up their animation with real rats as well.

    b&a: What was the brief here from Fede?

    Chris Morley (visual effects supervisor and director of photography, Tippett Studio): Fede is a big fan of Phil Tippett and practical effects and just really wanted this to happen. When he described the sequence he said, "Basically, you guys are going to smash a rat and then the rat regenerates." And that's it. So it was very, very basic direction and he trusted us from the very beginning. He knew that we had worked on other stop-motion stuff. We also have a rich history of visual effects, so we know how both can work together seamlessly and efficiently.

    What was really cool was he gave a brief explanation of what the scene was without getting too detailed into the story. He just mentioned that it was kind of a pivotal storyline to the whole picture. So that was, like, "Okay, that's some pressure. This is going to be really cool."

    They had some postvis, so it was really clear what they were trying to achieve, and then it was just about all of us.
    We've worked together for over 20 years, a lot of us, so it was just about putting our heads together to figure out how we were going to do this.

    Lisa Cooke (visual effects producer, Tippett Studio): When the brief came in and it said, "Smash a rat and it comes back to life," I had no idea how we would do it, but knowing this team, I knew they would figure it out. Disney was very particular. You cannot harm a rat. You cannot even use any piece that was part of a real live rat, so the rat puppet that we created was completely built by hand.

    Chris Morley: Yes, Disney has a special soft spot in their heart for Rodentia. I don't really know what that's about...

    b&a: To work out how to do it, what kind of conversations did you have about how this would be done with stop-motion and puppeteering?

    Chris Morley: Well, usually we'll just think about ideas first. We'll say, "Let's visualize the final piece. Okay, smashed rat regenerates. We have to do it stop-motion, so we're not going to take a totally smashed rat and stop-motion regenerate it, we're going to shoot it backwards."

    This is where we all have different ideas and then we come together. Then people like Gibby, Tom Gibbons, have so much experience with stop-motion, because his history is in stop-motion before digital stuff, and so Gibby's usually a voice of reason of, "Here's how it could work."

    Gibby, I know there was a lot of back and forth with what stages to shoot when. Do you want to talk about what we came up with in terms of what would be the best scenario for the regeneration?

    Tom Gibbons (lead stop-motion animator, Tippett Studio): When we first started talking about this, I remember we ran the gamut of ideas, one of them being a rod puppet. I remember we were going to do a rod puppet or we were going to do strings. We were going to shoot it at speed, basically, and see what performance we could get.
    And after talking that through with some of the other ideas, I remember I was pretty adamant about not doing that, but it was a great idea. The whole arsenal of old-school tricks to figure out the best way to do this was still on the table. And we rolled back around to stop-motion because it was the most controllable in the sense of, we could go slow, and we could make all the shapes potentially in the poses that we wanted to as we went from smashed rat to real rat.

    The look was intended to be similar to when animals die and then are being cleaned and decaying. They have that time-lapse look, where they gas up and then collapse in on themselves. So the idea was we would reverse that kind of weird time-lapse look you can get where the animals gas up and then just collapse in on themselves.

    I tried to think of different ways to do that and I went down some bad rabbit holes. It became a combination of all the bad ideas coming together to make a single skin that we could use to have the rat be a rat in the very beginning and then basically just kind of crush it down by using foil and wire and all sorts of just weird, silly things. We even had push rods. So, if I collapsed the rat too quickly, we installed push-rod shapers underneath that I could use to basically push the skin back out again to reclaim the shape that I wanted to get to.

    Chris Morley: It was all about the fabrication of this rat because that's what truly makes or breaks the technique, and coming to the conclusion of, "This is what we need. We don't need a crazy machined armature or anything, we need pliable surfaces that we can plump out and also dent in." That's when JD and Mark came in to help with the fabrication and the design of this rat.

    In fact, we originally got a rat sent to us that Alec Gillis had made. Thankfully, they sent a faux pelt.
    That was a big concern for us because we wanted to get a real taxidermy rat pelt, but that was out of the question because Disney said, "No animal products whatsoever."

    John 'JD' Daniel (puppet fabrication, Tippett Studio): Yes, Alec Gillis sent us this amazing and beautiful model of a white rat, but it was based off of a sewer rat. We actually had two live rats in our studio that were used in the beginning and the end of the sequence, and we had to match those absolutely. Lab rats are usually kind of small and round, and so we needed to take this amazing rat pelt and all of these materials and completely strip it down to all of its parts. We literally recreated a new rat to match our hero live actor rats.

    What was wonderful was that, along with the puppet itself, Alec Gillis sent extra pelt on wax paper, and some extra bones and organs, and we used every part of that. It was wonderful. But then basically the entire skin was pelted and then foil was placed in so we could animate with it, and then the skull had to be reconstructed and new eyeballs added in as well from a local glass artist.

    Mark Dubeau (art director, Tippett Studio): In addition to building a rat, we had to create this entire set around the rat. Production sent us pallet after pallet of props and sets and tables and whatnot. It was actually kind of stunning to see all the stuff show up, and then we had everything: lab coats, microscopes, syringes, you name it. We had to go in and match not only the look of the way everything was set up on their stages over in Budapest, but also the lighting. I know Chris and Ken spent a lot of time trying to make it feel like it was set in the same world. Even though in a lot of cases we were seeing it on a monitor, you still wanted it to feel like it was in the same place.

    Chris Morley: It was totally awesome.
    It was about a 10-foot-long bank, so we were able to make a counter and it had a backing, and then we had all of the props set up that would be slightly off screen. In the end, it was quite simple to match the lighting because there was a very soft top light. We would do little things like making sure the little 'W' logo for Weyland was somewhere in frame so you could see it. We lit and shot everything to hold up on a big screen, but I just love the fact that this was going to go on a CRT monitor and look aged. We didn't hold back on the detail and the precision, even though it was going to be on a CRT monitor. It just makes it that much better.

    Mark Dubeau: It was surprising when we got the enclosure that the rat is in. They had actually engineered it so it would go down with a motor, and it could potentially crush the rat for real. We had to go in and put a lock nut in there just so that there wasn't any accident with the real rat. We wouldn't have been able to stop it. It just went down like that when you gave it power, so we had to make sure that we didn't wind up accidentally killing a rat.

    b&a: What were the materials that you were using to really sell the gore of it, especially the insides of the rat and the muscles and that sort of thing?

    Tom Gibbons: The kit that we got was not only fur shaped like a rat, but if you took the skin off, there was what we called the 'meat rat'. It was literally just a muscled, sculpted version of a rat out of silicone. I cut pieces off of it in the shapes that I wanted so that when I broke the skin I could push little mounds of meat and bone through it. In order to get that stuff, it's jelly and Vaseline and meat silicone and little plastic bones. I shaped the skulls and the ears out of aluminum rubber.

    I was given the restriction of not making it too splattery or gory, so it only happens in a couple of very isolated places.
Chris did a pass on it to amp the bloodiness up to taste without getting us in trouble.

John 'JD' Daniel: Tom also rebuilt the entirety of all of its little claws and then painted them up with latex to make these wonderful little animatable hands.

Tom Gibbons: Yes, the one thing that we didn't get from the original puppet was feet and hands. They were more or less little rubber paddles. They would hold up on film, but they just wouldn't in animating, so we remade them. Fortunately, that kit they sent us was awesome. We were true scavengers. We used every bit of that thing for our own.

Chris Morley: And, as far as orchestration and the paths forward that were revealed to us, there was a lot of, "Let's do a little of this, then we have to skip to this, and then we have to go back to this," because we were dealing with live animal rats.

Lisa Cooke: We had to have a rat handler and a rat safety monitor to watch the rat handler. We had two rats, Grayson and Sneaky. They even had their own green room.

Chris Morley: In terms of the real rats, we had this machine, and it had this big waffle iron thing that smashes down. So, as JD mentioned, we had to rig the machine with a screw so it would not fall down at all while a real rat was in there. The thing is, we had to get the plate of the post-rat regeneration first, because we needed an exact pose and position to give to the fabrication team: Gibby, JD and Mark. That's why you hear Gibby talking about resculpting the ears, resculpting the hands. It was all just to match that single frame. Then as soon as we'd line up to that single frame, or line up to the live action, it would cut to the live action.
The way that Fede designed the shot, it was supposed to be time-lapsed, so there were nice skips and jumps, which made it very easy to blend it back into the live action.

I remember we had the camera set up from the day that we shot the real rats in there, and we locked that camera and left it for the stop-motion shoot with a barrier around it so that no one would kick it. The focus was locked, everything, and we used that to constantly go back in with our fabricated rat and A-B it with the last frame of the practical rat until their silhouettes matched.

Mark Dubeau: Weren't the rats chosen because one of them was kind of cute, fuzzy and young looking, and the other one was rougher looking?

Lisa Cooke: I think we got lucky. Grayson had a really bad hair day and Sneaky was more together, so one of them we could use before the smash and the other could be used after the smash.

Chris Morley: It was serendipity. I love that stuff. It's happy accidents. One rat happened to be super gnarly. That's the one after it regenerates, because it just looks a little fucked up. That's the kind of magical thing that happens in our favor when the film gods are on our side.

Mark Dubeau: He was a heavy smoker and had a drinking problem...

b&a: Let's talk about the stop-motion shoot. What can you say about how that was approached?

Webster Colcord (stop-motion animator, Tippett Studio): I think what's interesting was that it was a very linear process, because you had to shoot the live action first for where the rat would end up. And then Gibby animated backwards, and then I animated backwards from where Gibby landed with the smashed rat to show the still barely alive smashed rat, and then we went even further backwards to shoot a little bit of live-action puppeteering.

Chris Morley: In comp we did a split line for the smash. It went from the real rat right here, then smash, with the split line in comp to cover.
Then when it lifted back up, it was the stop-motion rat... well, it actually wasn't the stop-motion rat. It was the same model that we were going to use for stop-motion, but we did a little bit of rod puppeteering, where Webster and Melissa came in and we shot that one live action. We were shooting with the R5 camera, so we were able to shoot 8K live action with the same aspect ratio as the stop-motion frames. It was super cool to be able to shoot both movie and single frames from the exact same camera. That allowed us to do all the little tricks, like the little twitching with the rods.

Webster Colcord: The thing to remember is that there was no going back, because Gibby and JD made the beautiful pristine rat puppet, and then Gibby reverse animated, smashed it with all kinds of stuff going on in between, and then I shot a little bit and then cut it apart to make it loose enough so we could rod puppet it. Really, it was a backwards linear production flow. Totally unlike our CG projects, where we bounce around and do different stages and go back and change things. We couldn't here. This was linear, practical.

b&a: And Webster, just on that live-action puppeteering, what kind of access did you have to the casing it was in?

Webster Colcord: There was very limited access. Luckily, Melissa is tiny.

Melissa Claire (puppeteer, Tippett Studio): I have a very small arm that I could just jam in there.

Webster Colcord: So, Melissa was doing the head of the puppet and I was just doing the breathing. What was cool too was that Chris comp'd in the fluid. Meanwhile, Ken was playing the Weyland-Yutani mad scientist.

Melissa Claire: Ken's hand might be in the movie, right?

Chris Morley: Ken's hand is definitely in the movie, and we enlisted Ken to do the injection on the rat.
In that shot where you see the gloved hand coming in and injecting the black fluid into the rat, that's a combo of Webster, Melissa and Ken all at the same time in a live-action shoot.

Ken Rogerson (visual effects editor, Tippett Studio): And just to be clear, the access in that scene was very restricted, because I'm leaning over Webster, who's half hanging out of the set, and Melissa has climbed down underneath the set and is reaching up blind, puppeteering from below. And Chris was over by the camera, so that's all four people in about three square feet of space, I think.

Chris Morley: I was watching out for their feet: "Do not kick the tripod." The camera had been set up for two weeks!

Ken Rogerson: At that point, we were super committed. We had to follow through on that inverse pipeline where, when you got to the end of one process, that became the first frame of the next process, down to this last shoot, the live-action injection.

Chris Morley: It was quite an editorial undertaking, because of the way we had to shoot it backwards and forwards and split it up and everything, so it was really great for Ken to just bring it into editorial and figure it out. This was planned and it all worked, which was great, and then it just required a little bit of compositing. I love doing compositing as well and was able to just blend the stuff together and make sure all the trickery was working.

Webster Colcord: I think we should have Gibby talk about the sheer nerve it takes to not animate something smoothly. Gibby's style has become the studio style, and his animation is beautiful and flows with great arcs and overshoots, but this was meant to be time lapse.

Tom Gibbons: It was antithetical to the way that you would normally do stop-motion, and the part that I was trying to capture that was the hardest mental leap was that if you watch animals decay over time in a time lapse, they don't just deflate like a balloon or something. The air doesn't go.
They have the tendency to collapse, and then all of a sudden parts of them will start to grow back out. That's what it looks like, because other parts are deflating quicker. It's like a bad soufflé, when a bad soufflé just drops.

So, there was the idea of trying to take this rat down in ways that would... because, again, I had to work backwards. It would've worked way better mentally if I'd been able to animate from the smash up into a real rat, because your brain just wants to work forward in time. It doesn't really want to work backwards. So trying to get that weird stuttery step look, but then also do it in reverse, was a complete mind fuck.

Webster Colcord: One thing I used on the shot where the rat's just breathing, because it's so delicate, was just a brush to move the bits of fur for some frames, because it was so touchy and subtle.

Chris Morley: A lot of times when we do stop-motion, we also do a little bit of previs. How much previs did we do, Gibby?

Tom Gibbons: The one advantage I had, which I thought would help much more than it actually did when I started shooting, was that I took one of our old CG rats and worked with our rigger to give me the freedom to scale it down in weird ways and do an animation from a crush up into a real rat, so I could try to get the dance of that right, and that helped a lot.

I did that for a week or so before I actually did the real stop-motion, just so that we could see the shape. That way, when JD and I were making it, we could try to understand that we wanted the knee joints and the hips and the pelvis to be able to come out of the skin, that is, to poke sharp angles up. Again, it was a lot of great ideas, and in the end it was just close your eyes, start stabbing it and hope that it looks good.

Lisa Cooke: I just wanted to add that the rat puppet was one of the most laborious puppets we've built. From what we were given to what the team created, it was a very exacting process, and they did a beautiful job.
I was saddened that we had to smash it because it looked so good.

Ken Rogerson: When we look at the live rat in the split frame, where we matched the stop-motion rat up to the last frame or first frame of the live rat, you could barely tell a difference. Chris did a little bit of comp work to blend that, but it almost didn't need it. Really, the stop-motion rat stands up and becomes the live rat. It's pretty amazing.

Chris Morley: The thing that allowed us to really capture that feeling of the broken soufflé that Gibby was talking about was the fact that it was the faux pelt, and on the underside was basically tinfoil, so we could poke sticks in and have sharp corners, and then poke sticks to soften it up. It allowed Gibby and Webster to get the right pose every time. The tinfoil under the body allowed us to do that. A huge part of that rat build was getting that tinfoil. And I remember JD would have the pelt out and it looked like a bear rug.

John 'JD' Daniel: There was only so much extra pelt that I had. I had the one main pelt and some extra. Once I made cuts, there was no going back. This was the prop. I would check all of the rat. I had Gibby sculpt rat shapes so we could confirm that this would fold into this, and then, okay, here we go, gluing it, and then you're done. So there were no multiple rats. There was one rat that we built.

Tom Gibbons: A couple of things I would mention that were a surprise to me, but are so obvious now: in order to get the live-action and the stop-motion characters to match, I eventually sculpted a fully clay rat, and I used this rat skull for the head because it was the right proportion, and it was probably the only real thing that was a given one-to-one. We knew that the rat skull fit inside a real rat head, and the rest of it was just made up. So we did a rat out of clay that we used to then take the pelt and shape it across the top, and the first time we did that it looked great.
But then I realized that was like stretching a balloon skin across something, which didn't work, because rats are little bags of bones inside little rugs, so there's all sorts of skin and fur that you just never see. You can take a rat and flatten it out and you're like, "Holy cow, that's a lot of skin," that you never see because they're always balled up into these cute little snowballs. So, we winged out the arms and the legs so we could get all the skin in there, so that when he did start to collapse, the body wouldn't just pull the legs in with it, because there was extra skin.

We had only one almost-full pelt left over from Alec Gillis. I used one full pelt in the shoot, then as the rat collapsed and I realized I needed a rib cage to come out, or a pelvis, I'd go cut new pelt and blend it into the rat as it was going down in order to get the shape that I wanted, because there wasn't enough granularity in the foil or the wire to get those shapes. So, I was just sculpting our rat down and then building another rat on top of that rat with just the one leftover pelt.

The one thing that we built that we didn't get to put in the movie was that we were going to pop an eye out onto the cheek. I made a little glass eyeball, put an optic nerve on it and built this whole armature and miniature clockwork inside the head so that I could pop the eye out onto the cheek and then reel it back into the head. But we never got to use it.

Chris Morley: We did bulge some eyes digitally.

Tom Gibbons: But no eye pop. I thought it was going to be in there, but it didn't make the final cut.

Webster Colcord: I came onto the stage after Gibby had shot the big shot with the rat reforming backwards, and you could see what an artsy-craftsy project it was, because it was just like a big children's art studio. It was so much different than the usual thing we animate, like a big robot with ball-and-socket joints and all that.
Instead, it was just sculpting in front of the camera.

b&a: What was it like working with Fede?

Ken Rogerson: That relationship was really great on this show. Sometimes we're fairly distant from the director and working through a lot of supervisors, but he really took a lot of interest in what we were doing. He gave really great direct feedback and information about what he was looking for.

Chris Morley: He was super positive and loved what we were doing. He actually asked if we could shoot a few more inserts. We were all game for that and shot some more inserts for him that I believe he took to another company. In the last rat shot in Romulus, you can see the after-effect of the black fluid. It starts bleeding and then you see its back tear open. That was taking some footage that we shot, and then I believe another company did the digital gore for that. It was really cool that he trusted us. He knew from the work that we did that we were fully capable of hooking him up with what he needed, and he was just really thankful for what we did.

I try to be a huge proponent of using stop-motion as a viable means in the modern visual effects process, and we are fortunate enough at Tippett to have such a rich history that we have a lot of pretty exclusive companies coming to us for stop-motion work, even though we're predominantly a digital effects house. We have the stage, we have the knowledge, we have the stop-motion animators, and it takes filmmakers that are willing to take that gamble of not having 100 takes of something and just embracing the realism, the tactile nature and the craftsmanship of stop-motion animation. Fede's definitely one of those. Rian Johnson's one of those. Jon Favreau and all the people on his team, Doug Chiang, John Knoll, they're super champions for the stop-motion process.

The thing that makes Tippett a little more interesting is that, yes, we love stop-motion, but we're not purists at all.
If there's any digital stuff we can do to help augment the stop-motion to be even more of a viable visual effects process in this modern era, we have no problem doing that. We actually love doing it: adding motion blur, re-timing, mapping textures onto stop-motion elements to help them blend. We do it all, and it's really wonderful because it's photographic, it's moved by humans. Even if you re-time, it has that foundation of craftsmanship in there, and we're just very fortunate to be part of that process.

b&a: Ken, I'm curious whether you got to keep the yellow lab coat or whether you had to send it back?

Ken Rogerson: Oh, it was made out of literally just latex rubber, and if you wear that for five minutes, you never want to see it again.

Chris Morley: There's some Ken DNA in it for sure.

Ken Rogerson: It was heavily corn-starched, but still, it was designed to look good on camera, not wear well.

Chris Morley: You wore it well, you looked good in it.

Lisa Cooke: Everything was boxed up and went back to Hungary, unfortunately.

b&a: Any final observations about this project?

Lisa Cooke: Disney takes their animal welfare very, very seriously. Kudos to them. And, because of that, this project had a happy ending: Grayson and Sneaky were saved from a feedlot, and they have both found forever homes.

Chris Morley: Not in a snake belly.

The post How to animate a rat being smashed, and do it backwards appeared first on befores & afters.
  • How the Celestial Island scene in Captain America: Brave New World was re-worked with previs and VFX
Here's how the VFX team helped design the final sequence. Plus, how they made Harrison Ford's Hulk, and a character that audiences might not realize involved plenty of digital effects work.

When Captain America: Brave New World production visual effects supervisor Alessandro Ongaro came on board the Julius Onah film (taking over duties from fellow production visual effects supervisor Bill Westenhofer, who moved on to work on the live-action Moana), one of his major tasks was helping to shape the Celestial Island sequence.

This sequence ultimately saw Sam Wilson (Anthony Mackie) and Joaquin Torres (Danny Ramirez) intercept two mind-controlled American pilots who attack the Japanese fleet near Celestial Island, which has been revealed as a source of the valuable metal adamantium.

As the director has recently explained, the Celestial Island scenes were re-imagined from being on the island to the confrontation now showcased around it. Furthermore, an aerial dogfight originally featuring several nations was condensed down to just the US and Japan.

"We decided to simplify it," reveals Ongaro, "because you always want to favor the story, making sure that the audience enjoys the scene. For the sequence, we started over with the Digital Domain previs team. Cameron Ward was our supervisor, and we basically plotted the scene as beats. So, we needed to have some key moments; one of those was that we knew that Joaquin had to be hit at some point."

Captain America/Sam Wilson (Anthony Mackie) in Marvel Studios' CAPTAIN AMERICA. Photo courtesy of Marvel Studios. © 2025 MARVEL. All Rights Reserved.

"We weren't sure if there was going to be an accident, say, if he was shot down by accident by a Japanese plane that was catching fire," continues Ongaro. "So there was some exploration there. And then on top of that, we have Celestial Island, which is massive. We actually took the asset from The Eternals and just scaled it down a little."
"Otherwise, it would be so massive that we wouldn't be able to cover the whole section. It was still huge, but it was scaled down. We used that in our favor to try to give a sense of the geography of where everyone was in relation to each other."

One aspect of the revised sequence that Ongaro had to consider was fitting it into the political-thriller style of cinematography that pervades the rest of the film. "Julius liked locked-off shots and a lot of negative space. The original version of the scene was shot in the same way, but it's an action scene. It wasn't working as well. It was hard to keep the energy up, the intensity."

Captain America/Sam Wilson (Anthony Mackie) on the set of Marvel Studios' CAPTAIN AMERICA. Photo by Eli Ade. © 2025 MARVEL. All Rights Reserved.

"So," relates Ongaro, "one of the first things I did was talk to Julius and say, 'I understand your style and I'm going to try to marry your style, but we need to move that camera.' We needed to make the shots more dynamic, but as realistic as possible. As in, if we were shooting the aerial sequence for real, how would they shoot it? It would be from another plane, or from the ground with a long lens. So, we tried to stay as grounded as possible. I mean, there are two humans flying. But the speed was correct; for example, the plane was flying around 500 or 600 miles per hour, which meant that Joaquin and Sam were flying fast."

Digital Domain (visual effects supervisor Hanzhi Tang) was also responsible for the final visual effects work for the Celestial Island confrontation. For plates, principal photography included scenes on the warships, cockpit shots and face close-ups of Mackie and Ramirez. "We had previs that was really close to the final," notes Ongaro, "and I really wanted to capture as many face plates as possible so that we didn't have to go full CG all the time. We blocked the scene out based on lighting direction."
"So, instead of putting them on wires (because they hated to be on wires and they couldn't move on wires, they had a very hard time), I just had them standing looking up. We figured out where to place the camera, where to place the light, and we tried to match the action that we had as much as we could. Things then always change a little bit during production, which meant there were some shots where we had to go digital for the faces. Everything else (the clouds, the ocean) was fully CG."

Captain America/Sam Wilson (Anthony Mackie) in Marvel Studios' CAPTAIN AMERICA. Photo courtesy of Marvel Studios. © 2025 MARVEL. All Rights Reserved.

The VFX studio developed a new cloud tool to aid in creating the environment in which most of the battle takes place. From a Digital Domain press release: "Digital Domain's team created a new Cloud Shader, a task that took nearly three months to perfect and enabled them to produce stylized clouds that matched the look and feel of the film. For this project, they also simulated clouds to achieve a cotton-like look and feel, layering up 4 to 6 clouds at a time to build depth and establish the appropriate distance."

For the moment Torres crashes into the ocean, Digital Domain needed to both previs and then execute in final VFX a heavy emotional beat, as Ongaro describes. "It is emotional, and it's also a moment for Sam, where he realizes, later, he says, 'I should have taken the serum.' He's blaming himself for what happens. What we tried to do was make it Joaquin's inexperience that causes the accident. He wants to impress Sam, and so he goes after the second missile that he missed, but he gets too close. I'm really proud of how that whole scene turned out, in the end."

From Harrison to Hulk

At a press conference in the White House Rose Garden, US president Thaddeus Ross (Harrison Ford) transforms into a red Hulk, the result of him taking gamma-radiation-infected pills orchestrated by Samuel Sterns (Tim Blake Nelson).
Early visual development for this red Hulk carried over some of Ford's facial traits, with the final CG creature (crafted by Wētā FX) exhibiting even more familiar features of the actor.

Red Hulk/President Thaddeus Ross (Harrison Ford) in Marvel Studios' CAPTAIN AMERICA: BRAVE NEW WORLD. Photo courtesy of Marvel Studios. © 2024 MARVEL.

"There are parts of the Hulk's face that are literally Harrison Ford," details Ongaro. "Their eyes are one-to-one. The lips were all modeled after we scanned Ford's teeth as well. There was some adjustment on the nose, to bring it up a little to create a larger area under the nose."

"I think he's one of the most complex characters that Wētā FX has ever done," continues Ongaro. "It's a brand new character rig with muscles that are firing automatically, based on the motion of the body. The asset itself is really, really sophisticated." Visual effects supervisor Dan Cox and animation supervisor Sidney Kombo-Kintombo at Wētā FX were responsible for the phenomenal previs, postvis and animation of the red Hulk.

Red Hulk/President Thaddeus Ross (Harrison Ford) in Marvel Studios' CAPTAIN AMERICA: BRAVE NEW WORLD. Photo courtesy of Marvel Studios. © 2025 MARVEL. All Rights Reserved.

To start the process for this CG Hulk, Wētā FX relied on capture of Ford wearing a head-mounted camera for his facial performance. Body motion capture was acquired on set with a stunt double, who also wore an extended head-piece for eyelines. The visual effects studio then largely carried out fight and other action beats at its own facility in Wellington. "Of course," says Ongaro, "so much of the Hulk was keyframe animation to get the weight correct."

For the break-out of Ross into the red Hulk, visual effects artists did look to reference from An American Werewolf in Paris to show the hands and feet bulging and transforming. "Originally," advises Ongaro, "we wanted to really go down that path to make it a little more disgusting."
"You could see the skin tearing and breaking on his hand, bones popping off, but we just condensed it a little bit."

(L-R): Captain America/Sam Wilson (Anthony Mackie) and Red Hulk/President Thaddeus Ross (Harrison Ford) in Marvel Studios' CAPTAIN AMERICA: BRAVE NEW WORLD. Photo courtesy of Marvel Studios. © 2025 MARVEL. All Rights Reserved.

After Wilson and Ross, as the red Hulk, battle it out at Hains Point, Ross transitions back into a ragged human. Originally, this moment was supposed to happen in front of camera, but it was ultimately hinted at more as a shadow as the transformation occurs. "We actually had a fully built asset that could morph from Harrison to the red Hulk," states Ongaro. "But we didn't really want to make it too in-your-face. For Julius, it was very important to keep that moment very peaceful at the end of everything. He thought that if we were doing a transformation in-camera, it would've broken that sense of peacefulness. So that's why we just played off his shadow."

The red Hulk's breakout sees him tear apart sections of the White House and confront two drone helicopters. "It's the White House, so we couldn't completely destroy it, so we had to decide which part to destroy," notes Ongaro. "Also, if you look closely, there are always people running away. We wanted to make sure that we were not killing anybody. In fact, originally, there were supposed to be two Black Hawk helicopters with pilots, but, again, we didn't want to kill anybody. So, they are drones."

Captain America/Sam Wilson (Anthony Mackie) in Marvel Studios' CAPTAIN AMERICA: BRAVE NEW WORLD. Photo courtesy of Marvel Studios. © 2024 MARVEL.

Invisible effects on Sterns

Tim Blake Nelson's ultra-intelligent Samuel Sterns is revealed in the film to have part of his brain protruding from his head, the result of his exposure to Bruce Banner's blood.
While some prosthetic make-up effects were employed on set, the final look was an all-CG approach achieved by Luma Pictures.

"We tried to stay true to the comics," remarks Ongaro. "There, it's the left side of his face that gets infected. That's why we created more bulges on that side. The eyes have a kind of cataract. That was actually my idea, where I said, 'We should add that little detail so it looks like one side of the face is more messed up than the other.' We replaced everything including the hair, and we touched his face as well to add digital make-up. I'm very happy with how it turned out, because we really tried to make it as grounded as possible. It's another cool character."

The post How the Celestial Island scene in Captain America: Brave New World was re-worked with previs and VFX appeared first on befores & afters.
  • The cross-over between production design and visual effects
How it worked on Captain America: Brave New World, with production designer Ramsey Avery.

Here at befores & afters, we don't often get to discuss how films are made with the production designer. But we were given the opportunity to interview production designer Ramsey Avery about his role on director Julius Onah's Captain America: Brave New World for an insight into his art department on the film.

In this interview, we discuss coming up with a certain look for the film, the cross-over between production design and visual effects, designing sets (including making scale foamcore models as part of set development), and the idea of dressing sets before and after destruction.

b&a: I'm always interested in the first conversations you may have had about Brave New World and the look of the film. What did you discuss with Julius, the director?

Ramsey Avery: When Julius took on the project, I think he pitched it specifically as the version he wanted to do of this story, which was to take it into the world of '70s political thrillers. Marvel, in particular, wanted to find a way to bring a sense of groundedness back into the MCU. And this particular story was also really embedded in a sense of a real person struggling through a real issue; Sam is not a super soldier. He has to figure out how to be a superhero without having that serum and that special strength. And so Julius thought that it needed to be grounded in a much more visceral world. He went back to the '70s political thrillers, movies like The Day of the Jackal, Le Samouraï, The Parallax View. These had a very specific point of view in terms of the way that they were shot and designed.

Color key frames.

Color script.

I mean, Le Samouraï is very stylized, but the other movies are much more naturalistic. But even in that case, he also looked at some movies like Trance and The Killing of a Sacred Deer, which are also slightly stylized, but are definitely within a sense of reality.
And what was important about movies like that is that they are trying to get at a sense of control: who is controlling the story and the narrative within the action of those movies? And so you look at those older movies, you look at the more contemporary movies, and all of them have a real solid stance, in that, even though they're realistic in many cases, the design is very specific.

Julius described it as "meticulous design," and by that he meant choosing very specific locations, designing very specific sets, where you could frame the characters within a specific lighting, with a specific color, and with a very strong point of view, to suggest either that somebody was watching, or that reflection was happening in some place that told you something about how the character was considering their place in the world.

Director Julius Onah (left) with production designer Ramsey Avery.

Or, just simply that, in the storytelling, it had a very graphic clarity that we wanted to convey. That was the basis of the discussion: that sense of, "How do we place Sam within his overall story arc as this real person without superhuman abilities, trying to figure out how he can be Captain America, while a whole lot of political intrigue is actually going on as part of this story that he has to contend with?"

b&a: That's fascinating. What ended up being, say, one example of a particular location or set piece where you were able to fulfill that vision? I mean, apart from the whole film, but one particular example?

Ramsey Avery: Well, that's interesting. I think there are a couple of places where that kind of thing pops out. Yes, it is all of them, because that's something that we had to work through with the DP (Kramer Morgenthau) in every case: "Can we get the shots that we're looking for in this location?"
In some places, when you're dealing with the White House, it's the White House, so you can't change the idea of the White House, but you can say, "Here's the Rose Garden and here are the lines of the Rose Garden. So, how are we going to set up the way that we film the Rose Garden to get those strong lines of that shape, to help us focus on the characters where we want to focus on the characters? What are the angles we pick?"

Since we had to build that set (because we weren't going to really shoot it at the Rose Garden), we had the opportunity to pick how we were going to emphasize that meticulous design and focus the camera on our characters where we wanted to focus. Some of that had to do with how we changed the plantings in the Rose Garden, and the shape of the hedge work, and where we put colorful trees, and where we took out colorful trees. All of that was about getting to something: "Here's a real-world thing, but we still have to shift it and adjust it to make it cinematic in our movie."

b&a: Tell me about how a set piece like the Rose Garden actually emerges. Does it become a conversation first and then concept art, or do you dive straight into some sort of 3D SketchUp-type thing? Tell me about the evolution of the Rose Garden set.

Ramsey Avery: Well, the first thing is research. We try to get as much research as we possibly can, and contemporary research of anything at the White House is very hard to come by. They don't have a lot of photography, but you can look at press conferences, and there is some candid photography of various presidents in the Rose Garden, so you can pull some of that to find references.
I was lucky enough to work on White House Down, which actually had been done before they started closing off a lot of the access to photos in the White House, so I did actually have some pictures from that that helped inform this.

So, it starts with the research, to try to figure out what parts and pieces, again, are important to the storytelling. And then, from that, we do a model. We do a digital model, and we work out the general shape language. We ask ourselves, "Are we going to make it as big as the real Rose Garden?" And, working with the model, we decided that we wanted to make it the exact same size as the real thing. The same number of columns, same general layout as the real one. It just felt like that's the space we needed.

Of course, sometimes there's budget saying, "You should make things smaller," and there's also the whole conversation of, "What's going to be visual effects and what's going to be real?" In the digital model, you can start to suss out what those components are. It lets you work out, "Do you build up to just one part of the corner? Do you build the frieze above it?" You can start to sort all of that out in the digital model. But that only gets you so far. Even today, after we've been using digital tools for a long time, people don't necessarily know how to really look at a digital model and really see it.

So, after we took the digital model and we approved that, then we went through a set design process, where we drafted it all out. And from that, we built a physical model in quarter-inch scale (it might've been eighth-inch) so that we had this physical model that everybody could stand around and say, "This person moves here, and this person moves here, and we can put the camera here, and then we need a big light over here, and here's where the bluescreen needs to go, so that we can make sure that we can extend the sky beyond it."

And while we're looking at that physical model, we ask, "What shots do we do that keep it as much in-camera as possible?"
Because we don't want to make it visual effects unless we have to. Clearly, when big destruction is happening, there are some things we can destroy, but destroying the whole thing is something that's probably not going to be practical, because we probably want to have a take two, right? So if you destroy the whole thing, it's really hard to go back to take two.

So, there are some elements that we want to do in visual effects, but generally, we want to use that practical model to say, "Okay, we will build to here and we'll build to here. And if we keep the camera in this area, then we keep it in camera, and we don't have to do visual effects." It looks more real, and it doesn't spend the money in visual effects. And where's the balance of what's real versus what's visual effects, to make the most sense out of everybody's time, money, and schedule?

b&a: You mentioned the interaction there with visual effects, and I'm curious about those conversations you may have had, perhaps it was with Bill Westenhofer at that point, before Alessandro Ongaro came in.

Ramsey Avery: Well, again, it came down to, "What size should we build it?" and we decided, partly to minimize the visual effects, that we should build it at the real size, because a movie has a certain amount of budget, and whether you spend the money in visual effects or whether you spend the money in practical, you're still spending the money, so where's the wisest place to spend it? How many visual effects shots are you going to drive if you bring the set too low or you build the set too short?

So that conversation happens with Bill and with the producer, Yasamin Ismaili, to make sure that we're maximizing the right spend in the right places, and also to just make it feel more real, because visual effects can be great, but you still can kind of read real versus visual effects, even really great visual effects. So, the more you can make real, the better off everybody is.
It just takes less time, if nothing else.

We have those specific discussions with the DP, and with Julius, saying, "If we do X, Y, and Z, then we know we can keep it in-camera." But when we know we have to go wide, because now we're going to see the full White House behind us in these shots, looking down the opposite direction, then what are the cut lines? How do we, as a production designer in an art department, figure out where the right places are to put cut lines into the sets, so we make that transition as straightforward and believable as we can?

So, we looked into, "Where do the trees go? How dense do the trees need to be?" We don't want really lacy trees, because that makes it very hard to extend past them. We want to use platforming to put the press on, which gives us a very solid line back behind there to do that. We had those conversations about how we could make the hand-off as practical and as straightforward as we could.

b&a: I'm also curious about any interaction you end up having with previs. I always talk about previs with the visual effects supervisor, but I often don't necessarily feel like there's always heavy involvement from the other departments. Perhaps you did on this one?

Ramsey Avery: Well, in general, it's super important to have the art department involved in the previs. Generally speaking, we'd prefer to have the art department design the environments that the previs is being set in, so that we know ahead of time that that's stuff that we can build. I've been on projects where previs just goes off and does stuff, and we're like, "Wait, wait, wait a minute. We can't do that," for any number of reasons. So it's always best to have previs work with designs that the art department creates. That's the launching point, based on the discussions that we've already had with the director and the DP about what they want to have happen in the scene.

Ramsey Avery.

As far as previs goes, we have meetings along the way to talk about, "How are we going to do that?"
All right, so the Hulk is going to rip out a column from the back wall. How do we do that? Who's doing what in that case? There we'd have a discussion with special effects about, "What can they do, what can the set support, what does visual effects need to take over?" We basically work with previs step by step, shot by shot, to say, "What do we have to do?"

One thing to note with destruction and with practical special effects: there's always a before and an aftermath. So in one shot you've built, you work out what the before is, and then there's whatever stunt work and special effects rig has to be there to collapse something. And then, once that stunt and that special effects work is done, then you dress in the aftermath. So as you work through the previs, you're looking for those beats, which then speaks directly to how the director, the DP and the first AD are going to schedule the shoot. And sometimes, you have to do things backwards. You want to do the aftermath first, because it's easier to dress all that mess in, and clean it out.

That's what we did with Hains Point and the cherry blossoms. All the destruction that happens in that, we actually built first, and we built it on top of the undestroyed set. So we took the time to sculpt all the destroyed stuff in, quickly pulled that out, revealed the undestroyed set underneath, fluffed that up, made that pretty, brought a few undestroyed trees in, and moved forward. So that's all done, again, by looking at previs or storyboards. We don't always need previs; it's just working out scene by scene with everybody that needs to have a say in the matter to figure out what we do when. It becomes a very logistical process. It also is a design process, because sometimes you'll look at a previs and you go, "That's just not working for the story. How do we adjust the design to get the story to work?"

b&a: You mentioned the confrontation there between Sam and Red Hulk, and the cherry blossoms around that are just beautiful.
Can you talk a bit more about that, including the before and the after?

Ramsey Avery: Well, clearly we're not going to go to Washington, D.C. and shoot the real cherry trees, for all kinds of reasons. Logistically, we were not going to be there at the right time of the year when the cherry blossoms are in bloom, and we're not going to take the place over from the tourists. That's the time they get to be there. So we knew we had to build it and we had to control it. We're not going to grow cherry trees and try to get real cherry trees at the right time with blossoms. We have to build them. And again, we sat down and worked through it. In this case, we had some rough storyboards, and we used the rough storyboards to figure out what kind of trees we needed and where, to help tell the story.

We laid out a basic footprint using about 24 trees. And then, I spent days sorting through the storyboards to work out how we could use that group of trees to represent each bit of the fight, because we have to use part of the fight here, then part of the fight is here, and then part of the fight is here. So how do we adjust cars, and street lamps, and trees, and what's broken and what's not broken during the fight, and how does that all get worked out? All of that becomes a process of discussing with stunts, and special effects, and visual effects, and the DP, and the first AD, and the director about how to get all of that pulled together.

We basically did, again, a digital model, and then I did the 2D planning of how we make all the stuff make sense. Then, we reviewed all of that in conjunction with the previs, which was being developed simultaneously. And then, we built a model of it. But to actually do the destruction, because that whole destruction came later in the decision-making process, we didn't have time to really do that whole process over again.
So what we did was a whole bunch of research into destroyed roads and how they look in an earthquake or a landslide, and then how an impact crater works. Just all the types of things that might be necessary, visually, for the storytelling.

And then, I worked with a sculptor, a model-making sculptor. I did a real rough sketch for him and I said, "I think this is where the lines need to be," to draw the focus in the shape language, so we know the camera is always pointing where we want. The line work in the set is pointing to where we want it to point. And I worked with the sculptor to develop all of that, and then basically, that maquette became the guide that the sculptors built at real scale.

On a soundstage, we built all of that at scale, and walked the director and the DP through it at a couple of points, and agreed that that was what it was, and finished it on the stage. And then we were shooting the Rose Garden in the same place we had to shoot the cherry tree set, so we had a very quick turnaround. We were building the trees in one place and the practical broken road someplace else, and then we struck out all of the Rose Garden, brought the trees in, and brought the destroyed sets in. We paved the roads, so we had a real road. We put the real grass mats underneath all that, and then put the mess on top of it, shot all of that, pulled everything away, put the trees back in, and then shot the fight. That all happened in the same space.

b&a: Ramsey, my readers will be obsessed, I think, with knowing what the scale model maquette is made out of, or what materials you tend to use for that sort of thing. Is it foamcore, for example?

Ramsey Avery: It's just foam or foamcore. I mean, for doing the sculpting work, you're using a gray foam or a yellow foam, a dense foam that gives you some chance to do some detailing in it. You go to the model train store, and you buy the trees, and you flock them.
You flock your trees based on the scale that you've worked out in the digital model, about how big and how wide the trees need to be. You do the flocking, and the grass, and all of that.

b&a: Ramsey, these films are so big, and you're so heavily involved in pre-production and then obviously production. I'm always curious if you get a chance to liaise directly with the visual effects studios when they're doing their CG builds, because of course, in some ways, that mirrors your work completely. But timing-wise, I'm not sure you're always able to do that. Was that possible here at all?

Ramsey Avery: Well, in this case, and honestly, in most cases, it just simply isn't possible. As a designer, you're onto your next project by the time all of that's underway. Also, to keep you on to provide input, under the union rules, they have to pay you for that time. So the studios want to kind of minimize that input. The issue then becomes how much you can have those discussions while we're all together, and build the framework ahead of time, so that everybody agrees that this is what you're doing.

Concept art for the battle around Celestial Island.

I'll usually get some questions along the way. Somebody will send some images to me or they'll say, "Did you mean this?" Or, sometimes I'll come in and I'll take a look at a collage. It depends on where I am in the world. Like, if I'm in India and they're doing the work in LA, they're not going to send me something online to look at on my computer screen. It doesn't make a lot of sense, in terms of an animation. So generally speaking, what we all do in the world of production design and visual effects is try to sort all of that out as cleanly and as completely as we can, get as much illustration work done, get as much modeling done, and then hope that it all works out.

I'll say that there was a huge backstory in the film to the Celestial Island, in terms of what adamantium is and what happens in there.
We did a lot of work, working all of that out. And then, in the movie, because of various other reasons that happened in the storytelling, it was not in there. So sometimes, you do a lot of work and everybody agrees, but then something changes. And that's just the nature of the creative process. It just happens.

Concept art for the battle around Celestial Island.

b&a: I wonder if you'd like to talk about another set piece or location that you were particularly fond of, or that was particularly tricky, just to break that down as well?

Ramsey Avery: There was a set for Camp Echo One, the prison set. A couple of things about that I was really happy with. In the script, it happened initially in Yellowstone. And I'm like, "Why is it in Yellowstone? That doesn't make any sense to me. And besides, how does Sam get there in the timeframe of the storytelling?" It involved having to have a whole airplane flight and a conversation, and it was a whole bunch of stuff that just didn't help the storytelling. So, I was trying to figure out where else it could be. Part of the thing is that it needed to be in a black spot, right? Someplace where there was not any communication available.

I was doing some research, and I realized that there was this place in West Virginia that was the National Radio Astronomy Observatory. And because of that, it needed to keep the air clean of radio waves. So for 50 years, there have been no radio waves allowed in this town. There are no cell phones, there's no Wi-Fi in this town. And so it's like, "Well, that's a cool place, and that's a great place to hide in plain sight, right?"

I assumed that we would end up having to find some base of the radio telescopes and then do a visual effects extension to make the thing work. But as it turns out, an hour outside of Atlanta, there are some decommissioned radio telescopes in a forest, so it was a great find.
And then, you can go to the director and to Marvel and say, "Look what we found, and this is how it works, and this is why it makes sense." And they're like, "Hey, you are saving us some money, and it also looks great." So that was all a really good surprise, and I really liked the way that that looked. But then, underneath it, we actually started with a much more "Marvel" design for this. But part of what this movie was really trying to be, one of our key words, was "grounded." We ultimately didn't want to do those big Marvel gestures.

So, I designed a set look for all of Camp Echo One that I was really proud of. It had a panopticon kind of center tower, and a great, big, swooping environment, and it looked really cool, and it felt kind of grungy and period, and all of that was great. But as we looked more into it, it's like, "It just feels too grand for the movie we're trying to tell. We want something that feels more real."

The other thing is that we also had a storyline going on where we wanted to tell a story of people being kept in boxes, and as they free themselves, the spaces around them become more open. So Sam starts off in more enclosed spaces, Ross starts off in more enclosed spaces. In that very first scene, he's in that little, tiny backstage area with all this glass, kind of caged in the corner. Everything we did had to support that.

So, instead of designing a prison that had a big, open vault where you could see all the prisoners at once, it felt better to actually put it in small, contained corridors, and figure out how to make that make sense within the action and the story beats that we wanted to have. And then, from that, we could tell a whole story about what was built in the '50s, what was added in the '60s, what happened in the '80s.
We could visually layer up all of that in terms of the architecture and the design of the space, and then also tell the story about how Sterns figured out mind control. We were doing some research, and there was, in the '50s, a lot of exploration in terms of how lighting could control the moods of prisoners.

We took the idea that there were these lighting controls within the spaces that were designed specifically to calm and control the prisoners. And Sterns figured that out, and then he went through a whole process of figuring out how to adapt that and turn it into literal mind control. And in the one room that Joaquin goes into, you can see each step of the way has been done in dressing, from the big lights down to smaller and smaller, more miniaturized versions.

So all of that storytelling is thematic. It tells why it's a horrible place for Isaiah to be. It's also why it's a horrible place for Sterns to be. It fits within the tones and the themes of the movie that we want to tell: color and shape language, boxes and not-boxes. And I just felt like that's one of those places where you can kind of consolidate something that's sort of this really big, grand, epic, sci-fi idea, and it became much more grounded, and in that case became better.

b&a: Sterns' lab was also fantastically designed. It just felt like I wanted to be in there a bit longer and have a look around at all the crazy devices and whatnot.

Ramsey Avery: Yeah, I mean, for each of those devices, we did a lot of research into psychiatric care in the '50s and '60s, and so everything came from a real thing. And then we added on top of that the stuff that Sterns would've needed to do his further development, and what Ross would've provided to him to do his further development. So again, there's this whole layering of '50s tech with much more contemporary tech, and bioengineering, and that kind of medical studies added on top of it.
So yeah, actually, I mean, our set decorator, Rosemary Brandenburg, is brilliant. She's just brilliant.

And so we had these discussions to begin with, and talked about the specifics of all of that. And then, you get into the discussions like, "Julius wants to have glass and reflections." It meant we could play with that idea of transparency and reflection in the space, so that adds another layer to the idea of it. And then, you have all the stunts, and so you have to figure out what the action can be, and how the design supports the action. What are the props? What's the dressing? What does the stunt guy want to do? You know, he wants a needle, or he wants a tube, or these things that the stunt guy comes up with.

Then, you have to figure out a way to work that into the visual reality of the set that you're putting together. It's all fun. I mean, it's all such a puzzle, and working with everybody there together, that's probably my favorite part of the whole process. It's just all these great, wonderful, talented, smart people that you get to bounce ideas around with all day long.

The post The cross-over between production design and visual effects appeared first on befores & afters.
  • Behind the face replacement VFX and **those** lasers in the craziest scene from Sonic the Hedgehog 3
    beforesandafters.com
befores & afters goes behind the scenes with Rising Sun Pictures.

In Jeff Fowler's Sonic the Hedgehog 3, Jim Carrey plays both the mad scientist Dr. Ivo Robotnik and Ivo's grandfather, Professor Gerald Robotnik. One particularly memorable sequence in which the two characters appear together occurs when they must navigate through a laser grid, something they do in spectacular dance style to the tune of The Chemical Brothers' "Galvanize."

A laser dance with two characters played by the same actor brought with it, of course, a number of visual effects challenges. The first was dealing with seeing both characters on screen at the same time. This twinning work required a combination of Carrey completing A and B plates for Gerald and Ivo in different sets of make-up effects for dialogue moments, as well as the use of body doubles and stunt performers standing in for the actor in the more dynamic dance sections of the laser scene.

Then, Rising Sun Pictures, under visual effects supervisor Christoph Zollinger, who worked with production visual effects supervisor Ged Wright, relied on its proprietary REVIZE™ machine learning and visual effects tools to carry out detailed face replacement work. For the lasers, the visual effects studio undertook the task using effects simulations, creating a dazzling array of red-toned patterns in the room Gerald and Ivo are in, as well as on their bodies.

Two Jim Carreys

The laser dance was filmed primarily with body doubles in Gerald and Ivo make-up effects for the high-energy dance portions of the scene. Carrey performed the key dialogue moments, and the times when the action was more static in nature.
That meant, depending on the shot, Rising Sun Pictures had to take different approaches to face replacement, that is, replacing a double's face with the recognizable face of Carrey.

Director Jeff Fowler and Jim Carrey on the set of Sonic the Hedgehog 3 from Paramount Pictures and Sega.

"Sometimes," shares Rising Sun Pictures CG supervisor Mathew Mackereth, "there was a face replacement on just one of the characters, say if it was a traditional back-of-the-head-type gag. Sometimes it was face replacement on both characters. In nearly all cases, we had face replacement to do the beauty work of putting Jim's face on both Ivo and Gerald. Plus, we also had to do a full 3D matchmove in order to get the suit and the lasers on. Fortunately, the lasers never hit the heads of Ivo and Gerald, so that meant that the difference in topology of, say, Gerald from a stunt double was a problem that we didn't have to deal with."

For the face replacement work itself, Rising Sun Pictures has for several years been implementing a face swapping approach which it now calls REVIZE™. The overall methodology was to train a machine learning model of Carrey as his two alter egos that could then be used to composite on top of the doubles. Immediately, the VFX crew knew this would be tough, since the actor is so recognizable, and had already closely inhabited the part of Ivo in two previous films.

"Jim Carrey is probably one of the harder people that you'll have to do this to," comments Rising Sun Pictures machine learning 2D supervisor Robert Beveridge. "He has such an expressive face with such a big range of motion, and everyone knows what he looks like, how he acts, how he moves."

The process began with capture of Carrey in both of the two different make-up effects designs.
Says Beveridge: "It was a combination between a controlled capture session, which we treat as a foundation layer where we capture a wide gamut of things, and then, because we had him in this environment for a large portion of this dance scene, that's where the majority of our primary material came from. He was playing both sides of this, and in some cases, we had a B plate, which fed the machine learning process really well. So, we got Jim in this environment, but we also had to recreate him for both of these dance doubles in an extremely complex lighting setup."

"Just him playing the B side of himself gave us a really big pool of data to source from and a really great library that we could reference," adds Beveridge. "In the end, we tallied it up and we had 650,000 images that went in across that sequence. That amount of frames, and the curation that the machine learning team had to do to be able to hit Jim's expression range, which is, I'd say, larger than the average human's, was a real challenge."

One of the trickier aspects of the Ivo/Gerald shots involved facial hair. Gerald has a large mustache and bushy eyebrows. "We had to make decisions on how we shot him," details Beveridge, "and whether he had that on in the capture, whether he didn't, whether we were going to replace all of that, or maintain it from the double. The prosthetics for the doubles were not quite as involved as Jim's full prosthetic, which definitely made the blend points where the mustache sat under the nose tough. All of these little intricacies were different on the two of them. So, it became a hybrid of the two, where having that source in the plate from the dance doubles was super valuable. But we did train for mustache and eyebrows. Fortunately, he didn't have any hair on either of his characters, which definitely helped us out there."

The result of Rising Sun Pictures' training model was a full head that then needed to be composited onto the body of the performer.
This required, firstly, detailed matchmoving, something further complicated by the fact that different performers with different head and body shapes stood in for Ivo or Gerald. Earlier, in the training process, this use of various dance doubles and stunt doubles meant there were, as Beveridge describes, "multiple different targets that we had to allow for. For each new target that you introduce, you have to work out the same relationships and how the two and their features are different. You have to make that calculation for however many you might be dealing with."

Compositors spent time narrowing in on the particular features unique to Carrey. "We found, for example," says Rising Sun Pictures compositing supervisor Antony Haberl, "that Jim Carrey had different-sized ears compared to the stuntie. With features like that, if you don't get them right and you don't nail it 100%, you've got an image that doesn't look right and you don't know why. The challenge is being able to narrow in on all the fine detail that makes a person who they are, and recognize what those things are, and grab onto it and not let go until it's right."

Rising Sun Pictures' machine learning VFX toolset REVIZE™ is integrated into Foundry's Nuke. Although the resulting model does not necessarily provide artists with the same traditional render passes, AOVs or alpha sets, the compositing process remained largely the same as any other project. What Rising Sun Pictures is able to do with its machine learning techniques, however, is adjust elements such as eye direction in the machine learning output. Having control over the results, and continuing to allow for performances to be modified, rather than putting things through a machine learning black box, is one of the key goals of Rising Sun Pictures' process. "Ultimately, what we are doing is saying, what makes Jim, Jim?" points out Mackereth. "And, what makes Ivo, Ivo?"
"What makes Gerald, Gerald?"

Also, lasers!

As Ivo and Gerald dance through the room they are in, their suits are hit by, and reflect, a multitude of (mostly) red lasers. For the shoot with the Carrey doubles, a pre-programmed laser show had been devised and filmed. Rising Sun Pictures' role was to take this much further and provide a larger sense of play and interaction with the choreography of both characters.

"Part of the challenge for us was, exactly what do we want to see on screen?" says Mackereth. "With some visual effects that you put together, there might be a nice reference you can look to. In this case, we did struggle to find something that was analogous. We started with concepts, where we drew some lasers and some suits and how they might interact. We had some motion tests of lasers flying around the place. What was a real strong catalyst for us was a clip that we found where there was a dancer who was dancing inside a laser field, and she'd covered herself with little hexagonal mirrors. That was the aha moment, where we realized they were not playing too much with the lasers. They're really just enjoying being in this space and the lasers are working around them."

"Double your VILLAINS, double your FUN!! Here's an early piece of #SonicMovie3 concept art we created to showcase the idea of two Robotniks breaking into G.U.N. headquarters!" pic.twitter.com/shUlDbiSAh, Jeff Fowler (@fowltown), December 27, 2024

Armed with a stronger sense of the look, Rising Sun Pictures then started work on the lasers in the layout department. "Here," explains Mackereth, "we built some tools for previs so that we could fire a laser at matchmoved characters and essentially then get the laser to interact with the character and then fling around the place to reflect, in real-time, in Maya. We had quite a lot of control over, say, if we had an incident laser coming in, then the reflected lasers. We could pattern them, shape them, move them in a space to be visually pleasing."
"We had a lot of control over the reflections in an artificial way."

"A turning point also came when we made the decision that we had to throw away the background that had been shot," adds Mackereth. "They meticulously shot this beautiful hallway with the beautiful choreography, but both the client and us decided that it was no longer suitable. That was a huge creative challenge, but it also freed us to then explore how our lasers would operate in the space."

Then, the FX department at Rising Sun Pictures took that initial choreography and realized the lasers with effects simulations. A particularly tricky aspect of the work was treating the characters' bodies and suits as highly reflective items. "It was all about, how do we show a true reflection of the laser coming off the suits?" details Rising Sun Pictures FX lead Kurt Debens. "What we found really quickly was that it was either not enough or too much. We couldn't track what was going on, essentially. So, it became, how do we make it correct for our shots, and how do we adjust those angles of reflection? How much fanning on the reflected lasers? The overall challenge for effects was making sure it was accurate and well-represented from what layout was producing, but then also having that control to be able to create a picture that was pleasing to look at."

The lasers were designed to emanate from specific points on the wall and ceiling of the space (which was ultimately an entirely synthetic environment). Determining the final look of the lasers also came down to compositing, as Haberl breaks down: "A real laser would be virtually invisible. It's very thin. It comes in, it goes away and it doesn't do much. There was a big exploration of just finding that middle ground of the fantasy of it and the joy of it versus the strict reality of it."

The post Behind the face replacement VFX and **those** lasers in the craziest scene from Sonic the Hedgehog 3 appeared first on befores & afters.
  • Issue #28 of befores & afters is a full issue on Paddington in Peru
    beforesandafters.com
Issue #28 of befores & afters magazine covers Framestore's visual effects for Paddington in Peru.

The issue goes in-depth on the CG bear and other creatures in the film, as well as the shooting methodology, which involved background plates filmed in Colombia and Peru, and principal photography with actors on partial sets and bluescreen in the UK.

It features visual effects supervisor Alexis Wajsbrot, animation director Pablo Grillo, previs supervisor Vincent Aupetit, postvis supervisor Michelle Blok, and real-time technical director Rob Taheij.

Inside, you'll find a wealth of before/after imagery, on-set shots, previs and postvis, and more behind the scenes.

Find issue #28 at your local Amazon store: USA, UK, Canada, Germany, France, Spain, Italy, Australia, Japan, Sweden, Poland, Netherlands.

The post Issue #28 of befores & afters is a full issue on Paddington in Peru appeared first on befores & afters.
  • The visual effects you may not have noticed in Maria
    beforesandafters.com
Behind the invisible environment, crowd and clean-up VFX in the Angelina Jolie film.

Maria visual effects supervisor Juan Cristóbal Hurtado was working away in his office a couple of years ago when he received a text from director Pablo Larraín (with whom he had worked on the film El Conde). "Pablo asked me if I had a European passport," Hurtado recalls. "I said, 'No, should I have one?' Then, it was silence for two weeks. Then Pablo rang again and said, 'I'm doing this film and I have some questions about how to do the audiences in the theaters. Can it be done?'"

What Larraín was asking Hurtado about was filling a range of opera theaters for his biopic on opera singer Maria Callas (played by Angelina Jolie), which largely takes place in 1970s Paris, just before her death. While Hurtado knew that digital crowds could certainly be achieved, he was also aware from past experience with the director that Larraín would likely want to move the camera significantly, and perhaps even go handheld, in the theater shots.

Crafting crowds

Hurtado had relied on visual effects studio PFX for some scenes that involved a moving camera on El Conde. He had also seen their work on the frenetic basketball sequences (many of which were filmed with a camera operator on rollerblades) in Winning Time. "I asked PFX if they could do moving crowd shots for Maria and they were of course up for it," says Hurtado.

These opera scenes would be shot in real opera houses, including Teatro alla Scala in Milan, Italy. That meant there would be limited time to film scenes and crowd plates. "We only had four hours to film at alla Scala," notes Hurtado, "which meant we would not have time to film crowds the old-school way, which is to have extras and shoot tiles and then move them along and shoot more tiles." Not only that, that method only tends to work for static shots, not moving ones.

Instead, Hurtado partnered with PFX visual effects supervisor Jindřich Červenka to conceive of a crowd solution, one that would accommodate filming moving shots on Steadicam, Technocrane and handheld, and in varying formats including 35mm, 16mm, 8mm and digital. It was also an approach that allowed several different opera house venues to be filled with crowds, sometimes upwards of 1,800 individuals, while also matching to real extras that could be filmed, all wearing period clothing, care of costume designer Massimo Cantini Parrini. "The other thing was," adds Hurtado, "in these opera house shots, you can see the faces of the people. A football or soccer stadium is super-big, and if you're doing crowds there you don't always see the detail on the faces. But here we did, plus all the details on their tuxedos and costumes and accessories."

Ultimately, a 3D crowd replication approach was decided upon, where, alongside environment work for the opera houses achieved with the aid of LiDAR scans, lighting reference, texture and photogrammetry, PFX built the crowd assets to populate each theater. 3D crowd assets meant that the camera could move in any direction. This process began with scans of around 80 extras in Budapest using a 125-camera array of Nikon D5300 cameras provided by Budapest-based company DIGIC. ZBrush and Maya were used to clean and process the scan data, with animation of the crowd assets achieved via motion capture data and keyframe animation. A crowd system developed by PFX was built in Houdini and rendered in V-Ray, with final compositing handled in Nuke.

One particular challenge was to figure out the right kind of crowd movement for the opera attendees. "One thing we had to work out was fan waving," says Hurtado. "The extras had fans, and also those very recognizable eyeglasses with the binoculars."
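The 3D crowd replication approach described above, a small library of scanned extras instanced with randomized motion to fill up to 1,800 seats, can be sketched very loosely in code. This is a hypothetical illustration only; the names, clip list, and parameters are assumptions, not PFX's actual Houdini crowd system.

```python
import random

# Hypothetical stand-ins for the scanned assets and mocap clips.
SCANNED_EXTRAS = [f"extra_{i:03d}" for i in range(80)]   # ~80 scanned people
MOCAP_CLIPS = ["seated_idle", "fan_wave", "polite_applause", "lean_whisper"]

def populate_theater(num_seats=1800, seed=7):
    """Assign each seat a randomized crowd agent: a scanned base model,
    a mocap-driven motion clip, a time offset so loops desynchronize,
    and a subtle scale variation so repeats are harder to spot."""
    rng = random.Random(seed)
    crowd = []
    for seat in range(num_seats):
        crowd.append({
            "seat": seat,
            "scan": rng.choice(SCANNED_EXTRAS),     # base geometry/texture
            "clip": rng.choice(MOCAP_CLIPS),        # motion source
            "clip_offset": rng.uniform(0.0, 4.0),   # seconds into the loop
            "scale": rng.uniform(0.97, 1.03),       # subtle size variation
        })
    return crowd

crowd = populate_theater()
```

Because every agent is true 3D geometry rather than a filmed tile, a sketch like this is compatible with a freely moving camera, which was the whole point of the approach.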
Luckily, they are intended to be opera-goers, so they didn't stand and wave and cheer like other crowd spectators.

Making a 1970s Paris

In order to deliver an appropriate period setting (predominantly 1977 Paris), Hurtado worked directly with production designer Guy Hendrix, whose production office walls were full of imagery of the time period, especially cars of the era and everything that made up the color palette of the film. In general, exteriors were filmed mainly in Paris, and in Budapest standing in for Paris. It would be up to visual effects to remove the many modern-day artifacts that existed in the plates, from traffic to buildings, pedestrians and even graffiti, and to add in the period-correct environments with matte painting and set extensions. "It was a matter of figuring out what's modern and what is not modern," observes Hurtado. "Paris is very tricky in that sense because many things that you think might be modern are not so modern."

A scouting phase involved filming several city plates in Paris with only a small visual effects unit. The idea here was to provide imagery for the creation of matte paintings, as well as photo and texture reference, HDRIs and photogrammetry captures of streets and buildings. "I created a document," explains Hurtado, "where I would take pictures of where the camera would shoot in Paris or Budapest, marking out how clean it should be, or I'd do a quick Photoshop of changes or potential set extensions and DMPs. The intention was to not have any big surprises when we got to the main shoot."

A sequence taking place in Place Vendôme in Paris was typical of the environment visual effects work. It involved significant clean-up of modern street poles and parking entrances, as well as re-lighting of the surrounding buildings and a matte painting to augment a part of the background street. "I love doing that stuff," marvels Hurtado. "There was a lot of sitting down and drawing and imagining what it was like in the '70s."

Indeed, invisible clean-up VFX work made up several shots in the film, including even for a yacht. "We shot inside the actual Christina O, the original Aristotle Onassis yacht," discusses Hurtado. "Nowadays, because of security reasons and safety protocols, it has all these fire sprinklers and fire alarms in the ceilings, about every two meters. There's this sequence where they're playing roulette and we had to clean up the ceiling, all through moving hands and smoking. We really were looking to go for every detail that we could."

For scenes that were filmed in Budapest to appear as if they were filmed in Paris, Hurtado mentions that the Hungarian city of course has many European-style buildings and architectural qualities that resemble Paris streets. Sometimes, however, there were more obvious eastern European or Austro-Hungarian influences on the environments, wherein the visual effects team would make augmentations. "Also," notes Hurtado, "it doesn't matter the city you are in, there are many modern things like antennas, wires, modern traffic lights, street poles and bicycle lanes where they paint the lane yellow or red. It was a challenge to replace and erase all those things, especially with actors walking across frame followed by a Steadicam."

An array of VFX challenges

Four VFX studios carried out the work on Maria: PFX, Automatik, Control and Panolab. They were responsible for crafting 390 visual effects shots. An additional challenge they faced was dealing with a range of formats: 35mm (ARRI LT camera), 16mm (ARRI 16, Bolex H16, Aaton LTR Super 16), 8mm (Braun Nizo and Kodak) and digital (ARRI ALEXA 35); and with an array of lenses: primarily Cooke S4 lenses and refurbished Ultra Baltar lenses, an ARRI Ultra Prime 10mm, among others. All this was carried out with the aid of Natalia Blajeroff, the post-production supervisor for the film.

Like any visual effects production, this involved mapping lenses and dealing with various ingests for the different formats. "Some of the 8mm and 16mm film frames were the trickiest to deal with," says Hurtado, "in terms of floating stock and grain. Pablo loves images with texture. If we were working on something that was next to a shot done on 8mm, but had been filmed on 35mm, we would do things to match the 8mm."

Oftentimes, this work on different stocks was done to match key photographs of Maria Callas in flashbacks from the 1940s up to the 1970s. Here, too, visual effects aided in minor beauty enhancement to the practical hair and make-up effects. All the while, Hurtado's mantra was to allow the director to always shoot in a free-form manner, and for the VFX shots to match this. "I always aimed to give him something where he could play freely. He could do whatever he wants, move the camera, and it wouldn't be an issue."

The post The visual effects you may not have noticed in Maria appeared first on befores & afters.
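Hurtado's point about degrading clean 35mm material to sit next to grainier 8mm footage can be sketched as a toy comp operation. This is an illustrative stand-in only, not the show's actual compositing setup; real grain matching typically uses scanned grain plates and per-channel response curves, and the midtone weighting here is an assumption.

```python
import numpy as np

def add_film_grain(frame, strength=0.08, seed=0):
    """Overlay simple luminance-dependent noise on a float image in [0, 1].

    A toy approximation of matching a clean plate to grainier film stock:
    grain reads strongest in the midtones and is suppressed in crushed
    blacks and blown-out whites.
    """
    frame = np.asarray(frame, dtype=float)
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, 1.0, size=frame.shape)
    # weight peaks at 1.0 for mid-gray (0.5) and falls to 0 at the extremes
    midtone_weight = 4.0 * frame * (1.0 - frame)
    return np.clip(frame + strength * midtone_weight * noise, 0.0, 1.0)
```

Raising `strength` pushes the plate toward a heavier 8mm feel; per-channel strengths could approximate the different grain structure of color negative stocks.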
  • Thunderbolts*, The Perfect Storm, live make-up effects session and the history of Nuke, all coming to FMX
    beforesandafters.com
A special preview of the Then & Now sessions coming to FMX.

FMX is happening May 6-9 in Stuttgart, and befores & afters will once again be there hosting the Then & Now track, with some incredible speakers covering the latest films, old-school films, old-school techniques, and software history. Tickets are already available here. Below, check out a preview of the sessions.

VFX supervisor Jake Morrison shares the VFX secrets of Thunderbolts*

Marvel Studios' Thunderbolts*, from director Jake Schreier, tells the story of depressed assassin Yelena Belova alongside the MCU's latest band of misfits. To share how the old-school and new-school visual effects for the film were achieved, production visual effects supervisor Jake Morrison will sit down with befores & afters' Ian Failes for this in-depth discussion. We'll dissect the approach to the VFX, including for specific scenes, revealing behind the scenes of the planning, shoot, and execution of the film's biggest moments. You'll also get a chance to ask Morrison about how he shepherded visual effects teams from around the world to make Thunderbolts*.

Make-up effects artist Begoña Fernández Martín will conduct an in-person make-up effects demo live on stage!

In this special Then & Now session, special make-up effects artist Begoña Fernández Martín will transform a volunteer live on stage at FMX using make-up effects and prosthetics. You'll be able to get up close and personal with this practical make-up effects work, and ask Begoña about her process as she works. befores & afters editor-in-chief Ian Failes will also be on hand to chat to Begoña about her career. Plus, as a special bonus during the week, Begoña will be doing live make-up effects demonstrations at her booth on the FMX floor.

Celebrating 25 years of The Perfect Storm

The visual effects task on Wolfgang Petersen's The Perfect Storm (released in 2000) was monumental: placing real men on a fishing trawler against the fiercest of storms, and even one 100-foot wave. It would require the latest fluid simulation R&D and artistry from Industrial Light & Magic, overseen by visual effects supervisor Stefen Fangmeier. In this retro session, Fangmeier will be joined on stage by befores & afters editor-in-chief Ian Failes to break down the big breakthroughs at the time, share stories from the set, and answer your questions about what had to be solved all the way back then.

Nuke: from its beginnings to today

Jonathan Egstad worked at Digital Domain as an artist and supervisor at the incredible time that Nuke was born. There, he had a keen hand in how it would be designed. After working at other facilities including ImageMovers Digital and DreamWorks Animation, Egstad is now Senior Product Innovation Manager at Foundry, again working on Nuke. With befores & afters editor-in-chief Ian Failes, Egstad will share untold stories from Nuke's beginnings, and answer your questions about the history of the compositing tool. You'll even get to see some old-school Nuke interfaces!

The post Thunderbolts*, The Perfect Storm, live make-up effects session and the history of Nuke, all coming to FMX appeared first on befores & afters.
  • On The Set Pic: Severance
    beforesandafters.com
From that opening episode of s2 of Severance (posted by EP and director Ben Stiller).

"Hey Aaron i took this shot of Adam lining up the Bolt arm where the camera does a 780 around him near the end of the shot. It's a motion control arm that can repeat moves." Ben Stiller (@benstiller.redhour.com) 2025-01-17T21:16:55.598Z

The post On The Set Pic: Severance appeared first on befores & afters.
  • Constantine behind the scenes to revisit
    beforesandafters.com
It was recently the 20th anniversary of Constantine, and Warner Bros. released a whole bunch of behind-the-scenes featurettes from the home video release. Watch below!

The post Constantine behind the scenes to revisit appeared first on befores & afters.
  • The Wicked VFX Notes episode is here!
    beforesandafters.com
Join Ian and Hugo as we discuss the VFX of Wicked.

The latest episode of our 2025 VFX Oscar nominees season of VFX Notes has arrived, and it's on Wicked! Go deep into all the CG creatures, the Defying Gravity flying scene, and the extensive digital environments and extensions, plus all the practical effects work. You can help support VFX Notes at the dedicated Patreon, too.

The post The Wicked VFX Notes episode is here! appeared first on befores & afters.
  • Making a mint
    beforesandafters.com
Behind DNEG's work on Skeleton Crew.

In this interview with DNEG visual effects supervisor Chris McLaughlin, befores & afters learns about the studio's involvement in Skeleton Crew: from the Mint, to helping craft Neel shots, to the desolate surrounds of At Achrann. Includes lots of fun behind-the-scenes befores and afters.

b&a: Tell me about the build for the Mint. Can you discuss the process of taking concepts and also any VAD work for this and starting the build? What kind of live-action plates were filmed? What were some of the main challenges of dealing with scale, selling the vastness and adding in details, including robots and other elements?

Chris McLaughlin: The Mint environment can be broken down into three main areas: the landing platform and lift shaft, the Credit Maker, and the vault. For each of these areas, we were provided with concept art and a previs model. Using these references, we began by creating a blocking model, gradually adding more detail to each component.

Since these were fully CG environments, the filming took place against a bluescreen, with only a few key set elements built practically: the landing platform floor, the spaceship ramp, and a full-size security droid puppet. Everything else was entirely CG. The on-set lighting was carefully planned to account for what would replace the bluescreen. We had many lighting cues to work with in CG, for example, a strong, hard, warm light source behind Jod Na Nawood (Jude Law) to represent the entrance of the vault, as well as vertical chaser lights to simulate the motion of passing lights as the landing platform descended down the lift shaft.

The landing platform itself was largely pre-designed, as it had been carried over from previous sequences handled by another vendor. Our main task was to slightly increase its resolution and texture the areas closest to the camera in our shots. However, the lift shaft, through which the landing platform descends, needed to be built from scratch.

The Credit Maker, visible from the landing platform in a few shots, was inspired by a piece of concept art. To effectively communicate the scale of the machinery, we animated its moving parts, such as robot arms, with slow, deliberate movements, making them feel heavy, as though they were struggling under their own weight. Additionally, we populated the surrounding airspace with sky sleds at varying distances to reinforce the sense of scale.

For the vault, conveying its vastness, and by extension the sheer amount of credits, was crucial to the story. However, since the vault's design was relatively simple and repetitive, we relied on shot composition, depth cueing, defocus, and additional atmospheric elements to enhance its perceived size.

Across all three environments, we drew inspiration from several key visual references, including the Boeing Everett Factory, Cardington Film Studios, and the Son Doong Caves. We also found valuable references within the Star Wars universe, particularly in The Last Jedi, which features incredible shots inside the Mega Star Destroyer, essentially a massive hangar for the Imperial army.

b&a: Can you break down how the team tackled a typical head replacement and animation shot for Neel, going through all the steps from ingest to final delivery?

Chris McLaughlin: In most of his shots, the character of Neel was portrayed by an actor wearing an animatronic head, created by Legacy Effects and operated by a team of puppeteers. The animatronic head featured a semi-transparent panel that allowed the actor to see, which we digitally removed whenever it was visible. Full CG head replacements were relatively rare, occurring only in situations where the prosthetic head would obstruct the actor's vision or compromise their comfort and safety, for example, when running down the ramp of a spaceship.

For these shots, we ingested a Neel digi-double, created by ILM, and carefully tracked the actor's body. Ensuring a precise track around the neck and shoulders was crucial so that the CG head felt seamlessly integrated with the body's movement. Our animators' primary reference was the performance of the animatronic head. We ensured that our rig's capabilities matched those of the practical version, so the CG animation never felt exaggerated or unnatural. As a final step, cloth and hair simulations added subtle deformations and movement to Neel's ears, trunk, and hair, enhancing realism.

Light matching was essential in blending the CG head with the practical elements, but it posed a significant challenge: Neel's skin coloring was quite complex, featuring a mix of blues, greys, and pinks. It took several rounds of compositing refinements to perfectly dial in the look and match the colors seamlessly. Ultimately, the team did an outstanding job across all departments, and I think it would be difficult to distinguish which shots featured a CG head and which were practical.

b&a: Could you pick just one other scene/sequence/character or asset that DNEG was responsible for and talk about the particular challenges when you tackled it, what you had to solve, and why you thought it was successful?

Chris McLaughlin: For our work in Episode 4, we were tasked with building the suburban outskirts of At Achrann, a desolate, war-torn region on a distant planet. We began with concept art provided by the show's art department, which set the tone for the environment. While a few practical set pieces were constructed for interaction, the vast majority of this world was entirely CG. Any surface that characters walked on or physically interacted with was a practical set, but everything beyond that, from the ruined skyline to the distant haze, was digital.

Our team built an extensive library of destroyed buildings, including a bombed-out school, as well as an arid, decaying forest, all meticulously crafted to enhance the war-torn aesthetic. To increase the sense of destruction, we carefully designed and textured the buildings to reflect the war-torn setting, drawing inspiration from real-world conflict zones while maintaining a distinctive Star Wars look.

The concept art portrayed a landscape shrouded in fog and mist, creating an ominous and foreboding atmosphere. To match this aesthetic, our FX team developed a library of rolling mist and fog simulations, which we carefully integrated into every shot, ensuring consistency with the practical smoke effects used on set. Overgrown vegetation, scattered rubble, and dried-out, dying trees added further layers of realism.

This sequence also features one of my favorite shots, a moment that feels quintessentially Star Wars: as Jod's ship, The Ironclad, descends through the low-lying mist, its white-hot engines ignite the air, sending smoke, sparks, and debris billowing toward the camera as it touches down.

All images courtesy of DNEG. © 2025 Lucasfilm Ltd. & TM. All Rights Reserved.

The post Making a mint appeared first on befores & afters.
  • On The Set Pic: Sonic The Hedgehog 3
    beforesandafters.com
Director Jeff Fowler on the set of Sonic The Hedgehog 3.

The post On The Set Pic: Sonic The Hedgehog 3 appeared first on befores & afters.
  • See Rodeo FX's breakdown for Sonic The Hedgehog 3
    beforesandafters.com
Go behind the scenes.

The post See Rodeo FX's breakdown for Sonic The Hedgehog 3 appeared first on befores & afters.
  • Watch Scanline VFX's breakdown for American Primeval
    beforesandafters.com
The post Watch Scanline VFX's breakdown for American Primeval appeared first on befores & afters.
  • See Rodeo FX's VFX breakdown for Red One
    beforesandafters.com
The post See Rodeo FX's VFX breakdown for Red One appeared first on befores & afters.
  • The Better Man VFX Notes show is here
    beforesandafters.com
Hugo and Ian discuss the film, the VFX, and Robbie's eyebrows.

This week on VFX Notes, a new entry in our season on the 2025 VFX Oscar nominees. Hugo and Ian discuss Michael Gracey's Better Man, the biopic where Robbie Williams is played by a CGI chimpanzee. We discuss the film, the cinematography, the VFX, compositing and simulations from Wētā FX, the tech, and some of our favorite sequences. You can help support VFX Notes at the dedicated Patreon, too.

The post The Better Man VFX Notes show is here appeared first on befores & afters.
  • The making of Chistery and the monkey guards from Wicked
    beforesandafters.com
An excerpt from befores & afters print magazine.

In Jon M. Chu's Wicked, the Wizard's monkey guards were CG creatures created by ILM. "Jon wanted a powerful-looking creature," outlines ILM animation supervisor David Shirk, "so art exploration led us to combine elements primarily from larger apes like chimpanzees, baboons and orangutans, with a characteristic monkey tail. Rather than waddle upright on two legs, a more powerful quadruped walk was developed and was the principal locomotion, along with a physical size that made them feel intimidating next to the human characters."

"Early ILM animation testing explored an orangutan-based walk," says Shirk. "But the characteristic balancing on the sides of their feet was traded for a more grounded and much heavier soldier-like feeling. From our main hero monkey, we developed multiple variations to populate the army of monkeys featured heavily in the film's third act."

ILM concept art.

On set, stand-in performers rehearsed and worked on-camera with the principal actors to aid with interaction, eyelines and framing. Shirk notes that any extreme stunt performance was left to animation. "For acting beats," he says, "particularly in the case of Chistery, who is captain of the monkey guards, the on-set team gave us a starting point for physical performance and placement, but acting choices were left to post-production and grew organically from the edit as it developed."

"We used an unusual approach to arrive at the acting beats," continues Shirk, who notes that a proprietary Face Select toolset was used by ILM. "In collaboration with the director, I worked with the animation team to create close-up live performances, delivering multiple options per shot that were used in editorial to define Chistery's acting performance, then used that as the template for animation. Final animation consisted of hero keyframed action."

At one point, Elphaba reads from the sacred Grimmerie spellbook, resulting in the monkey guards transforming to sprout blue wings. "There was a lot of talk about transformation because it was obviously something that was very painful," recalls visual effects supervisor Pablo Helman. "Jon directed the animators to do certain things in Zoom sessions, working with David. In fact, for one of the shots, Jon kept saying, 'I can see David Shirk right there!'"

"The transformation scenes were a bit of a tightrope," weighs in Shirk. "The filmmakers wanted the effect to be visceral and scary but not excessively grotesque or too horrific. The on-set performers gave us a strong starting point for blocking, especially in defining how Chistery would travel through the space as he scuttled, rolled and writhed. As we had many monkeys to depict in this process, an exploratory mocap session was also invaluable to try out many types of actions quickly."

"We learned that playing up confusion, fear and bewilderment but being judicious in depicting pain in the crowd reactions helped to soften the edge," adds Shirk. "It was a rule that carried over to any close-up facial performances throughout the scene. As always, for key beats involving emotional performance, delivering multiple vid-ref takes helped us to home in on what the filmmakers wanted from the characters."

For the wings, ILM animated these to emerge from under costumes, bursting as they unfold, rather than showing them emerging directly from the body. Over 5,100 feathers per monkey had to be groomed. Shirk notes that staging was handled carefully so feathers grew and multiplied across bodies while never being shown emerging from skin.

issue #26: Wicked

For shots of the monkeys taking flight, ILM first collected reference. "Eagles and owls were primary sources of flight and takeoff/landing inspiration," advises Shirk. "A major obstacle was that rather than the wings growing from shoulders as they do with birds, ours grew from the middle of the back, creating an especially tricky challenge in making natural-looking flight movement. Many motion tests were produced to refine the look of their flight, and even though our monkeys had full heavy limbs, and, eventually, cumbersome armor as well, the director wanted their entire body to feel engaged during flight, so limbs never hung or dragged. When in full flight, the legs are played lightly and have a strong secondary dynamic reminiscent of a tail, while the arms have a sort of pump, staying engaged and feeling like the shoulders are helping to motivate the wing action."

The post The making of Chistery and the monkey guards from Wicked appeared first on befores & afters.
  • Mocap actor and VFX supervisor: the Better Man Q&A
    beforesandafters.com
    Wt FX visual effects supervisor Luke Millar and motion capture performer Jonno Davies discuss bringing Robbie Williams to life.Wt FX is well-known for its close collaborations on projects with motion capture performersthink Andy Serkis, Toby Kebbell and a long line of other actors who don motion capture suits and HMCs for a role, with the VFX team translating that performance into a CG creature.Its a task Wt FX carried out once again for Michael Graceys Better Man, this time taking the original on-set performance of actor Jonno Davies through to an ape version of Robbie Williams.Here, visual effects supervisor Luke Millar and Jonno Davies tell befores & afters what that partnership was like, the toughest scenes from on-set and in post, and how they crafted the more intimate sequences in the movie.b&a: Luke, certainly Weta FX has such a vast experience in performance capture, but what kinds of conversations did you have early on about the best way on set to bring Robbie to life as a digital ape?Luke Millar: Before we started shooting principal photography, I arranged to sit down with Michael Gracey and all of the films department heads to provide an overview on what would be involved in the VFX process and how it might influence everyone elses job. Wt FX has very robust systems that we can setup pretty much anywhere and capture performance data on set, on location even during live concerts! However, most of the other departments on this film had never worked with it before. From hair and makeup applying dots to Jonnos face each day to costumes providing proxy mesh clothing that Jonno could interact with but that we could still capture through. We dont work in isolation and so having the collaboration of all involved really helped with bringing Robbie to life! b&a: Jonno, how did you actually come on board Better Man? 
And, what was your first memory of seeing what you had done on set be translated to a digital Robbie ape, even if something very early?Jonno Davies: Kate Mulvany, who plays my Mum in the film, recommended me to Michael. Wed worked together on the Amazon series Hunters a few years back, no actual scenes together but just got on really well. Production were struggling to find their Rob and she showed them my Instagram which had some videos from when I played Alexander DeLarge in A Clockwork Orange on stage in New York. It was a vastly different interpretation to the Kubrick film, sort of physical theatre meets Gladitorial peacocking, which thankfully piqued Michaels interest. From there, MG pitched the film and showed me some pre-vis including Feel, Let Me Entertain You and My Way, and even from those basic renderings I knew that he and Wt FX were onto something special. I then auditioned over zoom with him and co-writer Simon Gleeson over the next few days, basically workshopping ideas and thankfully the role ended up being mine!Cut to about a year later when I first saw a digital ape standing in my place and I was blown away. It was actually quite an emotional moment. I think part of me always worried that Id just be used as a reference and my performance would get lost in the wonderment of it all, but seeing myself in that chimp my expressions, ad-libs etc, and then combining that with such artistry from Wt, it was very special.b&a: Luke, what was the Weta FX footprint for capturing Jonno on set? In terms of cameras, mocap gear, other measurements/survey etc?Luke Millar: VFX had, by some margin, the largest department on set! We had a VFX team of 6 for capturing LIDAR, set reference, wrangler notes and HDRIs. Five witness camera operators, three Pas, and then a team of eight purely to manage the mocap work. 
I would shoot GoPro videos from tech scouts and then brief the team on the scenes so that they could rig sets/locations the day before to ensure we had full coverage of the space. The system has a 3mx3m scaling volume and then around 4-5 carts that we would have to come everywhere with us! Its funny because all we are doing is collecting data at that point. By the time shooting wrapped, everyone was celebrating finishing the show and we were only just starting!b&a: Jonno, what was your prep process like for this? Can you break down how you got into a Robbie mindset in terms of consulting reference and then actual conversations with Robbie and Michael?Jonno Davies: I was brought on really last minute, I think I landed in Melbourne about 8 days before we started shooting, so you can imagine that week was crazy: from rehearsals to choreography, as well as tech prep with Wt FX like facial scans etc.Theres an acting technique I use for a lot of my work called Laban, its a brilliant way to explore how a persons character influences how they move and vice-versa, so I started from there and then added Robs idiosyncrasies.I think I kept YouTube afloat during that time, just cramming in as many of Robs performances and interviews as I could, studying how his voice (his accent and pitch really shifted between 15-30), his physicality and energy changed over time. But it was really important for me to see what hes like when the cameras arent rolling, and luckily Rob was really giving with his time and allowed me to see that difference between Robert the human and Robbie the popstar. b&a: Can you both talk about the Regent Street Rock DJ sequence? The energy in that sequence is just amazingwhat was that experience like with such a long rehearsal time, and also limited time each night for the shoot? Jonno Davies: Absolutely wild. 
What I loved about Rock DJ was that it was one of the rare musical numbers where Robbie isn't plagued by his demons, so I was allowed to really enjoy myself and properly soak in the spectacle of what we were collectively trying to achieve.

As you say, there was a limited time each night, plus it's not like we could just add another day at the end of the shoot if we didn't have everything we needed. That's why rehearsals were so extensive; the muscle-memory needed to be second nature by the time we reached set, and that's not just for main cast and dancers, it includes the camera department too, they had their own choreo to stick to.

That sort of militant prep really instilled a confidence in us though and allowed us to let rip on every take.

Luke Millar: So much prep went into Rock DJ! We previs'd, techvis'd, shootvis'd, re-techvis'd and then rehearsed. By the time we were on that street, I had never felt more ready, but you never know what will happen. We wanted as few wipes as possible and never an obvious extra walking closely past camera, so that required many takes to get things as tight as possible. The tricky thing was we had to shoot it in order as we needed to join onto the previous night's work. The downside to a oner is, if we didn't manage to nail one piece then none of it would work! Having an on-set editor was essential as we could capture takes live and cut them over the previs to ensure that our timing and camera work was spot on. That said, we still had to use pieces from 36 plates to stitch the whole thing together! If anyone is contemplating trying to shoot a musical number with five synchronized mobility scooters, DON'T! They are the most temperamental things ever!

b&a: The concert and dance moments are incredible, but I also love the more intimate scenes, such as Robbie's time with Nan.
Can you both discuss how making these types of moments differed from the much larger ones?

Jonno Davies: Yeah, these moments are so important, they're what make the ape feel innately human. Plus it's those sorts of cherished relationships that people can relate to. We had a lot more stillness in these types of scenes, and when you pair that with the fact there's no microphone or grand performance to hide behind, it suddenly becomes very vulnerable and exposed. That's when Michael and the camera get properly up-close to Rob, and you can really appreciate not just the fragility of what's going on in his mind, but also the incredibly nuanced work the artists at Wētā FX have achieved.

Luke Millar: I was always acutely aware of the intimacy and sensitivity behind some of the scenes, and so for me, my biggest concern was whether any of our gear would affect those moments. Jonno wore a dual-mounted face cam and helmet, but if he needed to get close, it would be in the way. Robbie is the only digital character in the shot, so we couldn't compromise any other performance in the frame. This meant a lot more work from the animation team to replicate the subtlety and nuance in Jonno's performance; however, once it clicks into place everything works.

b&a: Jonno, do you have any specific advice you'd give Luke about his own motion capture appearances in the film, i.e. things he did well or could even do better?

Jonno Davies: If this all goes tits up, Luke would make an excellent bus driver. I feel like he really committed to the character.

b&a: What was the hardest scene for both of you to perform and execute?

Jonno Davies: Probably Land of a Thousand Dances, which is the montage sequence that follows Robbie's meteoric rise to solo stardom. There's a specific section where we show a duet that he did with Tom Jones at the Brit Awards, and Ashley Wallen (choreographer) wanted us to go like-for-like with the movement.
You can tell that Rob was absolutely wired during this performance, so I obviously had to recreate that take after take.

I remember this very specific moment during that shoot when the dynamics shifted: I went from this adrenal glee of entertaining our hundreds of extras, feasting off the buzz of the crowd, to suddenly hitting around take 15 and realising that the adrenaline was wearing off, and I was running on fumes knowing we had probably another 15 angles to shoot. It brought a sort of fight-or-flight sensation and gave me a greater understanding and respect for what Rob went through back then.

Luke Millar: Definitely She's The One. Close interaction with Robbie is by far the hardest work, and the dance in She's The One is nothing but close interaction! Robbie has longer arms than a human, and so all of those contact points have to be reworked to fit. We needed an accurate 3D representation of Nicole so that when she touches Robbie, his hair and clothing move in sync with the plate, and there is no other way to do this but a lot of back and forth between Animation and Simulation.

We also had some complex match cuts and transitions which needed massaging together, as well as some insanely fluid camera moves that required parts of the boat set to be removed and then replaced with digital versions in post. It was also our only real bluescreen scene in the film, so we had to extend the boat, create a digital environment and then blend that into a cyclorama that I shot on the Côte d'Azur. Even the neighboring boats have CG dancing New Year's partygoers on them! The amount of detail is really incredible.

The post Mocap actor and VFX supervisor: the Better Man Q&A appeared first on befores & afters.
  • Dune: Part Two wins BAFTA for Special Visual Effects
    beforesandafters.com
The BAFTA for Special Visual Effects was awarded to Paul Lambert, Stephen James, Gerd Nefzer and Rhys Salcombe for Dune: Part Two.

Check out all the coverage of Dune: Part Two at befores & afters here.

issue #23 Dune: Part Two

The post Dune: Part Two wins BAFTA for Special Visual Effects appeared first on befores & afters.
  • The river and 1.2 petabytes of disk space
    beforesandafters.com
How the emotional Raka scene from Kingdom of the Planet of the Apes was made. An excerpt from befores & afters magazine.

Raka, Noa and Mae are confronted by Proximus' muscle at a river crossing in Wes Ball's Kingdom of the Planet of the Apes. The FX simulations in that sequence would require 1.2 petabytes (1.2 billion megabytes) of disk space. To build up that scene, animation started with the original performance capture, some of which was carried out in partial sets with flowing water.

"Our work was a lot of back and forth with what shots retained plate elements for the water and what would ultimately be CG," relates animation supervisor Paul Story. "We would be trying to keep as much of what the plate performance was there."

"A great example of that is a close-up shot where Raka pushes Mae up out of the water," advises VFX supervisor Erik Winquist. "That's the performance take. Special effects provided us with a small river tank that had this current flowing that they could control. Freya and Peter are there in that river flow. We were able to use Freya, paint out Peter, and replace him with CG, of course, but the water that was actually pushing up against Peter's chest in the wetsuit that he was wearing is in the movie. It was great to actually take advantage of the plate water."

Although Wētā FX had gone through a major R&D phase for Avatar: The Way of Water to develop its Loki state machine for coupled fluid simulations, the river sequence in Kingdom presented some different challenges, in particular, that the water was of a nasty sediment-filled, dirt-and-debris type, with much surface foam. "For the purposes of efficiency and flexibility, we were leaning less on the state machine approach and bringing back in some of the older ways of working on water sims," notes Winquist. "One thing we'd do is run a primary sim at low-res first that would give Paul and his team a really low-res mesh that they could at least animate to."
"So, for example, they'd translate Peter as Raka, who was sitting in an office chair getting pulled around on a mocap stage, and work that into a very low-res sim. In the meantime, the FX team would direct the flow and get a rough version of that in front of Wes."

This essentially involved art directing the current and camera, specifies Winquist. "Does the camera dip under for a moment and come back over? What does that mean for having to play water sheeting down the lens? Once we had that art-directed river in low-res, animation could go off and start animating apes against that current. Then also our FX team could go in and start looking at up-resing that into a much higher resolution sim."

The next steps were a back and forth of animation animated to a low-res mesh. The benefit is that the animation done using the low-res mesh matches well to, and integrates with, the subsequent high-res fluid simulations, although tweaking is always required. Once these steps occur, the creatures team would take the flow fields of the simulation to affect the hair of the apes.

"For that we're using Loki for the hair of the creatures and water to all interact," says Winquist. "Then we take the creature bakes, bring them back into the sim, and then FX has to go in and do a super high-resolution, thin-film simulation against the hairs, because now we need to make sure that we're taking into account volume preservation of the water. If they jump out of the water, we also need to show that the water is now starting to drain out of their hair."

To help with simulating Raka's complex fur in its wet state, visual effects supervisor Stephen Unterfranz pitched the idea of placing the digital character in the water, running a simulation and seeing what would happen to the fur, and then sculpting a bespoke groom just for use when he is in the water.
It helped with establishing a characteristic parted-hair look on the fur of Raka's arm and body, owing to the pressure of the rushing water.

issue #27 Kingdom of the Planet of the Apes

Interestingly, Raka's fall into the raging rapids of the river was something Wētā FX had to revisit a couple of times, owing to a change in the line the character delivers. It was originally filmed on the Sydney backlot set with the scripted line of "The work continues," said to Noa. "Wes came to the realization that that was the wrong thing for that character to say as his last words," relates Winquist. "The switch to 'Together. Strong.' essentially echoes the 'Apes. Together. Strong.' line from Caesar. It lands so much harder as the last thing we're going to hear from this mentor character."

With only around a month before delivery, Macon re-delivered the line by simply recording himself on video on his iPhone. "The audio from that is actually what's in the movie," reveals Winquist. "It really came down to an animator having to look at what we were seeing from that iPhone footage and put that in there. It was a, 'My God, this is kind of devastating,' moment."

The post The river and 1.2 petabytes of disk space appeared first on befores & afters.
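As a side note, the "1.2 petabytes (1.2 billion megabytes)" figure quoted in the article is a straightforward decimal-prefix conversion, and Winquist's description implies a fixed ordering of workflow stages. The sketch below is purely illustrative (it is not Wētā FX code, and the stage names are paraphrased from the interview): it shows the unit arithmetic and the stage ordering as plain Python.

```python
# Illustrative sketch only: decimal (SI) storage-unit conversion behind the
# "1.2 petabytes = 1.2 billion megabytes" figure, plus the river-sequence
# workflow ordering as described by Winquist. All names are hypothetical.

MB_PER_PB = 1000**3  # SI prefixes: 1 PB = 10^15 bytes, 1 MB = 10^6 bytes

def petabytes_to_megabytes(pb: float) -> float:
    """Convert petabytes to megabytes using decimal (SI) prefixes."""
    return pb * MB_PER_PB

# The stages of the river workflow, in the order the article describes them:
RIVER_WORKFLOW = [
    "run a low-res primary water sim (gives animation a mesh to work against)",
    "art direct the current and camera on the low-res sim",
    "animate apes against the low-res current",
    "up-res the FX sim to a much higher resolution",
    "creatures: drive ape hair from the sim's flow fields (Loki)",
    "FX: thin-film sim against the hairs (volume preservation, drainage)",
]

if __name__ == "__main__":
    mb = petabytes_to_megabytes(1.2)
    print(f"1.2 PB = {mb:,.0f} MB")  # i.e. 1.2 billion megabytes
    for i, stage in enumerate(RIVER_WORKFLOW, 1):
        print(f"{i}. {stage}")
```

(Using binary prefixes instead, 1.2 PiB would be 1.2 x 2^30 MiB, a slightly larger number; the article's "1.2 billion megabytes" gloss matches the decimal reading.)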
  • How the motion capture worked on Better Man
    beforesandafters.com
Go behind the scenes of the on-set performance capture for the film.

Today on the befores & afters podcast, we've got a fun conversation with Wētā FX visual effects supervisor Luke Millar and actor Jonno Davies about the film Better Man. Jonno, of course, played Robbie Williams in the film, and for the most part he wore different kinds of motion capture gear on set, with his performance then translated into a Robbie ape by Wētā FX.

There's already a fun Q&A with Jonno and Luke at befores & afters, but we go much further in this chat, including breaking down the Rock DJ Regent Street scene, the final My Way sequence and also a more frenetic and intimate moment when Robbie is in his living room with the family. I really enjoyed getting the actor perspective on mocap here, and hearing how Luke and Jonno interacted in terms of VFX.

The post How the motion capture worked on Better Man appeared first on befores & afters.