Befores & Afters
A brand new visual effects and animation publication from Ian Failes.
Recent Updates
  • Issue #26 of befores & afters in PRINT is on the VFX of Wicked!
    beforesandafters.com
    In-depth and with a ton of before and after images. Issue #26 is out, and it's a full issue on Wicked. It dives deep into key moments in Munchkinland, Shiz University and the Emerald City, the making of Dr. Dillamond, developing the Wizard's monkey guards, and the complex Defying Gravity broom-riding finale featuring Elphaba. The main VFX studios, Industrial Light & Magic and Framestore, are showcased, as are the film's practical special effects. The mag includes in-depth interviews with VFX supervisor Pablo Helman (from ILM), Framestore VFX supervisor Jonathan Fawkner, ILM animation supervisor David Shirk, and special effects supervisor Paul Corbould. Plus, it's FILLED with a ton of before and after imagery. Find issue #26 at your local Amazon store: USA, UK, Canada, Germany, France, Spain, Italy, Australia, Japan, Sweden, Poland, Netherlands. The post Issue #26 of befores & afters in PRINT is on the VFX of Wicked! appeared first on befores & afters.
  • VFX without limits: How Baked Studios revamped its hybrid workflow with file streaming on Suite
    beforesandafters.com
    VFX artists are modern-day magicians in the film & television industry. Crafting worlds that take audiences beyond the confines of reality, blending technical expertise with a creative touch, every VFX professional walks a tightrope between expression and precision.

    For the team at Baked Studios, a boutique VFX shop based in New York City and Los Angeles, efforts are focused on invisible effects: meticulous edits like blue-screen replacements and set extensions that blend seamlessly into the final cut. Despite their covert artistry, the creatives at Baked have become renowned for their ability to layer nuanced details and bring artistic visions to life, providing a range of VFX post-production for some of the biggest film companies and streaming platforms.

    Supporting this intricate, data-heavy work, however, takes a different skill set: a keen sense of workflows and media management. Without a fast, secure, and reliable way to share assets between collaborators, creating high-quality VFX becomes a cumbersome task.

    Baked has chosen a hybrid workflow, a mix of on-premise and remote collaboration, so its creative output is rooted not only in the talent of its team but also in the effectiveness of its processes. With main offices in New York City and Los Angeles, and annex locations in Montana and soon Atlanta, the workflow supervisors at Baked recently chose to revamp their approach.

    Now, let's put Baked Studios' current workflow under the microscope and uncover how a cloud storage solution powered by Suite has simplified operations, made it possible to collaborate across vast distances, and helped deliver fine-tuned visuals with superior efficiency.

    Managing complex, distributed VFX workflows, Baked's existing on-prem infrastructure provided instant connectivity to media, but that stopped at the front door. With many artists working remotely, coordinating projects between Baked's various on-prem and remote locations complicated operations, as did ensuring consistent access to critical configuration files. So, the initial challenge was to take a well-oiled on-prem workflow and connect it to the cloud for real-time remote collaboration between key players.

    Cameron Target, VFX pipeline manager at Baked, spearheaded the process and tapped Ricardo Musch, a London-based VFX pipeline consultant at Nodes & Layers, to hatch a clever plan: use Suite's cloud storage as a mounted drive location integrated directly with Baked's Flow Production Tracking database (formerly ShotGrid), on-prem servers, and render farms.

    "Flow just needs a central storage location; it asks you to point to a mounted drive, any disk location," Target explains. "We can point it directly at Suite, and Flow doesn't distinguish Suite as different from any other mounted drive."

    The team then simply relies on shared file-pathing guidelines to ensure everyone on a project organizes media correctly. The process is simple, so when Target brought the idea to the rest of the team, eyes widened. Baked's leadership understood how it worked, and the benefits, immediately.

    "We built the tool so that you can either share media directly to Suite, or to Baked's local storage in New York," says Musch. "VFX studios generate a lot of data; projects can grow to 40-50 terabytes. We split up 'hot' assets that we need readily available, and we store those all on Suite. Everything else stays local."

    For George Loucas, founder & VFX supervisor at Baked, greenlighting the use of Suite's cloud storage became a no-brainer once the team tested the integrated workflow and experienced the advantages.

    "Suite has been widely embraced, and that says a lot," Loucas says. "For our team to see the benefits and the efficiency, that's music to a business owner's ears." – George Loucas, Founder & VFX Supervisor, Baked Studios

    Taking it a step further, Target explains how the system works by using templates within Flow Production Tracking, allowing media to be easily distributed between the two storage locations.

    "Flow uses tokens linked to existing information in the database," Target says. "We just added an extra token called Storage Location into our template; now there are two options for our editors to pick: [store files] in Suite or on-prem. Every file in our system has a corresponding file path that dictates where it is kept. It's exciting to use Suite as mounted storage that fits into this industry-standard platform; Suite allows us to have a central link, so we can continue using our familiar workflow, but through a cloud platform that operates just the same."

    Storing all active media on Suite's cloud storage not only simplifies but unifies the team's efforts. On-premise employees can utilize the company's powerful infrastructure in tandem with Suite, while remote artists get real-time connectivity to assets as if they were working in-office. At any given time, 25+ VFX artists, coordinators, and producers work directly off of Suite, where the team stores configuration templates and all active media. In turn, everyone remains connected to the most updated versions of every layer, node, and asset.

    For Matt Hartle, partner & executive creative editor at Baked, stationed at his office in Montana, that connectivity is a vital benefit of the cloud-based VFX workflow on Suite.

    "When we're reviewing files and notice simple things that need a quick adjustment, I'm able to hop into the project directly from Suite to make those changes," Hartle says. "It makes our studio feel small, like we're all sitting in the same room on the project. It just flows seamlessly on Suite."

    Even with a dispersed workforce, Baked's supervisors act with confidence as they choose which projects stay entirely in-house and which get placed on the Suite drive. Even when adding new team members with just a few clicks, admins can apply user-specific permissions that limit remote access to certain files and folders, with the ability to revoke access instantly if needed.

    "You can set up a really professional shop with loads of flexibility really quickly," Target remarks. "Suite is the backbone for that."

    CG artists and compositors also frequently work with specific plugins, often having certain tools and preferences saved locally on their editing machine of choice. On Suite, every one of those creative tools is compatible right out of the box, enabling artists to work the same way they would off of a hard drive, but remain connected to the team from anywhere in the world.

    "Our teams can collaborate with all the plugins they already use," Musch explains. "This is an important note: you can simply point Nuke or any other software to Suite and it just works."

    Moreover, additional features like Suite's CLI compatibility with Linux, a convenient built-in Time Machine feature that can recall files down to the millisecond, and direct-to-cloud external file sharing with Suite Connect also help Baked's supervisors and creatives shine.

    "I've tested it on Rocky, the industry's Linux distribution of choice, and it works great. The CLI compatibility makes it really easy to deploy changes and throttle actions across our network," Target says. "You can also edit a Nuke script, save it to Suite, and maintain version control through Time Machine, meaning you can easily roll back to an earlier version of that script if you need to reference it. Having a shared space for configurations taps into one of Suite's cool 'central source of truth' ideas. Anyone in New York or Los Angeles can pull the updated files. Having that flexibility and access from everywhere, it's insane. There's no easier way to do it."

    From everyday, granular efficiencies to overarching benefits that affect top execs and project managers, Baked's revamped cloud-based VFX workflow on Suite highlights how streamlined collaboration can lead to accelerated creative results. Better yet, success often pairs with growth, and Suite makes it effortless to scale without the usual unnerving overhead.

    "We're currently building out our operation in Atlanta and Suite makes that less daunting," says George Loucas, founder & VFX supervisor at Baked. "We can dip our toes in, without a large commitment to on-prem costs upfront. It lets us build out a small team and provide access to data quickly, instead of having to commit to racks of storage, servers, and everything else that comes with a fully configured on-prem setup. We're excited about these new opportunities; these technological advances are allowing everything to run better, upping the game."

    As a full-service, mid-size VFX agency, Baked Studios represents a large swath of the visual effects industry: a sector of independent shops using the latest technology to produce the best content possible. Ultimately, VFX projects aren't getting any simpler, and it's the teams finding efficiencies on the cutting edge that will continue to thrive, take on bigger projects, and help define the next chapter of visual effects in film and television.

    "Suite has enabled us to do things we didn't think we could," Target says. "We can work with our on-premise servers at the core, but easily expand. There's no easier way to do it."

    Brought to you by Suite Studios: This article is part of the befores & afters VFX Insight series. If you'd like to promote your VFX/animation/CG tech or service, you can find out more about the VFX Insight series here. The post VFX without limits: How Baked Studios revamped its hybrid workflow with file streaming on Suite appeared first on befores & afters.
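    The template-and-token routing Target describes above lends itself to a short illustration. The Python sketch below is hypothetical: the mount points, token names and template format are stand-ins invented for this example, not Baked's or Flow Production Tracking's actual configuration. It simply shows how one extra "storage location" token can resolve the same shot path to either a cloud mount or on-prem storage.

```python
# Hypothetical sketch of routing a resolved file path via a "storage location" token.
# Mount points, token names and the template format are illustrative assumptions only.
from string import Template

# Two mounted drive locations: a Suite-style cloud mount and local on-prem storage.
STORAGE_ROOTS = {
    "suite": "/mnt/suite/projects",
    "on-prem": "/mnt/nyc_server/projects",
}

# A Flow-style path template; ${root} is filled in from the storage-location token.
SHOT_TEMPLATE = Template("${root}/${project}/${sequence}/${shot}/comp/v${version}")

def resolve_shot_path(project, sequence, shot, version, storage_location="suite"):
    """Resolve a shot's working path to either the cloud mount or on-prem storage."""
    root = STORAGE_ROOTS[storage_location]
    return SHOT_TEMPLATE.substitute(
        root=root,
        project=project,
        sequence=sequence,
        shot=shot,
        version=f"{version:03d}",
    )

# "Hot" assets go to the cloud mount; everything else stays local.
print(resolve_shot_path("show_a", "sq010", "sh0200", 12, "suite"))
print(resolve_shot_path("show_a", "sq010", "sh0200", 12, "on-prem"))
```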
  • The wide use of machine learning VFX techniques on Here
    beforesandafters.com
    Up-rezing, ML for transitions, and ageing and de-ageing were all part of the mix. Today on the befores & afters podcast we're chatting to VFX supervisor Kevin Baillie about the Robert Zemeckis film, Here. You may have already seen I did an entire issue of the print magazine on Here, which you can grab now. There we covered the machine learning de-ageing work in this film for actors like Tom Hanks and Robin Wright, plus the virtual production and visual effects work, which also included some ML approaches. Well, I was so taken, really, with the art and tech in Here that I wanted to share with everyone my original chat with Kevin where he breaks all of that down in detail. I really think Here is a great example of what new tech exists out there and how it can be used in storytelling. I mean, even since they made this film, things have advanced so much in the area. But listening to Kevin talk about how to solve particular issues is always a good way to hear about how VFX supes tackle big-ticket items on films. This episode is sponsored by Suite Studios. Ready to accelerate your creative workflow? Suite's cloud storage is designed for teams to store, share, and edit media in real-time from anywhere. The best part? With Suite, you can stream your full-resolution files directly from the cloud without the need to download or sync media locally before working. Learn more about why the best creative teams are switching to Suite at suitestudios.io. The post The wide use of machine learning VFX techniques on Here appeared first on befores & afters.
  • Part two of the VFX Notes breakdown of The Phantom Menace is now here
    beforesandafters.com
    A further deep dive celebrating the achievements in Episode I. In this episode of VFX Notes, Ian and Hugo continue to dive into Star Wars: The Phantom Menace, celebrating its 25th anniversary. This is part two of our discussion on the film. In the first episode, we reviewed the film and discussed the fan reaction to Jar Jar Binks, the digital revolution, and many other groundbreaking innovations. In part two, we go deep into individual sequences from the film, including the pod race, the CGI characters, the miniatures, and much more. The post Part two of the VFX Notes breakdown of The Phantom Menace is now here appeared first on befores & afters.
  • A mix of old and new
    beforesandafters.com
    The stop motion animation and VFX tech used on Wallace & Gromit: Vengeance Most Fowl. An excerpt from issue #25 of befores & afters print magazine.

    In A Grand Day Out, released in 1989, director Nick Park introduced us to Wallace and Gromit. The beloved human and dog characters were animated in clay. Decades later, on the Aardman film Wallace & Gromit: Vengeance Most Fowl (which Park directed with Merlin Crossingham), the characters are still brought to life in clay. However, several other aspects of the film now relied on some of the latest 3D printing, camera and lighting, and visual effects technologies. Here, Aardman supervising animator and stop motion lead Will Becher shares with befores & afters the range of old and new tech used to make the film.

    b&a: The way you made this film seems to incorporate so many old-school pieces of animation technology, and the latest tech, too.

    Will Becher: Yeah, I mean, it is a theme in the film as well, the old-school and new-school tech. What we've found on every project is there's always another version. So we're always getting the newest cameras and we're always updating the software that we use. But in terms of model making and the art department, 3D printing's become a really big part of it because it's fantastic for sort of micro-engineering and testing things.

    In the film, we have the Norbot gnome character. We still start with a clay sculpt and then we can scan that sculpt in and we can build the internal mechanism design to the millimeter on computer using a 3D model. Engineering elements that sit together in a very small space takes a long time and lots of filing and fiddling. With Norbot, we had a 3D printed head, and we 3D printed the mechanics inside the head. So the mouth and the way it moves, it's all 3D printed, it slots together.

    In terms of animation, the process is very similar to how it was when Nick Park started. We're still using the process of physically animating and moving characters frame by frame. The advancements really come with the world around them. So, making the film feel bigger using set extensions or digital matte paintings for the skies.

    b&a: Norbot was 3D printed, in part, but did you consider animating him with replacement animation?

    Will Becher: No, the reason we used 3D printing was to make sure we could make him as something solid. We could have used it for replacement animation. In fact, we thought about it: "When he walks, when he marches, would it be better to print?" But actually, funnily enough, because everything is so organic in the world, the floor of the sets, it's not perfectly flat. So as soon as you have anything like that, you actually need articulation. So Norbot is printed in his head, but the rest of him, although some of the internal mechanisms are printed, he has a skeleton inside and he has silicone dressing on top. And all the animators then, they manipulate him by hand, they move him around.

    He's just a good example of a very small version of a very well articulated puppet. So he could do a lot more than you see in the film. He's got the most complicated sort of skeleton, really, because he has to be quite versatile. And the one puppet we make has to work for every shot in the film.

    b&a: There's an army of Norbots that appear. How did you approach doing so many?

    Will Becher: We have PPMs for every sequence, and we spent a bit of time talking about, "Okay, how are we going to do it? We're going to shoot separate plates for each one, but we've got to get them to look the same." And as soon as we said we want them to be exactly the same, the way they move, because they're an army… That's the other thing, stop motion is organic. You can't repeat it because you are physically moving things in space, and the lens and the lighting, everything is organic.

    So it was our visual effects supervisor Howard Jones, he said, "Okay, if you wanted to repeat, then maybe what we could do is actually we could shoot the Norbots, just one row, and then we can have the camera move back into different positions to effectively give us the perspective so that we could then paste that behind."

    So we tried this out. It was like, "Can we do that? Can we do an individual frame and then shoot several plates with the camera in different places?" We couldn't, because actually the characters just look wrong, because the lighting doesn't change. So then we had to design this rig that basically would move them, slide them back, take a frame, slide them back, really complex, and then stitch them together in post.

    But what I love is that we could have tried to build CG models, but actually within our scope, within the budget, we didn't have any CG characters. We couldn't, and it would've been very expensive to actually make a CG Norbot that would hold up on screen that close. So everything we shot with the Norbots, we shot for real with the actual puppets.

    b&a: Feathers McGraw, the penguin, returns in this film. He seems like a very simple puppet build, but is that the case?

    Will Becher: Well, the actual shape of the face looks very simple. Feathers is literally like a bowling pin on legs. That's how we described him in the early days. And the original puppet, actually, it was the same size, same height, he looked the same. He probably just didn't look quite as advanced inside. That's the bit now we would 3D print. For the surface and the wings, it's all just clay. And even wire, we still use wire, because actually it's really hard to get miniature articulated joints inside.

    What's also new is the use of silicone. We used to use a lot more foam latex, but foam over time just dries out, cracks. For Wallace and for Gromit, the bodies of them are actually silicone. They're full of fingerprints, but it's a very flexible type of silicone, and it just saves us time, focusing on things like the performance rather than focusing on cleaning up a joint. And that's the benefit of the newer technology.

    b&a: When you're building sets, what kinds of decisions do you make about how much can be built and what can be DMP? There's a canal sequence in the film, for example, which seems like a massive build.

    Will Becher: That's a really key example of the fusion of tech, because the art director, Matt Perry, he's excellent. He's really resourceful at building stuff on set, in person, for real. But also he really wants it to feel big and advanced. So, what we'd do is figure out how to build it in sections. He'll build a section and say, "Okay, Nick and Merlin, I think we need to build this much of it, and the rest of it we'll scan and we'll create as DMP."

    That means there's a section of the canal for real, and the actual boats, which are also real physical things they can fit in. And then it's extended out. To do the whole thing in-camera, we would have needed a massive space and a huge amount of time as well to paint all those bricks. I think we ended up with two sections, two actual archways, and from that, we can shoot loads of plates and they can scan it.

    b&a: Have other technologies for shooting changed or come into play much in recent years, say with motion control or camera rigs?

    Will Becher: Well, there's a shot in the film where the camera goes up the staircase. It's funny the things you don't necessarily anticipate that are going to be a pain. None of our cameras, our digi stills cameras, could possibly get close enough, because they're too big. They're all high-end digital stills cameras. And so we had to test and mount lots of different smaller digital cameras on the end of a crane and try and get as close to the set as possible, because we really wanted to create that camera move for real, traveling up the staircase.

    So I'd definitely say the cameras have changed, but also the lights have become smaller and smaller. And with the lighting, we're quite often putting tiny, practical lights in there for a candle flame or something. They're really advanced, so we can program them to flicker, and they're so small we can hide them behind props. They're lower temperature as well. So for the animators, it used to be quite hot work. If you were in a unit with a couple of massive 5K or 10K lights, compared to today with LEDs, well, it's a huge change.

    b&a: I guess some of the other new technology involves using visual effects and CG for things like water. But still, I imagine, to keep that Aardman and animated look, how do you ensure in your role that that's maintained?

    Will Becher: There's a scene at the beginning where Gromit gets milk poured all over him. We tried all sorts of things, but when you get into particle effects, it gets really difficult. And so things like mist, fog, smoke, fire, milk: it turns out we can never stop it looking the scale it is. So we tried milk and it just looks too thick. We tried lots of different materials. So in the end, the milk is a bit of a hybrid. We have it pouring out of the jug. We might use actual modeling clay. But then as soon as it hits Gromit, in this case, it turns into a CG effect.

    For water, we did a couple of things. Firstly we found this amazing stuff, this clear resin that you can sculpt. You sculpt this resin and you basically cure it with UV light, and it goes hard. So it's totally see-through. So this is a new thing. We've only been using it for a few years. It's fantastic. But you can't ever create a lake or an ocean that interacts with the characters. So there's a whole scene in this film where Wallace is in the water. And for that, we actually applied what looked like water to the puppet so that he was wet above water, but all the actual surface of the water is CG.

    Our directors, Nick and Merlin, neither of them are scared of using CG. They use it all the time, but they'll use it where it makes the film better, not for the sake of it. Also, we won't do stuff in camera for the sake of it. If it doesn't look good, we'll go to the best tools for the job.

    Read the full issue of the magazine. The post A mix of old and new appeared first on befores & afters.
  • Here are all the nominees for the 23rd Annual VES Awards
    beforesandafters.com
    Leading the noms are Dune: Part Two, The Wild Robot, Shōgun and The Penguin. The Visual Effects Society has announced the nominees for the 23rd Annual VES Awards. The Awards will be presented on February 11, 2025 at The Beverly Hilton hotel. Special honorees at the 23rd Annual VES Awards include: actor-producer Hiroyuki Sanada, receiving the VES Award for Creative Excellence; director and VFX supervisor Takashi Yamazaki, receiving the VES Visionary Award; and VR/immersive tech pioneer Dr. Jacquelyn Ford Morie, receiving the VES Georges Méliès Award. The VES Online View and Vote System will be available at 12:00 AM PST on January 20, 2025 and will close at 11:59 PM PST on February 2, 2025. Here are the nominees:

    OUTSTANDING VISUAL EFFECTS IN A PHOTOREAL FEATURE
    Better Man – Luke Millar, Andy Taylor, David Clayton, Keith Herft, Peter Stubbs
    Dune: Part Two – Paul Lambert, Brice Parker, Stephen James, Rhys Salcombe, Gerd Nefzer
    Kingdom of the Planet of the Apes – Erik Winquist, Julia Neighly, Paul Story, Danielle Immerman, Rodney Burke
    Mufasa: The Lion King – Adam Valdez, Barry St. John, Audrey Ferrara, Daniel Fotheringham
    Twisters – Ben Snow, Mark Soper, Florian Witzel, Susan Greenhow, Scott Fisher

    OUTSTANDING SUPPORTING VISUAL EFFECTS IN A PHOTOREAL FEATURE
    Blitz – Andrew Whitehurst, Sona Pak, Theo Demiris, Vincent Poitras, Hayley Williams
    Civil War – David Simpson, Michelle Rose, Freddy Salazar, Chris Zeh, J.D. Schwalm
    Horizon: An American Saga Chapter 1 – Jason Neese, Armen Fetulagian, Jamie Neese, J.P. Jaramillo
    Nosferatu – Angela Barson, Lisa Renney, David Scott, Dave Cook, Pavel Sgner
    Young Woman and the Sea – Richard Briscoe, Carrie Rishel, Jeremy Robert, Stéphane Dittoo, Ivo Jivkov

    OUTSTANDING VISUAL EFFECTS IN AN ANIMATED FEATURE
    Inside Out 2 – Kelsey Mann, Mark Nielsen, Sudeep Rangaswamy, Bill Watral
    Moana 2 – Carlos Cabral, Tucker Gilmore, Ian Gooding, Gabriela Hernandez
    The Wild Robot – Chris Sanders, Jeff Hermann, Jeff Budsberg, Jacob Hjort Jensen
    Transformers One – Frazer Churchill, Fiona Chilton, Josh Cooley, Stephen King
    Ultraman: Rising – Hayden Jones, Sean M. Murphy, Shannon Tindle, Mathieu Vig

    OUTSTANDING VISUAL EFFECTS IN A PHOTOREAL EPISODE
    Fallout; The Head – Jay Worth, Andrea Knoll, Grant Everett, Joao Sita, Devin Maggio
    House of the Dragon; Season 2; The Red Dragon and the Gold – Dai Einarsson, Tom Horton, Sven Martin, Wayne Stables, Mike Dawson
    Shōgun; Anjin – Michael Cliett, Melody Mead, Philip Engström, Ed Bruce, Cameron Waldbauer
    Star Wars: Skeleton Crew; Episode 5 – John Knoll, Pablo Molles, Jhon Alvarado, Jeff Capogreco
    The Lord of The Rings: The Rings of Power; Season 2; Eldest – Jason Smith, Tim Keene, Ann Podlozny, Ara Khanikian, Ryan Conder

    OUTSTANDING SUPPORTING VISUAL EFFECTS IN A PHOTOREAL EPISODE
    Expats: Home – Robert Bock, Glorivette Somoza, Charles Labbé, Tim Emeis
    Lady in the Lake; It Has to Do With the Search for the Marvelous – Jay Worth, Eddie Bonin, Joe Wehmeyer, Eric Levin-Hatz, Mike Myers
    Masters of the Air; Part Three; The Regensburg-Schweinfurt Mission – Stephen Rosenbaum, Bruce Franklin, Xavier Matia Bernasconi, David Andrews, Neil Corbould
    The Penguin; Bliss – Johnny Han, Michelle Rose, Goran Pavles, Ed Bruce, Devin Maggio
    The Tattooist of Auschwitz; Pilot – Simon Giles, Alan Church, David Schneider, James Hattsmith

    OUTSTANDING VISUAL EFFECTS IN A REAL-TIME PROJECT
    [REDACTED] – Fabio Silva, Matthew Sherman, Caleb Essex, Bob Kopinsky
    Destiny 2: The Final Shape – Dave Samuel, Ben Fabric, Eric Greenlief, Glenn Gamble
    Star Wars Outlaws – Stephen Hawes, Lionel Le Dain, Benedikt Podlesnigg, Bogdan Draghici
    What If? An Immersive Story – Patrick N.P. Conran, Shereif Fattouh, Zain Homer, Jax Lee
    Until Dawn – Nicholas Chambers, Jack Hidde Glavimans, Alex Gabor

    OUTSTANDING VISUAL EFFECTS IN A COMMERCIAL
    YouTube TV NFL Sunday Ticket: The Magic of Sunday – Chris Bayol, Jeremy Brooks, Lane Jolly, Jacob Bergman
    Disney; Holidays 2024 – Adam Droy, Helen Tang, Christian Baker-Steele, David Fleet
    Virgin Media; Walrus Whizzer – Sebastian Caldwell, Ian Berry, Ben Cronin, Alex Grey
    Coca-Cola; The Heroes – Greg McKneally, Antonia Vlasto, Ryan Knowles, Fabrice Fiteni
    Six Kings Slam; Call of the Kings – Ryan Knowles, Joe Billington, Dean Robinson, George Savvas

    OUTSTANDING VISUAL EFFECTS IN A SPECIAL VENUE PROJECT
    D23; Real-Time Rocket – Evan Goldberg, Alyssa Finley, Jason Breneman, Alice Taylor
    The Goldau Landslide Experience – Roman Kaelin, Gianluca Ravioli, Florian Baumann
    MTV Video Music Awards; Slim Shady Live – Jo Plaete, Sara Mustafa, Cameron Jackson, Andries Courteaux
    Tokyo DisneySea; Peter Pan's Never Land Adventure – Michael Sean Foley, Kirk Bodyfelt, Darin Hollings, Bert Klein, Maya Vyas
    Paris Olympics Opening Ceremony; Run – Benjamin Le Ster, Gilles De Lusigman, Gerome Viavant, Romain Tinturier

    OUTSTANDING CHARACTER IN A PHOTOREAL FEATURE
    Better Man; Robbie Williams – Milton Ramirez, Andrea Merlo, Seoungseok Charlie Kim, Eteuati Tema
    Kingdom of the Planet of the Apes; Noa – Rachael Dunk, Andrei Coval, John Sore, Niels Peter Kaagaard
    Kingdom of the Planet of the Apes; Raka – Seoungseok Charlie Kim, Giorgio Lafratta, Tim Teramoto, Aidan Martin
    Mufasa: The Lion King; Taka – Klaus Skovbo, Valentina Rosselli, Eli De Koninck, Amelie Talarmain

    OUTSTANDING CHARACTER IN AN ANIMATED FEATURE
    Inside Out 2; Anxiety – Alexander Alvarado, Brianne Francisco, Amanda Wagner, Brenda Lin Zhang
    The Wild Robot; Roz – Fabio Lignini, Yukinori Inagaki, Owen Demers, Hyun Huh
    Thelma The Unicorn; Vic Diamond – Guillaume Arantes, Adrien Montero, Anne-Claire Leroux, Gaspard Roche
    Wallace & Gromit: Vengeance Most Fowl; Gromit – Jo Fenton, Alison Evans, Andy Symanowski, Emanuel Nevado

    OUTSTANDING CHARACTER IN AN EPISODE, COMMERCIAL, GAME CINEMATIC, OR REAL-TIME PROJECT
    Secret Level; Armored Core: Asset Management; Mech Pilot – Zsolt Vida, Péter Krucsai, Ágnes Vona, Enric Nebleza Paella
    Diablo IV: Vessel of Hatred; Neyrelle – Chris Bostjanick, James Ma, Yeon-Ho Lee, Atsushi Ikarashi
    Disney; Holidays 2024; Octopus – Alex Doyle, Philippe Moine, Lewis Pickston, Andrea Lacedelli
    Ronja the Robber's Daughter; Vildvittran the Queen Harpy – Nicklas Andersson, David Allan, Gustav Åhren, Niklas Wallén

    OUTSTANDING ENVIRONMENT IN A PHOTOREAL FEATURE
    Civil War; Washington, D.C. – Matthew Chandler, James Harmer, Robert Moore, Adrien Zeppieri
    Dune: Part Two; The Arrakeen Basin – Daniel Rhein, Daniel Anton Fernandez, Marc James Austin, Christopher Anciaume
    Gladiator II; Rome – Oliver Kane, Stefano Farci, John Seru, Frederick Vallee
    Wicked; The Emerald City – Alan Lam, Steve Bevins, Deepali Negi, Miguel Sanchez López-Ruz

    OUTSTANDING ENVIRONMENT IN AN ANIMATED FEATURE
    Kung Fu Panda 4; Juniper City – Benjamin Lippert, Ryan Prestridge, Sarah Vawter, Peter Maynez
    The Wild Robot; The Forest – John Wake, He Jung Park, Woojin Choi, Shane Glading
    Transformers One; Iacon City – Alex Popescu, Geoffrey Lebreton, Ryan Kirby, Hussein Nabeel
    Wallace & Gromit: Vengeance Most Fowl; Aqueduct – Matt Perry, Dave Alex Riddett, Matt Sanders, Howard Jones

    OUTSTANDING ENVIRONMENT IN AN EPISODE, COMMERCIAL, GAME CINEMATIC, OR REAL-TIME PROJECT
    Dune: Prophecy; Pilot; The Imperial Palace – Scott Coates, Sam Besseau, Vincent l'Heureux, Lourenco Abreu
    Dune: Prophecy; Two Wolves; Zimia Spaceport – Nils Weisbrod, David Anastacio, Rene Borst, Ruben Valente
    Shōgun; Osaka – Manuel Martinez, Phil Hannigan, Keith Malone, Francesco Corvino
    The Lord of the Rings: The Rings of Power; Season 2; Doomed to Die; Eregion – Yordan Petrov, Bertrand Cabrol, Lea Desrozier, Karan Dhandha

    OUTSTANDING CG CINEMATOGRAPHY
    Better Man – Blair Burke, Shweta Bhatnagar, Tim Walker, Craig Young
    Dune: Part Two; Arrakis – Greig Fraser, Xin Steve Guo, Sandra Murta, Ben Wiggs
    House of the Dragon; Season 2; The Red Dragon and the Gold; Battle at Rook's Rest – Matt Perrin, James Thompson, Jacob Doehner, P.J. Dillon
    Kingdom of the Planet of the Apes; Egg Climb – Dennis Yoo, Angelo Perrotta, Samantha Erickstad, Miae Kang

    OUTSTANDING MODEL IN A PHOTOREAL OR ANIMATED PROJECT
    Alien: Romulus; Renaissance Space Station – Waldemar Bartkowiak, Trevor Wide, Matt Middleton, Ben Shearman
    Deadpool & Wolverine; Ant-Man Arena – Carlos Flores Gomez, Corinne Dy, Chris Byrnes, Gerald Blaise
    Dune: Part Two; The Harkonnen Harvester – Andrew Hodgson, Timothy Russell, Erik Lehmann, Louie Cho
    Gladiator II; The Colosseum – Oliver Kane, Marnie Pitts, Charlotte Fargier, Laurie Priest

    OUTSTANDING EFFECTS SIMULATIONS IN A PHOTOREAL FEATURE
    Dune: Part Two; Atomic Explosions and Wormriding – Nicholas Papworth, Sandy la Tourelle, Lisa Nolan, Christopher Phillips
    Kingdom of the Planet of the Apes; Burning Village, Rapids and Floods – Alex Nowotny, Claude Schitter, Frédéric Valleur, Kevin Kelm
    Twisters – Matthew Hanger, Joakim Arnesson, Laurent Kermel, Zheng Yong Oh
    Venom: The Last Dance; Water, Fire & Symbiote Effects – Xavi Martin Ramirez, Oscar Dahlen, Hedi Namar, Yuri Yang

    OUTSTANDING EFFECTS SIMULATIONS IN AN ANIMATED FEATURE
    Kung Fu Panda 4 – Jinguang Huang, Zhao Wang, Hamid Shahsavari, Joshua LaBrot
    Moana 2 – Zoran Stojanoski, Jesse Erickson, Shamintha Kalamba Arachchi, Erin V. Ramos
    The Wild Robot – Derek Cheung, Michael Losure, David Chow, Nyoung Kim
    Ultraman: Rising – Goncalo Cabaca, Zheng Yong Oh, Nicholas Yoon Joo Kuang, Praveen Boppana

    OUTSTANDING EFFECTS SIMULATIONS IN AN EPISODE, COMMERCIAL, GAME CINEMATIC, OR REAL-TIME PROJECT
    Avatar: The Last Airbender; Legends; Koizilla – Ioan Boieriu, David Stopford, Per Balay, Saysana Rintharamy
    Shōgun; Broken to the Fist; Landslide – Dominic Tiedeken, Heinrich Löwe, Charles Guerton, Timmy Lundin
    Star Wars: Skeleton Crew; Pilot; Spaceship Hillside Takeoff – Travis Harkleroad, Xiaolong Peng, Marcella Brown, Mickael Riciotti
    The Lord of the Rings: The Rings of Power; Season 2; Shadow and Flame; Balrog Fire and Collapsing Cliff – Koenraad Hofmeester, Miguel Perez Senent, Miguel Santana Da Silva, Billy Copley
    Three Body Problem; Judgement Day – Yves D'Incau, Gavin Templer, Martin Chabannes, Eloi Andaluz Full

    OUTSTANDING COMPOSITING & LIGHTING IN A FEATURE
    Better Man – Mark McNicholl, Gordon Spencer de Haseth, Eva Snyder, Markus Reithoffer
    Dune: Part Two; Wormriding, Geidi Prime, and the Final Battle – Christopher Rickard, Francesco Dell'Anna, Paul Chapman, Ryan Wing
    Kingdom of the Planet of the Apes – Joerg Bruemmer, Zachary Brake, Tim Walker, Kaustubh A. Patil
    The Wild Robot – Sondra L. Verlander, Baptiste Van Opstal, Eszter Offertaler, Austin Casale

    OUTSTANDING COMPOSITING & LIGHTING IN AN EPISODE
    Shōgun; Broken to the Fist; Landslide – Benjamin Bernon, Douglas Roshamn, Victor Kirsch, Charlie Raud
    Star Wars: Skeleton Crew; Episode 6; Jaws – Rich Grande, Tomas Lefebvre, Ian Dodman, Rey Reynolds
    The Boys; Season 4; Life Among the Septics – Tristan Zerafa, Mike Stadnyckyj, Toshi Kosaka, Rajeev BR
    The Penguin; After Hours – Jonas Stuckenbrock, Karen Chang, Eugene Bondar, Miky Girón

    OUTSTANDING COMPOSITING & LIGHTING IN A COMMERCIAL
    Virgin Media; Walrus Whizzer – Sebastian Caldwell, Alex Grey, Kanishk Chouhan, Shubham Mehta
    Coca-Cola; The Heroes – Ryan Knowles, Alex Gabucci, Jack Powell, Dan Yarcigi
    Corcept; Marionette – Yongchan Kim, Arman Matin, Yoon Bae, Rajesh Kaushik
    Disney; Holidays 2024 – Christian Baker-Steele, Luke Warpus, Pritesh Kotian, Jack Harris

    OUTSTANDING SPECIAL (PRACTICAL) EFFECTS IN A PHOTOREAL PROJECT
    Blitz – Hayley Williams, David Eves, Alex Freeman, David Watson
    Constellation – Martin Goeres, Johara Raukamp, Lion David Bogus, Leon Mark
    The Penguin; Safe Guns – Devin Maggio, Johnny Han, Cory Candrilli, Alexandre Prodhomme

    EMERGING TECHNOLOGY AWARD
    Dune: Part Two; Nuke CopyCat – Ben Kent, Guillaume Gales, Mairead Grogan, Johanna Barbier
    Furiosa: A Mad Max Saga; Artist-driven Machine Learning Character – John Bastian, Ben Ward, Thomas Rowntree, Robert Beveridge
    Here; Neural Performance Toolset – Jo Plaete, Oriel Frigo, Tomas Koutsky, Matteo Oliviero Dancy
    Mufasa: The Lion King; Real-Time Interactive Filmmaking, From Stage To Post – Callum James, James Hood, Lloyd Bishop, Bruno Pedrinha
    The Penguin; Phase Synced Flash-Gun System – Johnny Han, Jefferson Han, Joseph Menafra, Michael Pynn

    OUTSTANDING VISUAL EFFECTS IN A STUDENT PROJECT
    Dawn (entry from ESMA École Supérieure des Métiers Artistiques) – Noah Mercier, Apolline Royer, Lorys Stora, Marie Pradeilles
    Student Accomplice (entry from Brigham Young University) – Spencer Blanchard, Lisa Bird, Anson Savage, Kiara Spencer
    Pittura (entry from ARTFX Schools of Digital Arts) – Lauriol Adam, Lassre Titouan, Vivenza Rémi, Marre Hellos
    Courage (entry from Supinfocom Rubika) – Salomé Cognon, Margot Jacquet, Nathan Baudry, Lise Delcroix

    The post Here are all the nominees for the 23rd Annual VES Awards appeared first on befores & afters.
  • ILM breaks down its VFX for Alien: Romulus
    beforesandafters.com
    Go behind the scenes. The post ILM breaks down its VFX for Alien: Romulus appeared first on befores & afters.
  • Behind the scenes of The Company We Keep
    beforesandafters.com
    Goodbye Kansas breaks down their Secret Level episode called "The Company We Keep." The post Behind the scenes of The Company We Keep appeared first on befores & afters.
  • How the Academy Software Foundation works
    beforesandafters.com
    David Morin from the Academy Software Foundation on the VFX software packages the ASWF encompasses. Today on the befores & afters podcast, we're chatting to David Morin, who is the Executive Director at the Academy Software Foundation. I wanted to talk to David to do almost a back to basics about the Academy Software Foundation: how it started, what it's all about, and what software projects it now encompasses. Well, in our conversation, David lays all that out. We also talk about, of course, all the big projects the ASWF now covers, including OpenEXR, OpenVDB and MaterialX for instance, plus new projects that are part of the stable. If you've heard a little about the Foundation, and want to know more, I think this is a great listen. This episode is sponsored by Suite Studios. Ready to accelerate your creative workflow? Suite's cloud storage is designed for teams to store, share, and edit media in real-time from anywhere. The best part? With Suite, you can stream your full-resolution files directly from the cloud without the need to download or sync media locally before working. Learn more about why the best creative teams are switching to Suite at suitestudios.io. The post How the Academy Software Foundation works appeared first on befores & afters.
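    For readers who haven't touched the projects mentioned above, here is a small taste of one of them: reading basic metadata and a channel from an EXR file with the OpenEXR Python bindings and their companion Imath module. The file path is a placeholder, and this is only a minimal illustration of the library, not anything specific to the podcast.

```python
# Minimal example of inspecting an EXR file with the OpenEXR Python bindings.
# Requires the OpenEXR and Imath modules plus numpy; the file path is a placeholder.
import OpenEXR
import Imath
import numpy as np

exr = OpenEXR.InputFile("comp_v001.exr")  # placeholder path
header = exr.header()

# The data window gives the pixel resolution of the stored image.
dw = header["dataWindow"]
width = dw.max.x - dw.min.x + 1
height = dw.max.y - dw.min.y + 1
print("Resolution:", width, "x", height)
print("Channels:", sorted(header["channels"].keys()))

# Read one channel as 32-bit floats into a numpy array.
pixel_type = Imath.PixelType(Imath.PixelType.FLOAT)
red = np.frombuffer(exr.channel("R", pixel_type), dtype=np.float32).reshape(height, width)
print("R channel mean:", red.mean())
```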
  • If you want a painterly bush, paint the bush
    beforesandafters.com
    How the painterly style of The Wild Robot was realized. An excerpt from issue #25 of befores & afters magazine.

    For DreamWorks Animation's The Wild Robot, based on the book by Peter Brown, writer/director Chris Sanders sought to bring a painterly aspect to the 3D animated film. The studio capitalized on stylized workflows it had developed previously on Puss in Boots: The Last Wish and The Bad Guys to take things even further for The Wild Robot, which follows the service robot Roz (Lupita Nyong'o), who is shipwrecked on a wildlife-filled island and eventually becomes the adoptive mother of an orphaned goose, Brightbill (Kit Connor).

    In this excerpt, visual effects supervisor Jeff Budsberg gets into the weeds with befores & afters about exactly how the painterly aspects of the film were made, the tools developed at DreamWorks Animation, and, importantly, what this process added to the storytelling.

    b&a: Jeff, you really elevated things again with the stylization here, but it's actually even a different approach to what Puss in Boots: The Last Wish and The Bad Guys did. What was your brief?

    Jeff Budsberg: I had come off of The Bad Guys as Head of Look and I was talking to Chris Sanders and Jeff Hermann and, right off the bat, Chris was interested in this space. Computer graphics has been in this pursuit of realism, which has been amazing, and there's been so much innovation. But there's something that we've lost along the way, specifically in animation. If you go back to those '40s and '50s animations like Snow White, Bambi, Sleeping Beauty, there's something endearing about feeling the artist's hand. I was just watching 101 Dalmatians with my kids this weekend and just feeling the stroke of the drawn line, the imperfections there. There's something that's magical or just endearing about that. You feel the craft in there. And it's not just that. It's also being deliberate about where there is information and where you're guiding the eye. There's something about that that we really were interested in.

    I think the other part, when I talk to directors and producers about stylized CG films, is that you don't just want to put Spider-Verse or The Bad Guys onto another film. It doesn't work. It doesn't make sense. You have to find the style that makes sense for that film, right? And that's one thing that Chris and I talked about a lot at the beginning of the movie: we wanted to deposit Roz on this island, like a fish out of water. She doesn't belong there. So, on the surface, obviously she doesn't belong there, but we didn't want her to belong there aesthetically. She's this precise, futuristic machine in a very loose, painterly, deconstructed world, and immediately there's a juxtaposition there that is a conflict. And so you feel that she shouldn't belong there, but she really doesn't feel like she belongs there. And through the course of the film, she gets beat up and dirty and banged up and all that. So there's a progression of her wear, but we wanted there to be a very subtle progression of her aesthetic as well.

    And that's what was really exciting, because it comes back to serving the story, in that she slowly starts to make an impact on the island with this relentless pursuit of kindness, trying to help everyone. And, at the end of the day, the island is actually impacting her as well. So her aesthetic is changing sequence by sequence, ever so slightly. And if we did it successfully, you're not paying attention to that. But then at the end of the movie, when Roz comes into contact with Vontra and the other robots, you're like, "Holy crap, she now fits in the world and they do not." Now it's jarring for them, because they don't belong there.

    That's what was really exciting, because we were able to weave in this aesthetic that supported the storytelling in a really novel way. We evolve other things as well. Roz's locomotion changes from something that is more rigid or efficient, robotic-y, for lack of better nomenclature, into something that's more fluidic or animalistic, like S-curves with her arms. She does less peacocking; she uses less of her futuristic tools as she moves through the film. She's doing all these crazy light shows and using all these things at the beginning of the movie. But then as she progresses, she's more restrained, a little bit more subtle in her mannerisms. So there's all these subtle cues that the audience might not pick up on. It's almost imperceptible, but the amalgamation of them, you feel her progression through the film. I think that's what was really rewarding aesthetically: you could use that, the style, to drive the story.

    b&a: What were the tools that DreamWorks already had in the bag to do this, but then where did you take things further? Doodle is a tool, for example, that enables you to do a lot of 2D animated elements as part of the 3D. Where do you take it here?

    Jeff Budsberg: We were using Doodle to some extent in visual effects. We had this idea, coming out of The Bad Guys and Puss in Boots: The Last Wish, that you wouldn't ever make a realistic bush and then filter it to make it look painterly. It doesn't make sense, it's not efficient and it doesn't give you a very pleasing result. So if you want a painterly bush, paint the bush, right? The takeaway there is, find the best place to solve the problem, where every department along the way had to adjust their workflows to make the final image. So, on the film, modeling is not meticulously constructing a network of branches and leaves, they are drawing the bush in 3D with Doodle. It lets them think about shape language in strokes, splatters, and brushstrokes. The leaves don't need to connect, they could be splatters of flowers and color.

    Similarly for look dev and surfacing, think about how an artist would paint volumetric shading. What are the non-physical shaders that you need so that you're adding high-frequency texture in the key light, but on the shadow side, removing a lot of that superfluous information? Take for example bark on a tree; we would actually paint the lit side of the tree with a different texture than the shadow side of the tree. How do you build that into the renderer where you could swap textures and detail on the fly based on the light conditions? That's what we had to solve.

    Then there's feathers and fur. You want the richness and sophistication of real fur and feathers because you want to be able to see the fur moving in a sophisticated way. You want to be able to feel characters running their hands through fur, fluff their feathers, or have wind in there. You want to have these micro details that are amazing with fur and feathers. But we don't want to see every fiber of the hair. That's not how one would paint it. So how do you reveal detail very surgically in specific areas? And how do you do that in a way that might not correspond to the geometry at all? Perhaps you want to add splatter or spongy texture in the key light of the fur, and you're like, "It doesn't even make sense." So your geometry might be fur, but we're using brushstrokes to inherit the render properties that would drive the derivatives, the normals, the opacity. So, we use all sorts of different geometry to manipulate the light response driving non-physical shaders. Similar to a painter's approach, our shaders start with a really rough underlayer of loose detail and then you'll add textural details on top of that as different accents, and those are only revealed through specific lights.

    b&a: I was talking to Chris Sanders about how actually doing it that way, it's more stylized looking, but in some ways, it's more believable.

    Jeff Budsberg: I think it comes back to what I mentioned before: a handcrafted quality allows the audience a way to enter the film and fill it in their brain. It gives you a more immersive, imaginative experience compared to something where every leaf was detailed out, or every piece of grass. It's almost like your brain takes a step back. You're like, "Oh, this is so much visual information, I need to take a step back." But I think that's why we all go to the gallery to view paintings; you get to feel and experience a world through the glasses of the artist. You get to see the vision through their eyes and experience it with them. And I think it brings you something that's a little bit more of a visceral experience. And maybe that's what Chris is getting at when it feels more real, because I think it feels just more inviting, a little bit more endearing. It feels handcrafted, it feels well-thought-out.

    b&a: How were you dressing the set, especially the island, on this film?

    Jeff Budsberg: We used a tool called Sprinkles, which is an art-directed set dressing tool. I think one of the key developments for dressing the world is this integration between set dressing and modeling, and being able to design bespoke plants on the fly and draw them. We're living in this interesting space between 2D and 3D. So, you could be in 3D dressing plants, but then in 2D, just drawing the plant, placing them, and you're living in this world where you bridge between illustrator and 3D artist. For Doodle, you're building these animation rigs on the fly as well. So all these plants could be articulated, they could all be interacting with the character and blow in the wind.

    b&a: I think that's what maybe people who are not familiar with the 3D process probably don't realize. They're like, "Well, if it's so 2D, why not just draw it?"

    Jeff Budsberg: Exactly.

    b&a: But the characters brush past it, the camera moves.

    Jeff Budsberg: You step on the plants, you brush past them. This is one thing we talked about a lot: we can give the audience something that you cannot do in 2D. You can do those dynamic camera moves. You can move through the space. You can do wind or deformation of the plants. You can do art-directed depth of field, like rack focuses. There are things that you can do in the 3D world that you cannot do in the 2D world. And I think that allows you to move through the space in a way that is immersive and really inviting. But on the 2D side, there's something endearing about the handcrafted-ness. So, if we can live in this space between the two, I think that gives you something novel and really exciting. And I think that's what people grab onto, like, "It looks like a painting, but it's moving. What is going on? This is crazy."

    b&a: When you mentioned a moving painting there, that is how I felt sometimes watching the film.

    Jeff Budsberg: Yes. The other key developments we made were adding brushed partial-transparency everywhere. That's the other very typical thing of a CG render: every edge is very hard and crisp. In a ray tracer, it's very expensive, or even just onerous, to use transparency, because we would want to do some processing in compositing, but a lot of compositing operations require AOVs like depth information or positional information, normals. The problem with traditional renders is you only have one sample of the position at that pixel. But what if you have transparency? There are actually multiple objects at that pixel. You might rely on deep images, but deep images have significant problems because they're expensive to render and they make your compositing very slow. You also lose pixel filtering and a lot of other features that you have in your traditional AOVs.

    So, we actually created an extension to the Cryptomatte data format where it's a layered approach to your data channels. Every pixel in a Cryptomatte has an ID of what asset it is and the coverage information of that pixel. But then you store multiple layers, so you have the sum of all of the assets at that pixel. Well, we decided, why don't you do that for position? Why don't you do that for the normal? Why don't you do that for other data channels? Because what that allows us to do is use the really novel smart filtering operations from The Bad Guys or Puss in Boots, in The Wild Robot, with transparent assets.

    We were able to use really sophisticated filtering operations, but with layered transparency, which would normally be very difficult to do. We weren't really using a whole lot of transparent assets on The Bad Guys and Puss in Boots. Having feather transparency and broken-up edges really added to that believability that it felt like a brush had been applied and that texture was running over the page.

    b&a: What other things were you doing to make it feel more painterly?

    Jeff Budsberg: We had a scene sprites tool, where the lighters could decide that they wanted to perturb the image even more. Everyone's reaching across disciplines in novel ways. The modelers are thinking about drawing plants. Surfacers are set dressing, could also draw the plants, and are thinking about lighting. Character effects, they usually do the groom, but the grooms influence the final aesthetics. Everyone's thinking about the final image. The lighters, they're authoring new assets with these sprites that are adding textural detail into the scene. But then they could use those in compositing to manipulate the image in a way that's still coherent spatially and temporally. And so they could add splatters of paint in the environment to help break up edges. We call that "badger brushing", where you smear edges almost with a bristle brush.

    They could take those assets, which are already pretty stylized, but they can push it even further and be like, "Okay, well this shot needs some sort of painterly depth. I'm going to layer in a couple of different textures of these scene sprites in the scene." They exist in 3D space so that the camera can move through the space and the characters could interact with the space, and we could use those deep Cryptomatte-based filtering operations to push and smear the frame around like a painter would by painting wet-on-wet.

    b&a: There's a lot of birds in the film, what did you have to do in terms of feather development or even crowds here?

    Jeff Budsberg: There was a lot of novel feather development in the rig. We actually developed new approaches for scapular feathers in the back. There's a really fascinating way when birds fold their wings where this scapular, you can think of it almost like a cape, where they fold their wings out and you're like, "I don't even know where the wing went." It folds in a really novel way and so we wanted to build that into the rigs. And same thing with the pocket. So trying to put the wings in the pocket, which is really fascinating how they nuzzle them in and the wing disappears.

    In terms of other novel developments in the rigging for birds, there's this flap of skin called the propatagium. We were trying to simulate how that deforms and stretches as the birds are extending their wings. Those are just a handful of the new developments for the bird rigs. And then for crowds, obviously it's scaling that to thousands and thousands of birds. So, you have your fully fleshed out rig, then you have multiple simplified rigs along the way.

    But that wasn't necessarily the largest challenge. The largest challenge is, how do you simplify the geometry but then still make it feel painterly? You want to start to remove detail the same way you would in a painting; the birds that are further away almost look like blobs of paint, right? So you're trying to remove a lot of that high-frequency information. So, it's the same thing where we're using a lot of those tools I talked about, those scribbles and smears, and processing the geometry in a way that's temporally stable and also respects how far these things are away from camera.

    That required a lot of investigation of, how do we achieve what we love about the Studio Ghibli films and Miyazaki's world, but we want to live in this world of 2D and 3D? You wouldn't just use a 2D tool and just paint everything, because that doesn't allow you the sophistication of a 3D world. So, Doodle allows us to take the 2D stroke data and project it in the 3D space, allowing you to spawn new simulations off of those. You can draw an explosion, which could be an emitter into a smoke simulation. And then that smoke simulation could affect brushstrokes through it, and then you could smear the brushstrokes in compositing. So you have this really interesting blend of 2D, 3D, simulation, compositing, and you're going back and forth almost like a painter would. You're building up the image in a way that you're trying to use the best of 2D and 3D.

    If we need something like an ocean or a lake surface, we could use some sort of FFT or procedural way to deform the ocean or the water, but you don't want to see every micro ripple. So, we thought, we could process the geometry so you don't see all those high-frequency ripples, but then draw in some splashes. It's really interesting, as an effects artist, to try to push the image in a way that does not need to be physically accurate, but is physically believable. It is all the same motif of editing the image through the artist's lens and trying to make it feel like it was handcrafted.

    Read the full coverage of The Wild Robot, and several other animated features, in issue #25 of befores & afters magazine. The post If you want a painterly bush, paint the bush appeared first on befores & afters.
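    Budsberg's description above of layered Cryptomatte-style data channels, where each pixel carries several (asset ID, coverage) samples plus matching position and normal samples, can be made concrete with a minimal sketch. The Python below is a toy illustration of that general idea, not DreamWorks' actual extension or file format; the class and function names are invented for this example.

```python
# Toy illustration of layered per-pixel ID/coverage samples with matching data channels,
# in the spirit of the layered Cryptomatte-style approach described in the excerpt.
# This is NOT DreamWorks' actual format; names and structure are hypothetical.
from dataclasses import dataclass

@dataclass
class PixelSample:
    asset_id: int          # which asset contributed to this pixel
    coverage: float        # fraction of the pixel covered by that asset
    position: tuple        # world-space position of the sample (x, y, z)
    normal: tuple          # shading normal of the sample (nx, ny, nz)

def coverage_matte(samples, asset_id):
    """Total coverage of one asset at a pixel, summed across transparent layers."""
    return sum(s.coverage for s in samples if s.asset_id == asset_id)

def weighted_position(samples):
    """Coverage-weighted position, the kind of value a smear/paint-style filter could use."""
    total = sum(s.coverage for s in samples)
    if total == 0.0:
        return (0.0, 0.0, 0.0)
    return tuple(
        sum(s.coverage * s.position[i] for s in samples) / total for i in range(3)
    )

# One pixel with two semi-transparent contributors (e.g. overlapping feathers).
pixel = [
    PixelSample(asset_id=7, coverage=0.35, position=(1.0, 2.0, 5.0), normal=(0, 1, 0)),
    PixelSample(asset_id=3, coverage=0.50, position=(1.1, 2.2, 9.0), normal=(0, 0, 1)),
]
print(coverage_matte(pixel, asset_id=7))   # 0.35
print(weighted_position(pixel))
```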
  • We called it hands through face
    beforesandafters.com
Behind that stunning visual effect in Deadpool & Wolverine where Cassandra moves her hands through Paradox, Wolverine and Deadpool's faces.

Few icky visual effects shots have resonated so widely as the moments in which Cassandra Nova (Emma Corrin) uses her powers to run her hands literally through the faces of other characters in director Shawn Levy's Deadpool & Wolverine. These include Paradox (Matthew Macfadyen), Wolverine (Hugh Jackman) and Deadpool (Ryan Reynolds).

For befores & afters, production visual effects supervisor Swen Gillberg breaks down, in his own words, exactly how those shots were achieved, including reference gathering, concepts and previs, shooting, and the final execution. As a special bonus, he also describes the moment Cassandra rips off Johnny Storm's (Chris Evans) skin.

From the comics: Early in prep, I came across these really crazy images in the comics of Cassandra putting her hands through people's faces. I'm like, Wow, that's wild. There was one specific image we saw of her hand going up through Paradox's face in the comics. We used that as a style guide. Our goal was to create a photoreal version of the comics that was really true to the comics. Obviously comic books and real life don't look alike, so it's an interpretation, but that was the goal. We also did a bunch of early animation tests just to try to figure it out. And early on, along with the comic book reference, we scoured the internet for all the gross reference that you can imagine.

Getting the performance: Having worked on Thanos and having Josh Brolin there as Thanos meant that one of my mantras was to always get the actors to act together to get their performances, so they can look each other in the eyes. And so, we had to come up with a way to shoot these Cassandra scenes so that these actors are together and not in separate rooms. There was early talk about making a buck that Emma Corrin would put her hands through. But it was more important to me that Matthew Macfadyen and Emma could look each other in the eyes. So, we had to figure out a way to shoot together.

Now, one thing to note is that that whole Cassandra-through-Paradox's-face scene was probably three or four times shorter as originally scripted. Through reshoots it lengthened considerably, and we added a ton of dialogue to it, so it just needed to be ready for changes. We needed a fluid shooting methodology.

I worked with Emma's stunt double first to try to come up with a way for them to be together. Basically, it ended up being, just put them together. I originally had made these prosthetics, like a fake arm that Emma could hold and push into Matthew's face, but I abandoned that. What we ended up doing was quite simple in that we just had her put her arm off to the side, trying not to block anything to camera. I'd have her put her hand on his hip or on his shoulder, and then we would do clean plates and digitally remove her arm and replace it from the shoulder down. That meant it was a digital arm up through his face.

Building Paradox: ILM did the hands through Deadpool's face in Cassandra's lair. Framestore did the shots through Wolverine's forehead and through Paradox. In order to get the effect working well, I think we spent six months on the asset of Paradox. We scanned Matthew at Clear Angle Studios and then Framestore took that and made an incredible asset that matched Matthew's principal photography look exactly, and it had to hold up for extreme close-ups.

Myself and visual effects supervisor Matt Twyford at Framestore wanted to be able to leverage Matthew's original performance. So the double needed to split directly with the plate; it needed to be an exact match. Indeed, in post, we used as much of Matthew's plate photography as possible. We replaced most of it, but if we could keep his hair or one ear, we would, so the match had to be just right.

Incorporating reshoots: Interestingly, when we did the test screenings, we found that with the Cassandra character, the audience needed to relate to her more. So we gave her a bunch of additional lines, and that became her and Matthew's scene. Because our timeframe was tight in post for reshoots, we really wanted to keep the principal photography of Matthew. So we reshot all of Emma's new dialogue with both of them, but we shot it in such a fashion that we could reuse Matthew's original plate photography. So, most of that scene is original plate photography of Matthew and new photography of Cassandra. We used poor man's motion control to match each frame, split her out, put her into the original plate photography, and married the two.

Proof of concept: We storyboarded the whole thing, then we previs'd the whole thing. Previs was a great learning experience. We had to figure out which side of him she was on and how she would drag him, and whether her hands would go in fingers up, fingers down, or to the side. We figured out a lot of the mechanics in previs, and then we postvis'd the whole thing after we shot it. During principal photography, we did gross keyframe animation and it was just gray shaded. We would do a picture-in-picture of the plate, and then we'd do a gray shaded version just showing where her hands were roughly going. I would do notes myself and Shawn would do notes, and we kept honing that in.

How the shots were done: The steps towards final started with blocking animation. Then we would do one more pass of more refined animation before final keyframe animation and locking it off. Then we'd do a simulation on top of that, and that would give us all of the creases and a next level of detail of hand interaction. There was then a creature pass, which was a detailed modeling pass where we get the intersection of her fingers with his skin, and then we did a final pass on skin thickness and intersection.

I really wanted to make the skin very thin in order to see the detail of her knuckles under the skin and her cuticles over her nails. We needed to keep her hands below the skin, but make it as thin as possible so we could see the details underneath. Very late in the game, we added in that shot of her hands going under the eyes. Originally the hands went over the eyes, and we put them under the eyes to make the eyes bulge out, which I think was a great addition.

What we called the effect: We were not super-elegant; we called it hands through face. We did call it Cassandra's Magic for a while, but I always found that confusing with her other magic, and there were a bunch of other magic powers that didn't make it into the movie. We were trying to keep her grounded, along with really trying to creep the audience out, and it was our goal for the visual effects not to take you out of the storytelling. It's my favorite visual effect in the movie, and we put a lot of effort into those assets to make it really, really, really solid.

That other effect: There was also the moment she rips Johnny Storm's body skin off. ILM did that shot. For that, we had to do some terrible internet searches that you never actually want to look at. We storyboarded it first, and we used that as a guide, then we searched the internet, and then we really took creative license and dialed in the look of the skinless Johnny. We removed his stomach muscles so we could see inside him. We took creative license to get this iconic image that we had in our minds, where we wanted to see his intestines, which you technically wouldn't see if you took the skin off. We also played with different amounts of blood coming off.

The eye blinks were in the original storyboards. Originally, it was thought they were too campy, but I kept putting them in and putting them in, and we finally all just laughed our butts off and it stayed in. I think the trickiest part was getting the simulation of the body falling right, not just falling straight onto the ground. We did a simulation and then we ended up handcrafting it: put the liver here and then have it tumble over, etc.

When we were shooting that, Chris Evans had such a blast. He's always taken so seriously because he's Captain America, so he loved getting his ass kicked in this one. When Cassandra rips his skin off, he would duck out and run away. And I was standing right here, and Ryan was riffing on the lines, as he always does, and he came up with the 'Not-my-favorite Chris' line. And Chris couldn't contain himself. He just was crying, laughing.

The post We called it hands through face appeared first on befores & afters.
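The 'hands kept just below a very thin skin' note above can be pictured with a toy sketch. This is purely illustrative and not how ILM or Framestore built the effect (that involved detailed creature and simulation passes); it only shows a skin surface forced to sit a constant, thin offset above whatever shape pushes it from beneath:

```python
import numpy as np

# Toy 1D cross-section: a resting skin line and a hand shape pushing up from below.
x = np.linspace(-1.0, 1.0, 200)
rest_skin = np.zeros_like(x)                     # flat face surface at rest
hand = 0.25 * np.exp(-(x / 0.3) ** 2) - 0.05     # bump standing in for knuckles underneath

skin_thickness = 0.02  # keep the skin a constant, very thin layer above the hand

# Wherever the hand rises above the rest surface, the skin is pushed out by the
# hand height plus the thin skin layer; elsewhere it stays at rest.
displaced_skin = np.maximum(rest_skin, hand + skin_thickness)
```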
  • Stop-motion! How Mama Crab was made in Skeleton Crew
    beforesandafters.com
A new behind-the-scenes featurette breaks down the work by Tippett Studio.

The post Stop-motion! How Mama Crab was made in Skeleton Crew appeared first on befores & afters.
  • A Major Leap for Professional Animation Editing
    beforesandafters.com
iClone 8.52 delivers a highly anticipated update for professional animation editing, packed with groundbreaking features driven by user feedback. Despite its small numerical step from 8.51, this release introduces transformative tools to elevate animation workflows. Check out the iClone 8.52 features video for more!

Overview of iClone 8.52 new features

AccuPOSE: Revolutionizing Natural 3D Posing

AccuPOSE is an AI-powered innovation utilizing deep learning models trained on ActorCore's extensive motion database. AccuPOSE understands human movement across countless scenarios, enabling users to create highly natural poses with minimal effort. Beyond hand-key animation, this technology plays a vital role in connecting motion clips, refining motion layers, and dramatically simplifying mocap data cleanup.

Control Full Body Movement: Achieve full-body transformations from minimal inputs. AccuPOSE's real-time body pose estimation guarantees natural postures during editing.

Pose-Driven Gestures: Arm adjustments naturally drive wrist movements and hand gestures, generating authentic human behaviors with minimal effort.

>> Check out the demo on the product page

Fully Controllable AI Guidance

To work in harmony with AI technology, controllability is a huge factor. The real brilliance of AccuPOSE lies in its unmatched flexibility and precision. You have the power to decide which parts of the body the AI adjusts and which parts remain untouched. This allows you to fine-tune every movement to your exact needs.

Precision & Freedom: By applying Transformation and Rotation constraints to selected joints, designers can control AI pose suggestions with precision and flexibility.

Constraints & Lock: Utilize T/R Constraints to maintain manual control flexibility. Use the Lock feature to securely fix the end effectors in place.

AI Suggestion: You can specify which part of the body the AI should provide suggestions for, while preserving the desired sections by using locks and move/rotate constraints.

Mirror: The Mirror feature ensures that AI-generated poses for hands or legs maintain symmetry with mirrored postures during any adjustments.

>> Check out the demo on the product page

Seamless Integration with Existing Works

Getting started with AccuPOSE is straightforward. Simply open the tool, select an AI behavior model, and transform any pose effortlessly with the guidance of natural posing techniques.

Works with Normal Poses: Transform existing poses into AccuPOSE poses with selected behaviors, and experience the precision of natural posing with customizable IK constraints.

Clip Layer Editing: By incorporating AI-assisted layer keys, users can effectively enhance motion velocity in selected motion clips.

Keyframing from Library: Easily add dynamic poses from the AccuPOSE library to arrange keyframes on the timeline, optionally filling in custom in-betweens and curves for smooth transitions.

>> Check out the demo on the product page

Accelerates All Key Animation Workflow

Curious about how AI-assisted posing can enhance your workflow? It's incredibly adaptable and can be applied wherever setting keyframes is needed, making it a valuable tool across various creative tasks.

Handkey Animation: Driven by deep-learned human motions, the posing suggestions enhance the efficiency of quality keyframe posing and support the creation of compelling animation styles.

Connecting Clips: The intelligent posing feature simplifies the process of filling in missing poses between motion clips, creating more natural transitions.

Mocap Correction: The natural AI posing capability can effectively solve mocap challenges such as shoulder stiffness and motion distortion, and even enhance animations with realistic hand gestures.

Curve & Trajectory: iClone's motion curve and motion trail functions empower animators to refine keyframe timing and spatial trajectories, ensuring high-quality animation outcomes.

>> Check out the demo on the product page

Download AccuPOSE CORE Free and Subscribe to INFINITY for All

3D posing with the assistance of AI might sound exciting in itself, but AccuPOSE is much more than that. It comes with a big library of AI-trained models that answer diverse scenario demands. The same control applied to a selected joint can yield entirely different results depending on the chosen AccuPOSE model, giving users absolute control over the character's AI reactions.

AccuPOSE is a free plugin for iClone 8.52. All users will have access to the CORE Library, which supports fundamental human body movements like stand, sit, walk, kneel, lie, prone, etc. For those seeking advanced features, the INFINITY expansion offers an extensive library of over 1,000 AI-trained poses across 41 genres, along with continuous updates.

Download & subscribe

Elevates Motion Editing to Pro Level

The iClone 8.52 release brings four highly acclaimed professional animation features, voted by animators as essential to their workflows. These tools are meticulously crafted to serve as the foundation of daily productivity.

Motion Trail: The motion trail feature enables professional animators to visualize trajectories for selected joints. It supports real-time editing of keyframes directly on the trail or within the curve editor, providing an efficient approach to managing motion paths and animation timing. Additionally, users can adjust the path range and customize the trail's color to improve control and visualization.

>> Check out the demo on the product page

Non-Destructive Editing: Users can now break motion clips without worrying about losing data, as cut-out portions remain recoverable. Additionally, for clips edited with motion layer keys, our non-destructive approach retains all layer data for further editing.

>> Check out the demo on the product page

Curve Filters: Several new filter options are added. These include tools that remove jitter while preserving motion details, fix peak noise, reduce keys for bezier spline editing, and smooth selected tracks seamlessly.

>> Check out the demo on the product page

Curve Performance Enhancement: Motion Curve performance has been significantly enhanced for intensive editing across multiple tracks. Users can now enjoy near-instant responsiveness when selecting, moving, copying, pasting, or deleting keys in extended curve sequences.

>> Check out the demo on the product page

Related Sources
iClone 8.52 full announcement
AccuPOSE tutorials
Check out iClone 8.52 new features
iClone 8.52 release notes
Learn more about iClone 8
Online manual
FAQ

Brought to you by Reallusion: This article is part of the befores & afters VFX Insight series. If you'd like to promote your VFX/animation/CG tech or service, you can find out more about the VFX Insight series here.

The post A Major Leap for Professional Animation Editing appeared first on befores & afters.
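Curve filters like the key-reduction option mentioned above are easy to picture with a small, standalone sketch. This is not Reallusion's implementation or API, just a generic example of dropping keys whose values a linear interpolation between their neighbours already reproduces within a tolerance:

```python
def reduce_keys(keys, tolerance=0.01):
    """Drop keys whose value is within `tolerance` of the straight line
    between the surviving key before them and the key after them.

    keys: list of (time, value) pairs, sorted by time.
    """
    if len(keys) <= 2:
        return list(keys)

    kept = [keys[0]]
    for i in range(1, len(keys) - 1):
        (t0, v0), (t, v), (t1, v1) = kept[-1], keys[i], keys[i + 1]
        # Value predicted by linearly interpolating between the neighbours.
        predicted = v0 + (v1 - v0) * (t - t0) / (t1 - t0)
        if abs(v - predicted) > tolerance:
            kept.append(keys[i])
    kept.append(keys[-1])
    return kept

# Example: a nearly linear ramp with one real bump keeps only the keys that matter.
curve = [(0, 0.0), (1, 0.1), (2, 0.2), (3, 0.8), (4, 0.4), (5, 0.5)]
print(reduce_keys(curve))  # the redundant key at t=1 drops out, the bump survives
```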
  • The creature effects in Wolf Man
    beforesandafters.com
Special make-up effects designer Arjen Tuiten gives a glimpse of what to expect in Leigh Whannell's Wolf Man.

The post The creature effects in Wolf Man appeared first on befores & afters.
  • Heres how PXO AKIRAs new motion base vehicle processing ecosystem works
    beforesandafters.com
It was just announced at CES.

Pixomondo (PXO) has just revealed its latest virtual production toolset for vehicle processing. It's called PXO AKIRA.

Essentially, PXO AKIRA is a custom-built 360-degree spinning motion base designed to be directly integrated with a robotic camera crane (a TechnoDolly), an LED volume, a driving simulator, and real-time rendered content powered by Unreal Engine.

Car on top of the PXO AKIRA motion base platform.

The idea is to be able to plan, shoot and finalize vehicle shoots in a studio rather than via a traditional approach such as blue or greenscreen shooting, process trailers or just 2D playback on an LED wall. Instead, everything in AKIRA works in tandem to provide real-time results.

And it is not just vehicles like cars: PXO's behind-the-scenes videos show motorbikes, boats and even planes being attached to the motion base for filming. AKIRA looks very much like a way to plan and execute a vehicle process shoot, but it is also about combining the setup with an LED wall volume shoot for in-camera final VFX results.

Plane on top of the PXO AKIRA motion base platform, with the TechnoDolly track and camera to the left.

Watch this technical reel, below, for a breakdown of how PXO AKIRA works.

To find out more about PXO AKIRA, befores & afters asked Pixomondo chief innovation officer Mahmoud Rahnama about the new tech.

b&a: What led to Pixomondo developing AKIRA?

Mahmoud Rahnama: Pixomondo developed PXO AKIRA after years of encountering significant challenges while shooting vehicles for films, TV shows, and commercials. Traditional methods like green/blue screens, process trailers, and even LED playback came with various limitations, from lack of realism to logistical inefficiencies. We recognized the need for an ultimate solution, one that could address these recurring pain points and revolutionize vehicle-based storytelling. After pitching the concept to Sony, they saw the potential and decided to fund this ambitious R&D project. Within a year, we designed, manufactured, and integrated a groundbreaking system that's now known as PXO AKIRA.

Motion platform: Built to handle everything from cars and boats to helicopters, planes, and even spaceships. It can spin 360 degrees and connects in real-time with the virtual environment to generate realistic motion and road feel. Each car wheel moves independently on the motion base.

b&a: Can you break down some of the main technical hurdles you had to overcome with building a motion base suitable for multiple kinds of vehicles, and then aligning it with other virtual production workflows?

Mahmoud Rahnama: Developing PXO AKIRA presented a unique set of technical challenges. The motion base needed to be highly agile and accurate while maintaining minimal latency. Flexibility was paramount: PXO AKIRA had to support a wide range of vehicles, from cars and motorcycles to boats and planes, all with varying weights and dimensions. Noise reduction was another critical factor to ensure compatibility with sound-sensitive shoots. Integrating the motion base with the broader virtual production ecosystem was another challenge. The TechnoDolly, LED volume, and racing simulator had to work seamlessly together, controlled by our Digital Twin system. Each component needed to be finely tuned to create a unified and efficient platform that could handle any virtual environment under one roof. Balancing all these requirements while ensuring the system remained mobile and scalable was one of our biggest accomplishments.

Programmable camera crane: The camera crane is connected to PXO AKIRA, complete with programmable, key-framable moves.

b&a: How can AKIRA typically be deployed for a shoot?

Mahmoud Rahnama: PXO AKIRA is designed for maximum mobility and efficiency. The entire system can be shipped to any LED volume or soundstage worldwide. Before deployment, we use our Digital Twin system to virtually map and pre-visualize the shoot, allowing us to optimize the setup and workflows in advance. Once on-site, PXO AKIRA can be quickly assembled and calibrated, ensuring minimal downtime for production teams. Our vision is to have multiple PXO AKIRA units stationed at key locations worldwide, enabling productions to book and utilize PXO AKIRA-enabled facilities without the need for transportation. This approach will save time, streamline logistics, and make PXO AKIRA accessible to global productions.

Digital Twin & pre-visualization platform: A tool that lets you create a one-to-one digital twin of the LED studio, saving previs moves to translate to final pixels on set and aligning the virtual with the real world.

b&a: What kind of testing and test footage have you been able to produce so far?

Mahmoud Rahnama: We've conducted extensive internal testing, both with synthetic environments and real-world locations. These tests allowed us to fine-tune PXO AKIRA's performance and optimize its integration with virtual production workflows. So far, we've produced promotional material showcasing PXO AKIRA's capabilities, and we're currently planning a short film that will be entirely shot using PXO AKIRA. Additionally, we've received significant interest from productions eager to book PXO AKIRA for 2025, and we anticipate a busy schedule following its grand debut at CES. PXO AKIRA's launch marks the beginning of a new era for vehicle processing solutions, and we're excited to see it in action across a wide range of projects.

Driving simulator: This allows a precision driver to authentically drive their route in the virtual environment before the camera rolls, ensuring precise movement, inertia, and direction.

Find out more at http://pxoakira.com.

The post Here's how PXO AKIRA's new motion base vehicle processing ecosystem works appeared first on befores & afters.
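For background on how a motion base can cue sustained acceleration at all, one common, generic motion-cueing trick (not necessarily what PXO AKIRA uses) is tilt coordination: tilt the platform so gravity stands in for forward acceleration, while limiting the tilt rate so the rotation itself stays below the rider's perception. A minimal, purely illustrative sketch:

```python
import math

def tilt_coordination(forward_accel, prev_tilt, dt, max_tilt_rate=math.radians(3)):
    """Convert a sustained forward acceleration (m/s^2) into a platform pitch angle.

    Gravity along the tilted floor approximates the push the rider should feel,
    while the tilt rate is clamped so the rotation stays below perception.
    """
    g = 9.81
    target_tilt = math.atan2(forward_accel, g)   # angle whose gravity component matches
    max_step = max_tilt_rate * dt                # radians of tilt allowed this frame
    step = max(-max_step, min(max_step, target_tilt - prev_tilt))
    return prev_tilt + step

# Example: ramping into a 3 m/s^2 launch at 60 fps for two seconds.
tilt = 0.0
for frame in range(120):
    tilt = tilt_coordination(3.0, tilt, dt=1 / 60)
print(math.degrees(tilt))  # about 6 degrees so far, still ramping toward atan(3/9.81), roughly 17 degrees
```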
  • The new tech that made Mufasa possible
    beforesandafters.com
Behind MPC's QuadCap motion capture, premium previs and 2,000-frame renders on the film.

Barry Jenkins' Mufasa: The Lion King is a fully CG film. However, it is intended to look as if it could have been filmed for real in Africa (just like its 2019 predecessor, The Lion King). To do that, the filmmakers employed an array of virtual production techniques such as VR scouting, virtual cinematography and real-time rendering to plan out sets and action and realize them in a manner that replicated a live-action feel (aided further by the final naturalistic animation and photoreal rendering).

One of the new virtual production techniques relied upon for Mufasa was a tool called QuadCap, i.e. a quadruped motion capture system. It formed part of the motion capture shoots for the film that took place in Downtown Los Angeles. (A still from the official Disney 'Technology of Mufasa' video is featured below.)

Here, a number of the characters such as Mufasa, Sarabi and Taka were represented by performers in motion capture suits (with the resulting capture aimed at informing the final animation and helping with staging). Normally this would produce only bipedal motion captured animation, but QuadCap aligned the performer's head and spine movements to the lion's head and neck, their legs to the lion's front legs, and simulated the lion's back legs and hips.

It was great because it offered so much flexibility for Barry Jenkins, observes Audrey Ferrara, who was MPC's visual effects supervisor on the film, working with production visual effects supervisor Adam Valdez, animation supervisor Daniel Fotheringham and virtual production supervisor Ryan Champney. On the stage, there would also be DOP James Laxton with a virtual camera, and he and Barry could immediately say, No, it needs to be a little bit punchier in terms of the movement, or, Hold on there for a minute so we can really come close to you. It was art directable live in Unreal Engine.

MPC was behind QuadCap and all of the virtual production and visual effects on Mufasa. Within Unreal Engine, a total of 12,680 on-stage takes were shot using the V-Cam and motion capture systems. Meanwhile, a total of 7,399 live motion capture and QuadCap performances were captured during the shoot. The VFX studio was involved the whole way, including in early pre-production during a COVID lockdown period with the director and cinematographer, and also production designer Mark Friedberg, and later in Los Angeles. At this early stage, concept art led to early set builds and then VR scouts to help flesh out the world. As sets and shots continued to be planned, so too did lighting. All of this occurred within an Unreal Engine sandbox crafted by MPC.

Premium previs

The ultimate goal of this prep work was previs at a high fidelity level. Adam Valdez's goal on this one was, we need to have premium previs, relates Ferrara. In fact, it was to not even think about it like previs, but more like going into the first pass of the movie. This meant our sets were way more detailed in terms of textures, and even the first pass of effects was generated, including water. If there was fire or rain, it would be there, all in Unreal.

And then, continues Ferrara, James Laxton would spend a lot of time setting up his light rigs in Unreal. With ray tracing enabled, it gave him and everyone a better idea of how the final shots would look. It can be so hard to project yourself into the final image. It takes so long to get there, usually, so we wanted to give them something that would be shaped closer to what they wanted to achieve sooner in the process.

MPC developed new tools to export the resulting Unreal Engine files into post-production, too. We would save the animation, says Ferrara. We would save the light rigs. We would save everything: the cameras, the environments. It meant we had our blocking of pretty much everything.

What Ferrara says was also a huge benefit of this premium previs process was effectively having the entire film inside Unreal Engine. If we wanted to do re-shoots, or if we wanted to explore different approaches, you could go back in there. The lighting was already set up, you just export, and boom. We even pushed final animation being done in Maya back into Unreal, and when that was rendered in there it really helped with editorial. The previs was the Bible, the cornerstone of everything. As a VFX supervisor, I would constantly go back to the previs. Suddenly, you are not constrained by the pipe anymore. It's malleable, it's flexible, and that's great.

Going bigger on Mufasa

The world through which Mufasa travels in the film spans some 107 square miles. That's about the same size as Salt Lake City, Utah, and all of it had to be created by MPC. 77 digital sets were created, 5,790 assets such as trees, plants and grass species were built, and another 118 photoreal creatures were made. By the end of the film, the VFX studio would have completed 1,500 fully-CG shots, using 25 petabytes of storage (rendering the film in final quality took 150 million hours).

The process began with Friedberg's team's concept art and builds, and was also informed by a team that visited several countries including Namibia, Botswana and Tanzania. That team came back with something like 10 terabytes of material, describes Ferrara. Just tons of videos, photos and photogrammetry. The MPC team in post-production would process the photogrammetry of, say, columns in a canyon and share them with the art department. Then they would be able to use it in their layouts for the scouts. Just like the Unreal previs, we had those assets traveling between pre-production and post-production all the time.

In addition to the build of many landscapes, MPC also had the challenge of the camera generally coming much closer to the characters than in the previous film. The camera comes close and back and keeps going back and forth, says Ferrara. So we needed those characters to hold up very close-up. There's way more detail in the model, way more hair in the groom. We had to rebuild the lookdev and the shaders of the fur from scratch in order to have this complexity.

Each lion in Mufasa featured over 30,000,000 hairs to achieve the realistic look of fur. Just Mufasa's mane on its own was made up of 16,995,454 hair curves. The lion has 600,000 hairs on his ears, 6.2 million hairs on his legs, and 9 million hairs covering the middle portion of his body. Some shots feature constantly moving cameras and long frame ranges, exceeding 2,000 frames at times (and also realized in stereo).

Then there were effects simulations, including environments. Simulation of the environments was crucial, states Ferrara. We had to make sure that this world was dynamic and kinetic. Those characters are moving creatures moving through a moving world. So the grass, the trees, the simulation of the air, even of the pollen in the air. It was all about making sure that it didn't feel static and that we weren't just putting characters straight onto some kind of backdrop.

Water, of course, was a major effects simulation task. It existed in several states like rivers, rain, mist, snow and clouds. Says Ferrara: The water is a character in the story. It goes hand-in-hand with Mufasa's story, which is his fear of water, the water that separated him from his parents. It needed to be art directable because it needed to perform in order to match the performance of the characters.

The film's landmark flash flood sequence, in which Mufasa is washed away, was one in which MPC started the process in effects and then went back and forth between effects and animation. We had to almost rehearse this process even before it was turned over, advises Ferrara. It was like us getting ready to go into the arena for the fight and being prepared.

Snow, too, was a challenge for the MPC team. My personal fear on this movie was the snow and how to make snow look good and realistic, admits Ferrara. I had this one shot that was my personal nemesis throughout the movie, which was when Rafiki falls down and makes a snow angel. That one gave me nightmares. But it was also very satisfying when we cracked the code and made it work.

And crack the code, MPC did. One particular snow angel shot required the simulation of over 620 million snow particles.

The post The new tech that made Mufasa possible appeared first on befores & afters.
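MPC has not published QuadCap's internals, but the mapping described above (performer head and spine to the lion's head and neck, performer legs to the front legs, hind legs derived procedurally) can be sketched generically. The joint names and the delayed-follow rule below are assumptions for illustration only, standing in for the simulation MPC actually used:

```python
# Hypothetical joint mapping from a biped mocap performer to a quadruped rig.
BIPED_TO_QUAD = {
    "head":        "lion_head",
    "spine_upper": "lion_neck",
    "spine_lower": "lion_chest",
    "left_leg":    "lion_front_left_leg",
    "right_leg":   "lion_front_right_leg",
}

def retarget_frame(biped_pose, hip_history, delay_frames=8):
    """Map one frame of biped joint transforms onto quadruped joints.

    biped_pose: dict of joint name -> transform (any representation).
    hip_history: list of recent biped hip transforms, oldest first.
    The hind legs and hips are not captured; here they simply follow the hips
    with a delay, a crude stand-in for the simulation described in the article.
    """
    quad_pose = {quad: biped_pose[biped]
                 for biped, quad in BIPED_TO_QUAD.items()
                 if biped in biped_pose}

    # Derive the rear of the quadruped from where the performer's hips were
    # a few frames ago, so the back half trails the front half.
    if len(hip_history) >= delay_frames:
        quad_pose["lion_hips"] = hip_history[-delay_frames]
    return quad_pose
```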
  • How to get VFX into your Sundance film
    beforesandafters.com
Today on the befores & afters podcast, we're chatting to VFX supervisor Alex Noble, who is the founder of Wild Union Post. Now, while it works on big studio films, Wild Union has crafted a lot of visual effects for independent films, including those submitted to the Sundance Film Festival.

In this chat, we talk about all things indie-related, and how to get the best kind of VFX work done for your indie film. Alex has some fantastic tips and tricks for making the most of your VFX budget, things that apply all the way from small films to much larger ones, too.

This episode of the befores & afters podcast is sponsored by SideFX. Looking for great customer case studies, presentations and demos? Head to the SideFX YouTube channel. There you'll find tons of Houdini, Solaris and Karma content. This includes recordings of recent Houdini HIVE sessions from around the world.

Listen in above, and below, check out some imagery from Wild Union.

The post How to get VFX into your Sundance film appeared first on befores & afters.
  • I still have red dust on my motion base
    beforesandafters.com
    The practical and digital effects that made sandworm riding possible in Dune: Part Two. An excerpt from befores & afters magazine.A signature moment in Dune: Part Two occurs when Paul Atreides (Timothe Chalamet) successfully learns to ride a sandworm, truly immersing himself into Fremen culture. The sandworm riding was one of our first Zoom calls which Denis had with the heads of department in soft prep, recalls visual effects supervisor Paul Lambert. Denis pitched this idea of how, to actually get onto a worm, he would have to climb up a dune and call the worm. The worm would crash through the dune and then Paul would fall onto the worm. He would travel down through the sand and onto the worm, which was an absolutely amazing visual. But, we were kind of stunned as to how the heck we were going to do this given our want for a practical approach.For Denis, it was extremely important that all the worm riding and all the scenes with the worm needed to be believable, remarks special effects supervisor Gerd Nefzer. Denis was concerned about how we could put this on the screen to make it believable, to make it look good, to see the speed of the worm, and the size, the huge size of the worm. Right from day one, right when we got the script and we had the first meeting, he said, This must be good and believable, and that the audience would say, Okay, we believe that, and we see the size, and we see the speed.Working off storyboards and limited previs, the filmmakers decided early on that one thing would be clearthe scene would be filmed outdoors to give the appropriate sense of light. Initially, a recce was arranged in an attempt to find a suitable dune or dunes to film on. That approach changed when DOP Greig Fraser pushed for the moment Paul is on the dune to always be backlit by the sun. While one location served as a wide shot, the moment Paul plunges down into the sand as the worm breaks through would therefore have to be achieved with a bespoke setup, that is, not on a full-scale dune. Instead, production built a much smaller man-made dune in the UAE under which Nefzer placed steel tunnels and drums. A stunt performer attached to a wire would run above this setup. As the performer got close to the tunnels and drums, those would be pulled away, causing the faux dune to collapse.Sand is tough to work with, advises Nefzer. Its kind of like water, its very, very heavy and you have to keep it under control because its going everywhere. But, we were able to build this dune out of three meter diameter and eight meter long pipes. We dug the hole out of our dune, placed these three cylinders under the dune, filled it, and made it look nice with wind machines and leaf blowers. We also rented three huge trucks from an oil company in UAE. They were really unbelievably big, and we hooked the steel pipes onto the trucks.As noted, the stunt performer would run on top of this man-made dune. On cue, Nefzers team would orchestrate the pulling out of the tunnels and drums by communicating with the truck drivers. The setup was highly successful. However, for the correct lighting to match the original dune, the stunt had to take place at a certain early time of day. And, once the dune was collapsed, it took a day to get everything back in place, meaning only one shot could be attempted daily. The first four days werent successful and we were disappointed, admits Nefzer. But then I think on the fifth or sixth day, we got the right timing. 
It took four attempts for us to actually do it, but what we got was gold, adds Lambert. Having got that footage, I was then able to extend the real photography with additional CG elements so that when we looked down, we could see the worm passing through. View this post on InstagramA post shared by Gerd Nefzer SFX (@gerdsfx)For the moment Paul slides down into the dune, a stunt performer was pulled down on a sled, which provided for plenty of practical sand kicking up around him. Further practical layers of sand and CG sand were added, too. It was kept deliberately messy, notes Lambert. The idea was that you were never going to make it a beautiful shot with this because we always try to say, if we did this with a camera for real and up on the top of the dune and actually shot this, how much would you actually see because theres going to be dust and sand everywhere? So thats basically what we piled on, and it got to the point where we put on so much that you couldnt actually see the action, so we turned it back a bit. The idea was to fully, fully immerse him in the sand and dust.When Paul rolls down the worm and places his maker hooks, Nefzers team built a platform that was 25 meters long and eight meters wide that was leaned against the studio wall. It allowed Timothe Chalamet or his stunt performer to jump on something practical, get the hooks in place, slide and stop.A motion base fit for a wormThe next step was to depict Paul on the worm. A dedicated Worm Unit was established to handle this work (led by the films second unit director Tanya Lapointe). Once Paul gets control of the worm and is able to use maker hooks to clasp onto the creature while in motion, production took advantage of filming in a more controlled environment in Budapest. Here, Nefzer established a worm riding motion base. We set up a similar thing that we had done for the ornithopters on Dune: Part One, outlines Lambert. This was where we had the cylindrical kind of dog collar which encompassed a gimbal/motion base which had a section of the worm created so that the stuntie could actually stand on it and hold on to the ropes. The idea being that, to try and sell something as being immersed in the desert, you need to immerse it in the bright light and the bright bounce of the sand. This meant a sand-colored collarsandscreenssurrounded the gimbal, and would bounce light from all directions. Furthermore, the stunt performer on the gimbal was constantly bombarded with sand (to the point, says Lambert, that he would start with a suit which was black and a bit sandy, but then by the end of the day, he was way more orange. So we did have some more continuity issues to actually try to deal with that.) Some aerial plates of the wormriding filmed with a low flying helicopter were also captured and then sped up to match the right sense of movement. Having that particular setup and then having Greig lens it in a way that was sometimes on a long lensand then we would have the worm rise and twist and turnmade for a very dynamic kind of feel to the whole sequence, says Lambert.Nefzers motion base setup was also deliberately placed deep inside the dog collar thanks to a specially dug hole, essentially so that the motion base and the performer was not up so high in the air that Denis would have trouble communicating with them. I had learned that after Blade Runner 2049 and Dune: Part One, notes Nefzer. So we dug a big hole in the backlot and got the motion base into the hole. 
It really helped a lot.The motion base was constructed with a Lazy Susan-type functionality that allowed it to be turntabled around, always directing the worm skin to the sunlight. That was mega-important for Greig Fraser, says Nefzer. One particular motion base rig was fitted with a separate hinge that allowed it to tilt 90 degrees, the idea being that Paul would hang off the worm skin until he could control the riding. Other configurations were made to handle different angles.We have used motion bases on many other projects, but never with a worm on them, continues Nefzer. Usually with motion baseswhich are high-tech, computer-controlled machines running hydraulic oilyou are running them in very clean environments. This time, every day, they were covered completely in dust and dirt. We were really scared about getting some failures in the computer or on the bearings, so we tried to cover all the joints and hinges and covers with plastic bags. Then of course we shot it outside, sometimes in the rain. I still have red dust on my motion base!Once Paul stands atop the moving worm, he is hit with rounds and rounds of dust pushed through wind machines. Much of that was there, practically. I think the most dust that we used in one day was a ton of dust, says Nefzer. When my crew came back after shooting that, you couldnt really tell who they were. They were completely covered with dust.The dust was made from bentonite, a substance used in the make-up industry. The tricky thing was that we had to match the color of the Jordan desert and the Abu Dhabi desert, describes Nefzer. The Jordan dust or desert is very reddish and Abu Dhabi is not as red. It can also look different when its airborne compared to when you have it in the hand. The size of the corn of the dust was also important. If it was too big, youre not able to blow it like sand. We had three or four 70 kilowatt electric wind machines on Pettibones and on lifts and forklifts surrounding the worm skin. It was all about finding the right consistency.Facilitating Pauls rideUsing the practical effects photography as a base, DNEG played a significant role in how Paul came to be riding the worm. This started with the shots of the character running along the dune itself before it collapses with the worm going through underneath. Because they were constantly rebuilding the pieces of dune, we had to actually sculpt that for every take that they did, discusses DNEG visual effects supervisor Stephen James. We had some really incredible sculptors who did this by eye, and then that could blend down into our CG work. What that allowed us to do was match the crest of the dune really precisely, continues James. It allowed our compositing team to go through and paint back frame by frame footfalls, dust kickups and things like that. Our effects team would actually have to go in and precisely match things like foot kickups and the collapse as well, and then extend that into the wider simulation. So, there was a lot of really challenging work from effects to lighting to comp, to really precisely hit those blends, because it was so important that we did keep that magic that was in those plates. 
There is that energy that is in a lot of these plates that we just had to keep or we would lose something that was just too important.Importantly, too, DNEG was able to use the information from the plates to directly inform its own CG work, such as, James mentions, the way that light scatters through the dust and the way that the color and tone shift as light passes through. We took that practical photography reference into our full CG shots later in the film; everything we learned from those plates.For the subsequent moment of Paul falling down and onto the worm itself, DNEG was only required to do some minimal clean-up here of background seams or fans. That was because the shots were intentionally already heavily dust-filled. We did add a little bit of grit to the sand just to add some texture quality to it, says James.Once Paul uses the maker hooks to cling onto and then control the worm, DNEG worked with the motion base plates. Notes James: I think that was probably the most challenging portion because we had a lot of dust and sand on set being blown across the surface of the worm, that we had to make sense of it at a much larger scale. So if there was sand blowing on the surface on set, maybe thats sand that we would add on top of our CG worm that was from previous sections where it was under the sand and that would be cascading through down the back of the worm. Or maybe there was a bit of history from a dune that was hit that was already in the air, so wed always have sand particulate and dust history, and we had to think about what was coming and what was already happening in the sequence.issue #23 Dune: Part TwoThe sandscreen dog collar motion base setup in Budapest provided major assistance to DNEG, too. It really helped us to get the natural light of the desert bouncing up onto Paul and the worm, says James. We were using the sandscreen, whether it was in light or shadow, and matching to that in our lighting and compositing teams. So, if the sandscreen itself was maybe a bit darker, we would try to light the dunes in the background to motivate that.The wormskin Paul stands on was mostly replaced by DNEGs CG worm, with the first films asset used as the starting point. Obviously we didnt get as close as that on the first film, details DNEG visual effects supervisor Rhys Salcombe, so we had to rework the first one to some extent anyway, to make it a bit more malleable in animation, in particular. Because Paul stands on such a tiny portion of it, we essentially worked with a nested approach on the worm. At its most macro level, you have an asset thats relatively detailed. Then you extract smaller and smaller sections as you get towards the saddle, the part that he stands on. Then the smallest section was a replication of the set piece, which was based on our sculpt from the first film. We kept as much of it as we could in some of the shots, but we did end up with a CG version that we could use to replace it if required.A worm through the landscapeDNEGs role in the sandworm riding sequence was also, of course, to realize the worm traveling through the sands of Arrakis, and to incorporate plate photography into their digital creature and environment shots. This started firstly with effects tests using their existing CG worm model from Dune: Part One, as Salcombe outlines. Very early on we were told that 90% of the worms mass would be underground during the worm riding, which obviously affects the behavior quite a lot. 
We had to work out, if you have 90% of the kilometer long animal underground, what does that displacement of sand look like? Well, it looks like complete chaos when you simulate that for real. So one of the first things we had to come up with was a way of figuring out how to make it look aesthetically pleasing while also hitting that brief.The studio embarked on a series of wedge tests to establish how deep their effects container for the worm should be and how much sand beneath the surface should be moved. Quickly they realized that a deep container made for a constant explosion of sand. So, instead, says Salcombe, we went for a relatively shallow container. It led us to doing a rapid prototyping of our effects for worm riding with a ball pit process.DNEG came up with a great way in which they could visualize it and show myself and Denis as to what they were thinking by basically using particles the size of beach balls first as a quick and cheap render to be able to show what this is going to do, says Lambert. I then talked Denis through how the beach balls then became tennis balls and then became little grains of sand. It was a great way to have the director involved.The ball pit was exactly what it sounds like, states James. Instead of grains of sand, youre simulating with beach balls, basicallyjust a low resolution simulation. Denis and Paul really insisted on it because it was really important to them we werent wasting time adding details to something that just didnt have the right feel to it or the weight to it or scale to it. We could really tell pretty early on from these low resolution sims that it had that right feeling.Paul and Denis were also okay to let the dust and additional sand do whatever it did, adds James. So, if there was a big burst of dust and sand that happened naturally in the simulation, they would just leave it, and it may cover a lot of really expensive, beautiful simulation work, but I think on the whole, through the sequence, what it created was something that it really felt quite natural. Visibility would come and go throughout the sequence and other worm riding sequences. It just created a real sense of danger coming and going throughout.One of DNEGs main challenges early on, too, was finding a look for different levels of sand and dust going from chaotic worm sign (where the sand debris is visible on the surface) to essentially having the worm melt a dune as it goes through the sand. Then as Paul gets more and more control, says James, this really chaotic worm sign and all the stuff thats happening around the worm, needed to become more and more calm over time. We needed to visualize that early on and really understand how things were flowing before we pushed through into the up-res.Ultimately, DNEGs tests determined that their worm would be moving around 300 kilometers an hour through the sand (the animation team was led by senior animation supervisor Robyn Luckham and animation supervisors Ben Wiggs, Hitesh Barot and Omkar Fednekar). The simulations would be handled mostly in Houdini as Vellum simulations, with renders done in Clarisse (including at IMAX-level quality). We had some tools in-house for point replication and data management, which were important for when youre dealing with that larger scale of sim. It meant that we could get those simulations up-resd from the ball pit and then use the point replication tools to up-res even further. When youre dealing with sand thats so fine, it essentially behaves like a liquid. 
The more points you can throw at it, the better the result will be.

Read the full issue of befores & afters magazine on Dune: Part Two, available here.

The post I still have red dust on my motion base appeared first on befores & afters.
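The 'ball pit' proxy pass and the point-replication up-res mentioned above can also be sketched generically. This is an illustration under assumed names, not DNEG's in-house tooling: block the motion with a handful of large particles, then replicate each coarse point into many jittered grains once the motion reads correctly:

```python
import random

def ball_pit_preview(emit_positions, radius=0.5):
    """Coarse 'beach ball' pass: a few large particles, cheap to simulate and render."""
    return [{"pos": p, "radius": radius} for p in emit_positions]

def replicate_points(coarse_points, copies=200, jitter=0.4):
    """Up-res pass: replace each coarse particle with many jittered fine grains."""
    fine = []
    for pt in coarse_points:
        x, y, z = pt["pos"]
        for _ in range(copies):
            fine.append((
                x + random.uniform(-jitter, jitter),
                y + random.uniform(-jitter, jitter),
                z + random.uniform(-jitter, jitter),
            ))
    return fine

# Sign off the coarse motion first, then multiply the point count for the final look.
coarse = ball_pit_preview([(0.0, 0.0, 0.0), (1.0, 0.2, 0.0), (2.0, 0.5, 0.1)])
grains = replicate_points(coarse)
print(len(coarse), "coarse particles ->", len(grains), "grains")
```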
  • Here: A test with Tom Hanks
    beforesandafters.com
    The de-ageing Pepsi challenge. An excerpt from issue #24 of befores & afters magazine in print.Ultimately, there would be 53-character minutes of full-face replacement in Here between the four key actors: Tom Hanks, Robin Wright, Paul Bettany and Kelly Reilly. This was a significant amount of screen time. Shots were also often longup to four minutesand the de-aged periods went close to up to 40 years in time. It was obvious that there was no way we could use traditional CGI methods to do this, suggests visual effects supervisor Kevin Baillie. It was not going to be economical and it was not going to be fast enough, and it also risked us falling into the Uncanny Valley. There was just no way we could keep that level of quality up to be believable and relatable throughout the entire movie using CG methods.We also didnt want to bring tons of witness cameras and head-mounted cameras and all these other kinds of technologies onto set, adds Baillie. That sent us down the road of AI machine learning-based approaches, which were just becoming viable to use in feature film production.Test footage of a de-aged Tom Hanks by Metaphysic, which aided in green-lighting the film.With that in mind, the production devised a test in November 2022 featuring Hanks. The actor (who is now in his late 60s) was filmed performing a planned scene from the film in a mocked-up version of the set at Panavision in Thousand Oaks. We had a handful of companies, and we did a paid de-ageing test across all these companies, outlines Baillie. One test was to turn him into a 25-year-old Tom Hanks just to see if the tech could even work. At the same time, we also hired a couple of doubles to come in and redo the performance that hed done to see if we could use doubles to avoid having to de-age arms and necks and ears.Metaphysic won, as Baillie describes it, the Pepsi Challenge on that test. When we saw the results we said, Oh my gosh, that looks like Tom Hanks from Big. What also became clear in that test was that the concept of using doubles to save us some work on arms and hands and neck and ears and things was never going to work. Even though they were acting to Tom Hanks voice and Tom was there helping to give them direction, it just was clear that it wasnt Tom. It wasnt the soul of the actor that was there. I actually think this will help to make people a little more comfortable with some of the AI tools that are coming out. They just dont work without the soul of the performer behind them. Thats why it was key for us to have Tom and Robin and Paul Bettany and Kelly Reillytheyre driving every single one of the character performance moments that you see on screen.Metaphysics approach to de-ageingThe key to de-ageing characters played by the likes of Tom Hanks and Robin Wright with machine learning techniquesMetaphysic relies on their bespoke process known as its Neural Performance Toolsetwas data. Ostensibly this came from prior performances on film, interviews, family photographs, photos from premieres and press images. Its based upon photographic reference that goes into these neural models that we train, outlines Metaphysic visual effects supervisor Jo Plaete. In the case of Tom Hanks, for example, we get a body of movies of him where he appears in those age ranges, and ingest it into our system. 
We have a toolset that extracts Tom from these movies and then preps and tailors all that, those facial expressions, and all these lighting conditions, and all these poses, into what we call a dataset, which then gets handed over to our visual data scientists.Hanks de-aged as visualized live on set.The raw camera feed.Final shot.I make the analogy, continues Plaete, that where you used to build the asset through modeling and all these steps, in our machine learning world, the asset built is a visual data science process. Its important to note that, ultimately, the artistic outcome requires an artist to sit down and do that process. Its just a different set of tools. Its more about curation of data, how long do you expose it to this neural network, what kind of parameters and at what stage do you dial in? Its like a physics simulation.Metaphysics workflow involved honing the neural network to deliver the de-aged characters at various ages. There is an element of training branches of that network where you start to hone in onto subsections of that dataset to get, say, Tom Hanks at 18, at 30, at 45, says Plaete. Eventually, also, we had some networks that aged Robin into her 80s, which was a slightly different approach even though its the same type of network. At the same time, we have our machine learning engineers come in and tweak the architectures of the neural networks themselves.Such tweaking is necessary owing to, as Plaete calls it, identity leak. You get Toms brother or cousin coming out, instead. Everybody knows Tom extremely well, so you want to hit that likeness 100%. So we have that tight collaboration from the machine learning engineers with the visual data scientists and artists to bring them together. They tweak the architecture, they tweak the dataset and the training loops. Together, we review the outputs, and ultimately, at the end of the day, we are striving for the best looking network. But rather than hitting render on a 3D model, we hit inference on a neural net, and thats what comes out and goes into compositing.On set, Metaphysic carried out a few short data shoots with the actors it would be de-ageing (and up-ageing) in the lighting of the scene. That just involved capturing the faces of the actors and perhaps having a little bit of extra poses and expression range to help our networks learn how a face behaves and presents itself within that lighting scenario, explains Plaete. Ultimately, we have a very low footprint.De-aged, liveThe de-ageing effects were not carried out in isolation. Bob was very excited to involve the actors in reviewing the de-aged faces, recounts Baillie. Id show them, Okay, heres you at 25, what do you think? They had a hand in sculpting what their likeness was like. I remember in particular Robin when I first showed footage of her de-aged to 25and the plate for this was I just sat with her and had a conversation for a couple of minutes and we filmed it on a RED and then went away and a month later I showed herthat it was really emotional for her. She said, Ive been thinking about how to bring the innocence of my youth back into my eyes, back into my expression and suffering over that. This helped me do it. Thats all that the AI knows, is that innocence of her youth. 
For her, I think there was just this moment of realization that this can help her get back there.Part of Metaphysics workflow is feature recognition, which detects the actors physiology in basic outlines.Final shot.Indeed, this helped drive an effort on set for a preview of the de-ageing to be available to the cast and crew. That helped us to make sure that the body performance of the actor matched the intended age of the character at that time, says Baillie. Its very hard to judge that if youre not seeing it. Every time Bob would call cut, Tom would run back around behind the monitors and watch himself and be like, Oh, I need to straighten up a little bit more. Oh, I was shuffling a little bit, or maybe I was overacting my youth in that one. It became a tool for the actors that they were able to use, and Bob was able to use it, and even our hair and makeup team and costume design team were all able to use it.Metaphysic already had an on-set preview system in the works before Here, but ramped it up on the production when Baillie asked the studio if it could be done. I think a week or two later, recalls Plaete, we hopped on a Zoom call and we had a test live face swap running into the Zoom to show him an example. Kevin said, Yeah, we should do it.The real-time system worked using only the main camera feed, without any special witness cameras or other gear, notes Baillie. It was just literally one SDI video feed running off of the camera, out a hole in the side of our stage into a little porta-cabin that theyd set up next to the set. Thats where all the loud, hot GPUs were sitting, and Metaphysic had a small team of four people that were in that cabin.The real-time budget was about 100 or 200 milliseconds, adds Plaete. We needed to take all our normal steps, optimize them, and hook them together as fast as we could. That was a bit of a hackathon, as you can imagine. But ultimately, it meant training models that were lower resolution. Still, the inference would run fast. I mean, the inference of these models is fast anyway. The high resolution model will still take a second to pop out the frame, which is crazy different from the 25 hours of ray tracing that we come from [in visual effects].The team had built a toolset that carried out a computer vision-like identity recognition pass so that the de-ageing could occur on the right actor. Those recognitions would hand over their results to the face swapper, details Plaete, which would face swap these optimized models that would come out with this type of square that you sometimes see in our breakdowns, and a mask. That would hand over to a real-time compositing step, which is an optimized version of our proprietary color transfer and detail transfer tools that we run in Nuke for our offline models, but optimized, again, to run superfast on a GPU, and then hooking that all together. Wed send back a feed with the young versions.A monitor displaying Metaphysics identity detection that went into making sure that each actors younger real-time likeness was swapped onto them, and only them, during filming.We had one monitor on set that was the live feed from the camera and another monitor that was about six-frames delayed that was the de-aged actors, outlines Baillie. When we did playback, wed just shift the audio by six frames to give us perfectly synchronized lipsync with the young actors. It was really, really remarkable to see that used as a tool. Rita, Toms wife, walked on the set and was like, Oh, my gosh, thats the age he was when we first met. 
It was lightweight. It was reliable. Its the most unobtrusive visual effects technology Ive ever seen used on set, and it had such an emotional impact at the same time.Plaete adds that he was surprised to see Zemeckis constantly referring to the monitor displaying the live face swap, rather than the raw feed. It was the highest praise to see if a filmmaker that level used that tool constantly on the set. The actors themselves, as well, would come back after every take to analyze if the facial performance with the young face would work on the body.The art of editing a machine learned performanceOne challenge in using machine learning tools for any visual effects work has been, thus far, a limited ability to alter the final result. Some ML systems are black boxes. However, in the case of Metaphysics tools, the studio has purposefully set up a workflow that can be somewhat editable.Tests of various AI models for capturing Wrights younger likeness. Note the difference in mood between the outputs, which needed to be curated and guided by artists at Metaphysic.In addition to compositors, advises Baillie, they even have animators on their team. But instead of working in Maya, theyre working in Nuke with controls that allow them to help to make sure that the output from these models is as close to the intent of the performance as possible.I call them neural animators, points out Plaete. Theyre probably the first of their kind. They edit in Nuke, and its all in real-time. They see a photoreal output, and as they move things around, it updates in real-time. They love it because they dont have the long feedback loop that theyre used to to see the final pixels. The sooner youre in this photoreal world, the sooner youre outside of the Uncanny Valley and the more you can iterate on perfecting it and addressing the things that really matter. I think thats where the toolset is just next level.Sometimes the trained models will make mistakes, such as recognizing a slightly saggier upper eyelid for some other intention. Our eyelids as we age tend to droop a little bit, and these models will misinterpret that as a squint, observes Baillie. Or in lip motion, sometimes there might be an action that happens between frames, especially during fast movements when you say P, and here the model will actually do slightly the wrong thing.Its not wrong in that its interpreting the image the best that it can, continues Baillie, but its not matching the intent of their performance. What these tools allow Metaphysic to do is go into latent space, like statistical space, and nudge the AI to do something slightly different. It allows them to go back in and fix that squint to be the right thing. With these animation tools, it feels just like the last 2% of tweaking that you would do on a CG face replacement, but youre getting to 98% 10 times as fast.You can compare it with blend shapes where you have sliders that move topology, says Plaete. These sliders, they nudge the neural network to nudge the output as a layer on top of the performance that is already coming through from the input. You can nudge an eyeline, for example. Bob likes his characters to flirt with the camera but obviously not look down the barrel. These networks tend to magnetically do that. 
Eyeline notes would be something that we get when we present the first version to Kevin and Bob, and theyd say, Okay, maybe lets adjust that a little bit.Dealing with edge casesAnother challenge with de-ageing and machine learning in the past has been when the actor is not fully visible to camera, or turns their head, or where there is significant motion blur in the frame. All these things had to be dealt with by Metaphysic.Paul Bettany, de-aged.We knew that that was going to be an issue, states Baillie, so we lensed the movie knowing that we were going to have a 10% pad around the edges of the film. That meant the AI would have a chance to lock onto features of an actor if theyre coming in from off-screen, so that we werent going to have issues from exiting camera or coming onto camera.A kiss between two de-aged characters proved tricky, in that regard. The solution here was to paint out the foreground actors face, do the face swap onto the roughly reconstructed actor, and then place the foreground person back on top. Or, when an actor turned away from camera to, say, a three-quarter view, this meant that there would be less or no features to lock onto. What the team had to do in that scenario was track a rough 3D head of the actor onto the actor and project the last good frame of the face swap onto it and do a transition and let that 3D head carry around the face in an area where the AI itself wouldnt have been able to succeed, outlines Baillie. All these limitations of the AI tools, they need traditional visual effects teams who know how to do this stuff to backstop them and help them succeed.To tackle some of these edge cases, Metaphysic built a suite of tools they call dataset augmentation. You find holes in your dataset and you fill them in by a set of other machine learning based approaches that synthesize parts of the dataset that might be missing or that are missing, discusses Plaete. We also trained identity-specific enhancement models. Thats another set of neural networks that we can run in Nuke and the compositors have access to that. Thats basically specific networks that can operate on frames that are coming out impaired or soft and restore those in place for compositors to have extra elements that are still identity-specific.All of Metaphysics tools are exposed in Nuke, giving their compositors ways of massaging the performance. They can run the face swap straight in Nuke via a machine learning server connection, and they can run these enhancement models, explains Plaete. They have these meshes that get generated where they can do 2.5D tricks or sometimes they might fall back onto plate for a frame where its possible. Theres some amazing artistry on the compositing side.Ageing upwardsMost of Metaphysics work on the film related to de-ageing, but some ageing of Robin Wrights character with machine learning techniques did occur (from a practical effects point of view, makeup and hair designer Jenny Shircore crafted a number of makeup effects and prosthetics approaches for aged, and de-aged characters).Here, Wright appears in old-age makeup, which is then compared with synthesized images of her at her older age, which were used to improve the makeup using similar methods to the de-aging done in the rest of the film.For the machine learning approach, a couple of older actors that were the right target age were cast who had a similar facial structure to Wright. Metaphysic then shot an extensive dataset of them to provide for skin textures and movement of the target age. 
We would mix that in with the oldest layer of data of Robin, states Plaete, which we would also synthetically age up by means of some other networks. We'd mold our Robin dataset to make it look older, but to keep the realism, we'd then fuse in some people actually at that age. Ultimately, this was run on top of a set of prosthetics that had already been applied by the makeup department.

Plaete stresses that collaboration with hair and makeup on the film was incredibly tight and important. You want the makeup that they apply to be communicating a time or a look that would settle that scene in a certain place within the movie's really extensive timeline. We had to be careful that our face swap technology, which is trained on a certain look from the data, from the archival, from the movies, wouldn't just wash away all the amazing work from the makeup department. We worked really closely together to introduce these looks into our datasets and make sure that that stylization came out as well.

Read the full issue on Here, which also goes in-depth on the virtual production side of the film.

All images © 2024 CTMG, Inc. All Rights Reserved.

The post Here: A test with Tom Hanks appeared first on befores & afters.
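To make the on-set preview system described earlier in this piece a little more concrete, here is a minimal sketch of how such a real-time loop might be structured: one camera feed in, an identity-recognition pass so the swap only lands on the right actor, a low-resolution face swap, a fast composite, and a frame back out within the roughly 100 to 200 millisecond budget Plaete mentions. Every function and constant below is a hypothetical stand-in, not Metaphysic's actual toolset.

```python
import time
import numpy as np

LATENCY_BUDGET_MS = 200  # rough per-frame budget quoted for the on-set preview

def detect_identities(frame):
    # Stand-in for the identity-recognition pass: returns (actor_id, bbox) pairs
    # so the swap is applied to the intended actor only. Fixed box for illustration.
    h, w = frame.shape[:2]
    return [("actor_a", (h // 4, w // 4, h // 2, w // 2))]

def swap_face(face_crop, actor_id):
    # Stand-in for the low-resolution, per-identity face-swap model.
    swapped = face_crop.copy()  # a real model would infer the young likeness here
    mask = np.ones(face_crop.shape[:2], dtype=np.float32)
    return swapped, mask

def composite(frame, bbox, swapped, mask):
    # Stand-in for the fast GPU colour/detail transfer back into the plate.
    y, x, h, w = bbox
    out = frame.copy()
    region = out[y:y + h, x:x + w]
    out[y:y + h, x:x + w] = mask[..., None] * swapped + (1.0 - mask[..., None]) * region
    return out

def process_frame(frame):
    start = time.perf_counter()
    out = frame
    for actor_id, (y, x, h, w) in detect_identities(frame):
        crop = out[y:y + h, x:x + w]
        swapped, mask = swap_face(crop, actor_id)
        out = composite(out, (y, x, h, w), swapped, mask)
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    if elapsed_ms > LATENCY_BUDGET_MS:
        print(f"frame over budget: {elapsed_ms:.0f} ms")
    return out

preview = process_frame(np.zeros((1080, 1920, 3), dtype=np.float32))
```

In production, per the article, the heavy inference ran on GPUs in a cabin beside the stage fed by a single SDI signal, and playback audio was simply shifted by the six-frame preview delay to keep lipsync with the de-aged picture.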
  • Behind the VFX for Dune: Prophecy
    beforesandafters.com
A new official video featurette is out.
The post Behind the VFX for Dune: Prophecy appeared first on befores & afters.
  • How DNEG crafted the troll and the Eregion battle in s2 of The Rings of Power
    beforesandafters.com
    Today on the befores & afters podcast, were diving into season 2 of The Lord of the Rings: The Rings of Power, with DNEG and visual effects supervisor, Greg Butler. This season, DNEG delivered over 900 shots and led the work in some of the biggest battle sequences that happen around Eregion. This also involves Damrod the Hill Troll. With Greg, we look at what was filmed for these battles, the CG environments, digi-double soldiers and Orcs, CG horses, and the specific approach to the atmosphere in the scenes.This episode of the befores & afters podcast is sponsored by SideFX. Looking for great customer case studies, presentations and demos? Head to the SideFX YouTube channel. There youll find tons of Houdini, Solaris and Karma content. This includes recordings of recent Houdini HIVE sessions from around the world.Listen in above, and below, check out some fun before and after images and a video breakdown.The post How DNEG crafted the troll and the Eregion battle in s2 of The Rings of Power appeared first on befores & afters.
  • Yep, Wt FX did it, they turned Robbie Williams into a chimpanzee
    beforesandafters.com
    Behind the scenes of Better Man.In Michael Graceys Better Man biopic film, pop singer Robbie Williams is portrayed as a chimpanzee. Wt FX was responsible for the digital character, which ranges in ages and also goes through 250 different costume changes and 50 separate hair styles (and, yes, even sports Williams trademark tattoos).On set, actor Jonno Davies performed the role of Williams through the use of performance capture, largely following the workflow Wt FX has employed on the Apes films and, of course, from its long history of bringing various CG creatures to life.befores & afters got to chat to Wt FX visual effects supervisor Luke Millar and animation supervisor David Clayton to walk through, step-by-step, the making of ambitious project, which includes several dazzling musical numbers and perhaps the most f-bombs by a CG character in the history of film.It started with a huge previs effortThe musical momentswhich include an ever-increasing dance number around Londons Regent Street, a 100,000+ audience-filled Knebworth Park concert, and a performance at Royal Albert Hallwere previsualized by Wt FX before any other visual effects work commenced, and even before the film was fully greenlit.Michael Gracey was very keen to previs the musical numbers, time them all out to the music, and really get the details of all the transitional shots and how the ebb and flow of the visuals and sound would work together, to really chase down that emotional connection, explains Clayton, who oversaw the previs. It was really fun work because we already had the template of the soundtrack. Theyd also story-boarded some moments and used video-vis of others to piece together the sequences. We were then able to layer on more detailed previs and explore camera design compositions and action.Wt FX utilized its own motion capture stage as part of the previs process. Gracey also visited the studio in Wellington during this stage of production, where he was able to help block out action and iterate on the all-important virtual camera and lighting cues. That previs was the first step of showing people what this movie could be, observes Millar. It was definitely a catalyst that helped with getting the movie funded and advancing to shooting and production.The performance capture methodologyArmed with a previs of the key parts of the film, Wt FX then helped Gracey establish how it would be shot. The most important thing for me was to shoot this like a regular picture, says Millar. I said to the team, We shoot it like Jonno is in the movie. We light it like Jonnos in the movie. We frame up like Jonnos in the movie. We pull focus like Jonno is the person who will be in the final picture.Plate footage from the shoot on location in Serbia.Animation pass comparing the digital characters facial and body performance to Robbie Williams from the original Knebworth concert.Animation pass showing the progression from Jonno Davies original performance, through to the final digital character.Final render depicting the iconic Knebworth concert.Davies was captured in an active marker performance capture suit. However, with the handheld camera work in the film, and some close and intimate action that needed to be captured, sometimes decisions were made to rely less on the technology and concentrate on the performance.Theres a scene where it becomes clear that Robbies nan (Betty, played by Alison Steadman) is getting dementia, relates Millar. It was a very powerful scene to watch them shoot. 
At the end of it, Nan embraces Robbies head and strokes his hair. In the first couple of takes, Jonno was wearing a motion capture helmet with little bobbles on it, and Alison Steadman was trying to figure out what she could and could not touch. You could see her trying to stroke these plastic bobbles and it was killing the moment. In that instance, we said, Lets lose the helmet. We ended up sacrificing the technology in order to make that moment the incredibly touching and intimate moment it is in the movie. Even though it creates more work on the back-end for animators who will obviously have to translate Jonnos facial performance by hand rather than being able to solve it based on a camera rig, we cant fix a performance that doesnt feel convincing in the photography.For Davies performance as Williams on set, the actor referenced countless hours of the singers past performances. He also had the benefit of Williams being on set for the first couple of weeks of the shoot. This included for filming of the finale My Way. We got Robbie rigged up in a mocap suit and he came out and did the performance, shares Millar. The performance was incredible. There was the level of engagement from all the extras. Everyone was just so, so good. But he missed half the lines. He wasnt in the right part of the stage. He wasnt looking iin the right direction when he should be. It was a very clear thing that hes a great entertainer that absolutely shone on the stage, but he is not an actor. He did run through a few scenes and we did get a lot of great reference material to see how he moved.Building chimp RobbieThe CG chimpanzee version of Robbie Williams was crafted to resemble the singer, particularly his notable eyes and brows. Wt FX relied on photogrammetry scans, texture reference shoots and facial poses to build up their model and puppet. When building him, notes Clayton, we wanted him to feel a bit like Robbie and have the charisma and some signature looks of the real Williams, but we didnt want it to be a funny monkey face version of Robbie Williams. So, we respected the line of the eyebrows and the shapes of the eyes, but it needed to feel very much like a chimpanzee first and then the likenesses, we just tried to ease them in there.Animation pass of the facial and body performance.Creature pass highlighting the textures of Robbies outfit, including wig, hat and clothing.Lighting pass.Animation pass compared to reference footage from the child and adult actors.Final render of ape Robbie Williams as a child in a school play.To test the model, Wt FX created some side-by-side performances with real footage of Williams from past interviews, including those where, as Clayton notes, Robbie is being quite genuine and in the moment when responding to questions. When we put that onto our digital Robbie and it really worked, that was a breakthrough moment where its like, Oh, this is going to sell. I mean, it is true that if Robbie were an animal, he would be a monkey. Hes cheeky, hes an entertainer. Hes quite sharp and in the moment and spontaneous.While Wt FX has extensive experience in crafting apes, chimpanzee Robbie Williams was a different kind of challenge to previous projects. In the Planet of the Apes franchise, details Millar, the apes start off as chimps and slowly evolve to become more human. Whereas, in Better Man, were basically representing a human being as a chimp. Everything a human being needs to do, Robbie needs to dosing, be emotional, angry, happy. 
The full range of human emotions.

There was also a huge amount of dialogue, adds Clayton, mentioning the intense swearing required, too. He's a chatterbox and he's in pretty much every shot of the movie and driving the narrative, so he's talking a lot, and that needed to feel convincing.

In terms of animating the character, breathing became a central part of the process. Breathing is a big part of singing and speaking and performing, confirms Clayton. I was always eyes on with the breathing controls to make sure that the inhales were happening, then cascading down through the exhales as he's talking or singing, before another intake of breath, and away you go again. That's such a big part of making a digital character, getting that convincing breathing pass in there. Nostrils, too. Humans don't really flare their nostrils a whole lot. Here, we could use it as a way to just bring a variety and a contrast to the movement of the face and make him flare his nostrils to reflect certain emotional beats and add a complexity and a nuance to the landscape of his face.

With so much reference of Robbie Williams for everyone to pull from, Wt FX artists and Jonno Davies became extremely adept at collaborating to craft the performance of ape Robbie. Clayton details: It is actually fun in that way. We were not inventing this new character, well, we were inventing a new version of the character, but we are also retelling historical events of something that's happened. Robbie's very cavalier. He's not trying to be famous. He's not trying to pander to people. He's just being himself, so that genuine charisma is always there. As animators, it's one of the first times we've got to try to replicate this reality, this genuine, charming reality. It was very cool.

One of the significant aspects of the chimpanzee build was representing Williams' real-life hairstyles and tattoos in the character. The digital ape model was made up of 1,356,167 strands of fur, with 225,712 of those strands being shaved to replicate Williams' tattoos. Says Millar: We went through and pulled different hairstyles from Robbie's life over the years and mapped them to the eras as they appear in the movie. Initially, we just tried to take a human haircut and block it on his head, which looked terrible. It looked like an ape wearing a wig, which is not where we wanted to be, so we ended up going back to more of the chimpanzee hairline and shaving the hairstyle. The direction I gave to the team was, Imagine a chimp grew out their hair and then went into a barber and said, Make me look like Robbie Williams. What would the barber do?

In one particular scene, Williams is shown with bleached blonde hair. The first pass that the groom artist did for that, discusses Millar, was that they bleached his hair and then put this line around the back where a human hairline would end. Well, if you're a chimp, why would you stop there? So we ended up bleaching the whole body.

A similar methodology was relied upon for the tattoos, shares Millar. Rather than just placing ink under the skin for a regular tattoo, we ended up, because you wouldn't see them under all the fur, shaving them into the fur, like hair art. It was a very challenging groom situation, not one we are usually presented with. Our artists did a fantastic job of replicating all that detail as different densities and lengths of fur.

The more than 200 Robbie Williams costume changes in the film required Wt FX to collaborate closely with the costume department.
They sourced, made, borrowed, rented every outfit that you see in the movie, advises Millar. We scanned them all. We used them on set for reference, but essentially none of those costumes were ever going to actually be in front of the camera. So we had to make all those unique costumes, and then add in more variations for the fight where Robbie is fighting a whole load of different versions of himself.Rock DJ: crafting the Regent Street onerIn a three minute and 42 second long (5,334 frames) oner, Williams with band Take That are shown having signed their first record deal and bursting out onto Regent Street in London to celebrate. As the Rock DJ dance and musical sequence progresses, they are initially not that well known, so few people around them react. However, as the group transitions into different looks throughout their careers and continue dancing down the street, more and more people are swallowed into the celebration and a flash mob-like dance ensues.Vid-ref acquired by animators under Clayton proved critical for imagining the sequence in previs form. We definitely didnt shy away from embodying the character, lets say, admits Clayton. Especially in the previs, a bunch of us went into some of the musical numbers. We learned the dance moves that were going to be done in Regent Street. I mean, you could just key frame that in a simple way, but it doesnt give you bearings the same as if youve got real motion capture, even if its from computer nerds, such as myself, dancing down Regent Street. We had a great motion capture day where we broke the whole musical number into about 20 parts, and we just captured them.Our lead who played Robbie for the previs was Kate Venables, adds Clayton. Shes a dancer, so she nailed it, but the rest of us were making the best of it. When you put that motion capture into the Lidar scan of Regent Street that we had, all of a sudden it just springs to life. You can check your lenses, you can check your camera moves. Everything just starts to feel infinitely more real.The previs was provided to director of photography Erik A. Wilson. He went down Regent Street with an iPhone trying to map out the path that Dave had come up with, states Millar. There were certain ways that he couldnt quite move the camera as in the previs, so we figured out a physical path that we could actually take down the street. There was then a techvis path after that to further figure out how to move the camera and whether it was going to be crane, human mounted, et cetera, and to figure out the lensing which varied throughout the sections.A four day night shoot in Regent Street followed to film the plates, with Davies performing as Williams and many dancers also on set. To animate the CG Williams over the nearly four-minute sequence, Wt FX split it into multiple parts. We were able to do the regular treatment of overlaying our Robbie ape over the top, says Clayton, but there was a lot of scrutiny from Michael and his team, as there should be. Its one of the high points of the film, a real centerpiece, super ambitious, so it needed to look as perfect as we could make it.We had to pay particular attention to some transitional moments, say when he spins around and does a costume change, continues Clayton. Although here we could have relied on computer graphics cheats to fade things on and off, we didnt want to go that way. We wanted to make it feel like it could have really been done in camera and all the imperfections that go with that. 
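As a rough illustration of the kind of check a techvis pass like this enables, the sketch below estimates how much of the scene a given focal length covers at each point along a simple, time-aligned camera and subject path, the sort of sanity check that becomes possible once previs motion capture is sitting inside a lidar scan of the street. The sensor size, paths and helper names are illustrative assumptions, not Wt FX's actual techvis tools.

```python
import math

SENSOR_WIDTH_MM = 36.0  # assumed full-frame gate; swap in the real camera's sensor width

def horizontal_fov_deg(focal_length_mm, sensor_width_mm=SENSOR_WIDTH_MM):
    # Horizontal angle of view for a given lens on the assumed sensor.
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

def frame_width_at(focal_length_mm, distance_m, sensor_width_mm=SENSOR_WIDTH_MM):
    # Approximate width of scene covered by the frame at a given subject distance.
    return distance_m * sensor_width_mm / focal_length_mm

def check_coverage(camera_path, subject_path, focal_length_mm, subject_width_m=2.0):
    """Walk a time-aligned camera/subject path (x, y positions in metres, e.g. in
    lidar-scan space) and report whether the subject fits in frame with some margin."""
    print(f"{focal_length_mm}mm lens, horizontal FOV {horizontal_fov_deg(focal_length_mm):.1f} deg")
    for (cx, cy), (sx, sy) in zip(camera_path, subject_path):
        distance = math.hypot(sx - cx, sy - cy)
        covered = frame_width_at(focal_length_mm, distance)
        fits = covered >= subject_width_m * 1.2  # keep roughly a 20% framing margin
        print(f"d={distance:4.1f}m  frame width={covered:4.1f}m  fits={fits}")

# Example: a camera pulling back ahead of a group of dancers on a 35mm lens.
camera = [(0.0, 0.0), (0.0, -2.0), (0.0, -4.0)]
dancers = [(0.0, 5.0), (0.0, 4.5), (0.0, 4.0)]
check_coverage(camera, dancers, focal_length_mm=35.0)
```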
It was the same with the moment he jumps onto a taxi and then on the back of an iconic London double-decker bus. The physicality of that needed to be the priority. We never wanted to venture into superhero-looking stuff.

A major effort was also involved in stitching together plates and providing for building and set extensions. The action goes inside and outside shops on Regent Street. Interiors were filmed in Melbourne prior to the London shoot, and would need to be married up to the shop fronts on Regent Street. Wt FX added in digital traffic including buses and cars. Combining all the different interior and exterior plates, and CG elements, was a massive task, says Millar. I think it's got the record for the most roto tasks ever created here at Wt FX.

The shot also has many period-correct components, adds Millar. We had control over a few of the shopfronts, so they were dressed, and then there were the ones which we weren't allowed to touch, so they had to be replaced, including right at the end of Regent Street, which is Piccadilly Circus. Back in the 90s, it was a huge advertising board with fluorescent tubes, whereas now it's an LED screen. We had to replace that and take it back to its 90s look. There was one building that just happened to be under renovation. It was covered in scaffolding, so we then had to patch that so it was back to looking pristine again. Because we're transitioning through time as we're going down the street, by the time we get to the end, it's Christmas time. That meant we had to put all of the iconic Christmas lights down Regent Street and put Christmas decorations in the windows. There was a lot of augmentation work to tell that story as well.

Let me entertain you: The Knebworth Park concert

Williams' enormously attended 2003 Knebworth Park concert was re-created in Serbia. This location allowed production to film with around 2,000 extras, while a further 123,000 would be added in as digital crowd members by Wt FX. During the shoot, Millar helped co-ordinate the shooting of small chunks of 50-person crowds for close-up shots around the stage. It turned out, he says, with 2,000 extras, we could actually get most of the medium and the close-ups all in camera without needing to go digital. Essentially what we did was put on a gig in Serbia. The stage wasn't really a set, it was rented from an actual stage company that built it for concerts. The same with the lighting. We were on a studio backlot, but it was essentially a music festival that played half a song and nothing else for the entire four days that we were there. All of the band and stage workers are in-camera, and then it's the wider extension and the big crowds that became digital extensions after that.

Some archival footage from the actual Knebworth Park event was able to be intercut with the Serbia scenes. That gave us a great goal to match to as well, both in terms of what was in the event at the time, but also the quality of the finished image, which essentially was early 2000s digital video, notes Millar. Clayton adds that the original footage informed Wt FX about what they called crowd detritus, objects like inflatable toys, flags and beach balls, that would be included in the scenes. Disposable cameras and film cameras were also added as elements.

She's the One: dancing on a yacht

Another musical number occurs on a yacht in Saint-Tropez, where Williams and Nicole Appleton (Raechelle Banno) dance. This, of course, required a close level of interaction between the two characters.
Its another centerpiece moment of the film where Robbie falls in love with Nicole, describes Clayton, but its also a beautiful interweaving of transitions and going forward in time, back in time and back to the yacht. The performance was paramount and getting that interaction and that feeling that theyre there together was very important. We match moved really accurately to Nicole Appletons actress, and then it was a lot of careful work to prioritize the feeling that theyre right there, theyre interacting seamlessly.Id say its the hardest work for sure when youve got a real person and a digital person and theyre that intertwined, notes Millar. That was all captured live. Its worth noting that Raechelle, who plays Nicole, did all the dance work herself. Some people have asked us, Did you replace her head?, but we didnt. For Robbie, it was a dance double who wore a very tight-fitting mocap suit with blue fabric all over the top. That let us get the actual mocap data from that shoot.The animation team would then animate chimp Robbie, knowing that additional work would be required to simulate clothing and solve for hands and other close interactions. Once Daves got it as close as he could, explains Millar, it then literally came down to going through frame by frame and saying, Okay, this bit here needs to be smoothed down on this frame, and we need to pull that bit tight and create a hand impression here when she puts her hand there. It just becomes very painstaking detail work, which is the detail that you dont notice when you watch it, it just flies past. But if it wasnt there, it would look weird and fake.Clayton makes a point of mentioning a moment during the yacht dance where Nicole runs her hand through Robbies hair. Because weve got the match move of her hand, we can have his hair simulating and responding to that hand. Here, too, we had to deal with ape Robbies ears, which are quite big. That would come up quite often, more often than you might think, actually. He would touch his own ears and so we had animation controls for them. Other people might brush against them, so we just flopped them out of the way.My Way: At Royal Albert HallFor Williams performance at Royal Albert Hall, production filmed in two halves. First, a replica stage and floor area was built at Docklands Studio in Melbourne. We had a full orchestra and all the extras sitting around the tables in front of the stage were all part of that shoot that took place in Melbourne, details Millar. Robbie then had a concert about 10 months later at the real Royal Albert Hall where it was requested that everyone came wearing black tie, so we shot corresponding plates for everything that we filmed in Melbourne during that concert in the Royal Albert Hall.Plate footage of the performance on set in Melbourne with Jonno Davies.Plate footage of the on set audience at the Royal Albert Hall with the real Robbie Williams (L) and director Michael Gracey (R) centre stage.Lighting pass showing the detail of ape Robbies costume and hair.Final render of ape Robbie Williams closing performance at the Royal Albert Hall.This meant that wider views of the concert generally included the London audience, while Davies performance of Robbie, the audience on the ground, and the orchestra pit all acquired in Melbourne, complete with extras. 
Wt FX then combined the plates to ultimately produce an audience of 5,500 people.The workflow required a higher level of planning to establish where to place cameras in the Melbourne set that could match the shooting of the real Albert Hall concert months later. We were able to acquire a quick scan of our set and a quick scan of the real Royal Albert Hall and stick them on top of each other to line the two references up, outlines Millar. Then I could say to the DP, Okay, if you stick your camera eight meters up from this point here, then you should end up in box 36. For a lot of these key angles, when we were in that space, we had to make sure that we were in legit places where we could get a camera. Production needed to know this information right away so those seats didnt get sold when the Royal Albert Hall tickets went on sale! After the Melbourne shoot, an edit was done of the performance to help further figure out what real Albert Hall plates needed to be filmed during that concert. I think it was about 30 or 40 shots that we would need to shoot during the live concert, says Millar. The way it worked was, Robbie would come out, do half of his set, then he would disappear off. We would get four minutes to do our take with a series of colored lights on poles, which would be for eyelines because the stages were different heights and shapes. And then there was a voiceover for what the audience had to do, listen and sway or stand up and applause or things like that, which they would, whilst we went through the different lighting scenarios that take place during the musical number.The Royal Albert shoot involved some frantic moments, admits Millar. We only had two nights for this. The first night was a write-off because it was Sunday, and the British public got absolutely blind drunk throughout the whole day, so they didnt look where they were supposed to be looking! It basically left us with one night and one four-minute take to get every single shot that we needed to get for that scene. In the end we did it, but it was by far the most stressful shoot Ive ever been a part of. The beauty of it is, when you watch the scene, you can just tell that we are in the Royal Albert Hall, you can tell that those people are real. Theres a certain physicality about that whole thing, which you only get from when you are actually in that space.An additional challenge to the Royal Albert Hall sequence came in terms of lighting, that is, Wt FX needing to match the real stage lighting with their digital lighting. To help do that, Millar recognized that the stage shots were effectively a controlled environment where lighting was timed to a music time code, and therefore repeatable. So, he says, rather than halt filming to wander out with our balls and charts after every take, we said, well, we could use this. On previous shows, Ive always talked to lighting board operators and said, It would be great if your world and our world could somehow combine. Theyve always given us these files and Ive got back to base, looked at them and cant make head nor tail of what they are. But on this movie, it was absolutely critical that we could, because theres literally gantries with 50 to a hundred lights in them, and trying to replicate that after the fact without actually having that information is really hard.I sat down with the concert lighting board operator and talked through how their world works, continues Millar. 
Then we extracted all of this data from them, brought it back to Wt FX, and then some very clever people took that data and were able to replicate the lighting of the concert within our world. There were certain things that we couldnt do. Things like the brightness or colors of lights, the worlds just dont align. So for those, we just shot HDRIs of static lights that we could then source and apply it to the moving lights and also the physical space of a light. In real life, you place it somewhere, whereas obviously in the computer, you need to know where that light is. Lidar got us the position, HDRIs got us the color and the intensity of the lights. It was very satisfying to see it come to life for that first time because you see how many lights are in this thing, and suddenly theyre all moving and theyre doing exactly the same thing that they did on the day. We had all the components then to recreate the whole concert in the computer for when Robbies running around on stage, which was really cool.When your VFX supervisor becomes a star of the filmFor several scenes requiring digital extras, Wt FX happened to place visual effects supervisor Millar into, well, a lot of them. In fact, Millar served as a digital extra 767 times in the film. It wasnt for any sort of narcissistic tendencies, he protests. It was literally because we had two distinct needs for digital people. One of them was the underwater fans in Come Undone, and the other one was the paparazzi in Come Undone as well. It just so happened that throughout principal photography, whenever we had paparazzi extras, we were always on location, so we werent able to scan any of them. Wed get back to the studio and go, We need a paparazzi! And the requirement would be someone whos average build male, middle-aged, and a bit of a creep. And I was like, Oh, I can do that.Millar brought into the studio a collection of different clothes, whereupon he was scanned in different outfits. Whilst the intention was to just put me in that one scene, once I existed, I ended up everywhere. Normally Im the security guard or a bus driver or, motorcyclist, or paparazzi. Im in the movie about 700 times.Whats really funny is, adds Clayton, sometimes the extras would be my motion capture, but Lukes digital double body, merged together. Our powers combined!All images: 2024 PARAMOUNT PICTURES. ALL RIGHTS RESERVED.The post Yep, Wt FX did it, they turned Robbie Williams into a chimpanzee appeared first on befores & afters.
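As a simplified picture of the data merge Millar describes for the concert lighting, where lidar supplies each fixture's position, HDRIs of the static lights supply colour and intensity, and the desk's timecoded cue data supplies the movement, a rig-assembly step might look something like the sketch below. The data layout and names are assumptions made for illustration, not Wt FX's actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class FixtureState:
    position: tuple   # from the lidar scan of the rig (metres, venue space)
    color: tuple      # sampled from HDRIs of the static fixtures (linear RGB)
    intensity: float  # likewise derived from the HDRI exposure stack
    pan_tilt: tuple   # from the lighting desk's timecoded cue data (degrees)

def cue_at(cues, timecode):
    # Return the most recent cue at or before the given timecode.
    # `cues` is a list of (timecode_seconds, {"pan": deg, "tilt": deg}) sorted by time.
    current = cues[0][1]
    for t, values in cues:
        if t > timecode:
            break
        current = values
    return current

def build_rig(lidar_positions, hdri_samples, board_cues, timecode):
    """Assemble a per-fixture state for one moment of the song.
    All three inputs are dicts keyed by fixture id."""
    rig = {}
    for fixture_id, position in lidar_positions.items():
        sample = hdri_samples[fixture_id]
        cue = cue_at(board_cues[fixture_id], timecode)
        rig[fixture_id] = FixtureState(
            position=position,
            color=sample["color"],
            intensity=sample["intensity"],
            pan_tilt=(cue["pan"], cue["tilt"]),
        )
    return rig

# Minimal example with a single moving-head fixture.
rig = build_rig(
    lidar_positions={"mover_01": (4.2, 9.5, 7.8)},
    hdri_samples={"mover_01": {"color": (1.0, 0.85, 0.7), "intensity": 1200.0}},
    board_cues={"mover_01": [(0.0, {"pan": 0.0, "tilt": 30.0}),
                             (12.5, {"pan": 45.0, "tilt": 20.0})]},
    timecode=14.0,
)
print(rig["mover_01"])
```

The useful property is that, once assembled this way, the same cue timeline that drove the physical desk can drive the CG lights, so the digital rig repeats what the real one did on the night.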
  • A new challenge: the Offspring
    beforesandafters.com
    How Legacy Effects crafted the practical creature effects for the Offspring in Alien: Romulus. An excerpt from befores & afters magazine.The climactic encounter of the film occurs between the remaining characters and the fast-growing humanxenomorph hybrid: the Offspring. Romanian former basketball player Robert Bobroczkyi was brought on to play the tall and skinny creature, owing to his distinctive body features. Fede called us and said theyd seen Robert in some YouTube clips, and said, Do you think we could use this guy? Hes not an actor, he is an athlete. Do you think itd be a good idea? And we were like, Hell, yeah.When we were first talking about the Offspring and looked at storyboards, adds Mahan, it just looked like it was supposed to grow very fast and be like a teenage underdeveloped brain but a big, gawky thing with a lurking presence that doesnt really understand itself. The sub-base of what Robert is just naturally was going to be phenomenal. We could do the same makeup on a six-foot tall guy and it would just be okay. But Robert made it special and really made the ending tremendous. Hes 90% of the success of that creature.The Offspring make-up effects from Legacy consisted of 13 pieces of translucent silicone appliances, with portions of Bobroczkyis skin showing through. Mahan and MacGowan were particularly impressed with Bobroczkyis on-set acting, for someone who had not ever done this kind of work before. Says Mahan: Robert was phenomenal because he really took it to heart and really put the effort in to make a character. I think it was his idea to be smiling during some of it. He worked with the acting coach at his school. He worked very hard on creating the movement and character, and he just showed up ready to rock and roll.When Chris Swift and I, with the team, did his make-up test for the first time, we took him to second unit to shoot a test, recounts Mahan. We knew it was very, very special and we both said it was like when Karloff as Frankenstein walks through the door backwards. It was that magical. Fede and everybody had video monitors over on first unit and they could see us setting it up, and then everyone ran over to come to see it. They just couldnt believe it. Read the full issue of the magazine.The post A new challenge: the Offspring appeared first on befores & afters.
  • Behind the visual effects of Mufasa
    beforesandafters.com
A new short video showcases the motion capture and visual effects work by MPC.
The post Behind the visual effects of Mufasa appeared first on befores & afters.
  • Behind the scenes of Wallace & Gromit: Vengeance Most Fowl
    beforesandafters.com
Nick Park and Merlin Crossingham discuss the film and showcase puppet making and animation.
The post Behind the scenes of Wallace & Gromit: Vengeance Most Fowl appeared first on befores & afters.
  • Heres some ways one visual effects studio is using machine learning tools in production right now
    beforesandafters.com
    And its not only with the dedicated pro VFX tools you might think (its also with ones originally designed for just social media use).The topic on the top of so many minds in visual effects right now is artificial intelligence and machine learning. There are, quite simply, new developments every day in the area. But how are all these developments finding their way into VFX usage? befores & afters asked one studio owner during the recent VIEW Conference to find out what they are doing.Wylie Co. founder and CEO Jake Maymudes started his visual effects studio in 2015. He had previously worked at facilities including The Mill, Digital Domain and ILM. Wylie Co. has in recent times contributed to Dune: Part One and Part Two, Alien: Romulus, Uglies, The Killer, Thor: Love and Thunder, The Last of Us and a host of other projects. The boutique studio works on final VFX, sometimes serving as the in-house VFX team, and commonly on aspects such as postvis.The biggest change to visual effects that Maymudes has seen in recent times has come with the advent of new artificial intelligence (AI) and machine learning (ML) workflows. The studio has utilized deep learning, neural networks and generative adversarial networks (GANs) for projects. Some of this relates to dedicated VFX tools, other work, as discussed below, was even done with tools intended for just social media use.In terms of the tools now available, Maymudes is adamant that AI and ML workflows will (and already are) changing the way labor-intensive tasks like rotoscoping, motion capture and beauty work are done in VFX. Theres so much efficiency to be had by using AI tools, argues Maymudes. I see it as really the only way to survive right now in VFX by taking advantage of these efficiencies. I think the whole worlds going to change in the next couple of years. I think itll change dramatically in five. I think itll change significantly in two. I could be wrong, it could be one.Wylie Co. has leapt into this AI/ML world in both small and large ways. On She-Hulk: Attorney at Law, for example, Wylie Co. was utilizing machine learning rotoscoping in 2021 for postvis work on the series. Back then I wasnt aware of a single other company that was diving into machine learning like we were, says Maymudes. And now, weve all had that capability for years.The blue eyes of the Fremen in Dune: Part Two.A much larger way Wylie Co. used machine learning tools was on Dune: Part Two to tint thousands of Fremen characters eyes blue. That task involved using training data direct from blue tinting VFX work already done on Dune: Part One by the studio and feeding that into Nukes CopyCat node to help produce rotoscope mattes. Production visual effects supervisor Paul Lambert, who is also Wylies executive creative director, oversaw the training himself. Hes deep into AI and AI research, notes Maymudes. Hes a technologist at heart.[You can read more about Wylie Co.s Fremen blue eyes visual effects in issue #23 of befores & afters magazine.]Then, theres a different kind of approach Wylie Co. has taken with AI and ML tools that were not perhaps initially intended to be used for high-end visual effects work. The example Maymudes provides here is in relation to the studios VFX for Uglies. On that film, visual effects supervisor Janelle Ralla tasked Wylie with a range of beauty work to be done on the characters as part of the Ugly/Pretty story point. Ralla demonstrated a social media appFaceAppto Maymudes that she was using to concept the beauty work. 
The app lets users, on their smartphones, change their appearance.

Original frame inside FaceApp.

She used this app to generate the images to convey what she wanted to see, explains Maymudes. The results were really good, even for those concepts. So, I researched it, and it was an AI-based app. It had used a neural network to do the beauty work. And it did it fast.

That was an important consideration for Maymudes. The beauty work had to be completed to a limited budget and schedule, meaning the visual effects shots had to be turned around quickly.

After the FaceApp filter was applied.

Here's what Wylie Co. did using the app as part of its workflow.

We downloaded FaceApp, then brought in our plates, discusses Maymudes. I took the app and I made hero frames with the shots. Then I would take those hero frames into Nuke. I would create a dataset with these hero frames. Then I would train overnight on my Lenovo workstation with my NVIDIA GPUs for 12 hours. I'd come back in the morning, click a couple buttons, apply the inference, and it worked.

Nuke node graph for the beauty work.

We figured out a good workflow for this work through trial and error, adds Maymudes. You have to be very explicit with what you want to tell these neural networks because it's one-to-one. You're basically saying, Please do exactly this. And if the dataset you're training with is messed up, your results are going to be either really bad or not great, but not perfect, no matter what, because it's so one-to-one. It's so black and white. That's why using FaceApp was great in this regard, because it was so consistent between the hero frames.

Why Maymudes is excited about this particular use of an AI/ML tool is that it was actually designed for something else, just a fun social media purpose. But, he says, it has amazing facial tracking for face effects and gags. I mean, a lot of these tools do now. There's a lot of R&D that has gone into these tools, especially ones relating to your face. Because of that, you can pick and pull little tools here and there to use in visual effects. And if you do that, you can find just insane efficiency. That's why we used it.

Original frame.

Final beauty work.

What we do love at our company are tools that make us better artists, continues Maymudes. We have machine learning tools that do re-timing, and upscaling, and morph cuts, beauty work, matte work. All these little things that kind of take the grunt work out of it, which is nice. But I don't think machine learning is going to stop there. It's going to transform our industry. I don't actually know where it's going to go, even with how much I research it and think about it. Honestly, I think it's completely unpredictable what visual effects or the world will look like in five years. But the stuff you can do now, well, it's good, it's useful. We use it.

The post Heres some ways one visual effects studio is using machine learning tools in production right now appeared first on befores & afters.
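As a rough sketch of the hero-frame discipline Maymudes describes, the snippet below gathers matching plate frames and their app-graded counterparts into a paired manifest, flagging any frame that lacks its one-to-one partner, since a messy dataset is exactly what he warns against. The training itself happened inside Nuke in Wylie Co.'s case (the CopyCat node on Dune: Part Two, and overnight GPU training for the Uglies beauty work); the folder layout and file names here are hypothetical.

```python
from pathlib import Path
import csv

# Hypothetical layout: frames with matching filenames in each folder form a training pair.
PLATE_DIR = Path("sh010/plate_hero_frames")      # untouched hero frames pulled from the plate
GRADED_DIR = Path("sh010/faceapp_hero_frames")   # the same frames after the app's beauty pass

def build_pair_manifest(plate_dir, graded_dir, manifest_path="beauty_pairs.csv"):
    """Write a CSV of (input, ground_truth) image pairs for an image-to-image trainer.
    Only frames present in both folders are kept, so the dataset stays strictly one-to-one."""
    plates = {p.name: p for p in sorted(plate_dir.glob("*.exr"))}
    graded = {p.name: p for p in sorted(graded_dir.glob("*.exr"))}
    common = sorted(set(plates) & set(graded))
    unmatched = sorted(set(plates) ^ set(graded))
    if unmatched:
        # A mismatched dataset gives poor results no matter what, so flag it up front.
        print(f"warning: skipped {len(unmatched)} frames with no matching partner")
    with open(manifest_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["input", "ground_truth"])
        for name in common:
            writer.writerow([plates[name], graded[name]])
    return len(common)

count = build_pair_manifest(PLATE_DIR, GRADED_DIR)
print(f"{count} training pairs written")
```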
  • The visual effects of Better Man
    beforesandafters.com
A new video featurette on Wt FX's role in turning Robbie Williams into a chimpanzee.
The post The visual effects of Better Man appeared first on befores & afters.
  • The big animated features are covered in issue #25 of befores & afters mag!
    beforesandafters.com
Issue #25 of befores & afters magazine features candid interviews with the filmmakers behind some of the biggest animated features of 2024. Go behind the scenes of Inside Out 2, Moana 2, The Wild Robot, Ultraman: Rising, Transformers One, That Christmas and Wallace & Gromit: Vengeance Most Fowl.

Here are the filmmakers befores & afters interviewed for this issue, each one at VIEW Conference 2024:

Kelsey Mann, Director, Inside Out 2, Pixar Animation Studios
Chris Sanders, Director, The Wild Robot, DreamWorks Animation
Shannon Tindle, Director, Ultraman: Rising, Netflix
Hayden Jones, Overall VFX Supervisor, Ultraman: Rising, ILM
Simon Otto, Director, That Christmas
Justin Hutchinson-Chatburn, Production Designer, That Christmas
Amy Smeed, Head of Animation, Moana 2, Disney Animation Studios
Will Becher, Supervising Animator and Stop-Motion Lead, Aardman
Rob Coleman, Creative Director & Animation Supervisor, Transformers One, ILM, Sydney

Find issue #25 at your local Amazon store: USA, UK, Canada, Germany, France, Spain, Italy, Australia, Japan, Sweden, Poland, Netherlands.

The post The big animated features are covered in issue #25 of befores & afters mag! appeared first on befores & afters.
  • See the CG creature work crafted by Herne Hill for the demonic possession in The Deliverance
    beforesandafters.com
Watch the VFX breakdown exclusively here at befores & afters.
The post See the CG creature work crafted by Herne Hill for the demonic possession in The Deliverance appeared first on befores & afters.
  • Watch Scanlines VFX breakdown for Senna
    beforesandafters.com
    The post Watch Scanlines VFX breakdown for Senna appeared first on befores & afters.