Befores & Afters
A brand new visual effects and animation publication from Ian Failes.
Recent Updates
  • BEFORESANDAFTERS.COM
    Behind the tool Wētā FX developed to help ‘blockify’ geometry in ‘A Minecraft Movie’
Plus, how key characters like the evil Malgosha and the wolf Dennis were made.

When Wētā FX began venturing into A Minecraft Movie with director Jared Hess, one immediate challenge the studio faced was how to realize the ‘blocky’ style of the Minecraft world, from the video game the film is based on. “We knew right from the get-go that it wasn’t necessarily something we could approach as traditional matte painting,” identifies Wētā FX visual effects supervisor Sheldon Stopsack, who worked with production visual effects supervisor Dan Lemmon on the film. “We couldn’t paint our way out of it for the extended vistas and scopes of the Overworld and the Nether. We knew we had to build this world from the ground up, from front to back in every single aspect and detail. Our models team came up with a prototype and idea fairly early on, which ended up being called the ‘Blockz’ tool.”

The Blockz tool allowed artists to block out broad, and sometimes intricate, shapes with basic geometry. “What the models team then did was turn a watertight mesh into a closed volume, which became a point cloud, effectively,” explains Stopsack. “We then utilized this point cloud to instance individual blocks from our inventory to create a blockified version of the input geometry. The idea was to be able to utilize different block types and materials that are created just like in the game. And then, we assembled them. The tool gave us a lot of control about different scales and material propagation. We could even introduce an amount of jitter so that there was a little bit of imperfection. That all played a role in the sense of figuring out how stylized and how true to the game we wanted to go.” (A rough sketch of this blockification idea appears at the end of this article.)

The next challenge was how accurately to represent landscapes inside the world with blocks themselves (as they are in the game). “We wanted to honor the game very much so we looked at, what if we build everything with a true-to-the-game block size, which might be a meter high each?” recalls Stopsack. “But, if you do something like that and then you build, say for the Overworld environment, a mountain in the background, which we called Mount Minecraft, it would be six kilometers high. So, if you were to build it true to the scale of the blocks in the game, you wouldn’t even see a block anymore. It’s so tall, it would basically be a subpixel blur and your block size wouldn’t necessarily work anymore.”

For this reason, the studio decided not to represent assets in those exact block sizes, but instead play with different scales. It still resulted in something like Mount Minecraft being 3,434,127 individual cubes (i.e. around 20 million faces), but with the ability to art direct the look a little more. “In the end,” says Stopsack, “we would bake the asset down and then we created a new hero holistic asset from that which we then could treat a little bit more organically with more textures and treat them so that they were not just replicas of each other. It was an ongoing journey to find the right balance of blockiness and stylization, yet also obeying some realism. We also had to combine this with live-action photography, so there was no escape.”

How much ‘blockiness’ to go with was also a consideration for the characters that Wētā FX developed, such as Dennis the wolf. “He was a struggle to get right,” notes Wētā FX animation supervisor Kevin Estey, referring to the translation of the wolf from a blocky character in the game to the film.
“Every time we tried to round anything off like his smile or the corners of his eyes or anything like that, if you went a little bit too round, it almost got a bit uncanny in a weird way. It took a while to finally crack the code and understand that we had to stick to the squarish aesthetic with harder corners in the eyes and harder corners in the mouth.”

A blockiness style applied to the animation as well, to some degree, advises Estey. “We did approach everything with the understanding that it needed to sit in reality with live-action characters, which meant we couldn’t go so stylized that we ended up doing, say, step animation or anything like that. But we did have freedom to explore different things that might not always work with standard visual effects in a live-action film.”

This related further to facial animation in terms of the face shapes that Wētā FX built for Dennis. “I remember that was a big moment that we were in the middle of the shoot with the production team and we were still trying to figure out how to make Dennis not look kind of freaky,” shares Estey. “When we realized that the mouth shapes had to maintain the corners that we build into the facial animation, that was a big moment for understanding that the squarishness and the blockiness needed to travel through into the motion as well.”

Making Malgosha

Another standout character for Wētā FX was the Piglin ruler of the Nether, Malgosha (voiced by Rachel House). The studio referenced Mama Fratelli from The Goonies, Emperor Palpatine from Star Wars and the Skeksis from The Dark Crystal in their designs for Malgosha.

“She is so unique in having a super squared-off fridge-sized hunchback,” says Stopsack, who observes that bringing that kind of look to life was a tough task. “Her appearance was almost 90 percent cloak. The cloak played a pretty pivotal role. We were fortunate enough that the production’s costume department actually built a super-elaborate 35 kilogram heavy cloak for on-set reference. It was so heavy, it wasn’t practical to use for any sort of proxy actor to actively wear on camera, but it was good reference.”

Wētā FX also engaged with creature performer Allan Henry, standing in for Malgosha on set. After principal photography, Henry was part of a motion capture shoot at the visual effects studio’s mocap stage that further aided in finding the character.

“Malgosha’s hunchback was something that spoke a lot to what her gait would be and how she’d carry herself,” describes Estey. “Allan was great in figuring out how to make her walk with a bit of a limp, even to the point where he would favor a certain side of his body. We always tried to make sure that the staff was on a certain side that was helping the limp. He would always carry in his other hand—we just called it the chicken claw. And so, that became something that was very synonymous with Malgosha’s character and it was fun because she kept hooking her finger inside the cloak. It was all those pieces of input that helped with our exploration of how you make this giant, hulking refrigerator with gangly limbs move in a realistic way, as well as Rachel’s input.”

One particular sequence in the third act battle sees Malgosha, near defeat, try to convince Jeff to step closer to her so she can stab him with a series of concealed blades. Each attempt becomes more humorous than the last. “I remember laughing so loud on set when this was actually played out with Jack and Allan,” notes Stopsack.
“It was absolutely hilarious.”

“Yes, the entire crew of 100 people had to just hold their mouths, and then, as soon as they called cut, people would burst out into laughter,” recalls Estey. “Allan and Jack came up with the final line. Jack’s character throughout the film is always saying ‘sneak attack’ and they thought it would be funny if Malgosha also says it at the end when she does her final blade throw in the most useless and weak way. And that’s the thing that just had people absolutely rolling.”

Estey mentions that, in addition to the hilarity of that moment on screen, Wētā FX was actually also able to craft some background action with some Piglins holding censers that played further into the slightly absurd nature surrounding the character. “This really speaks to the free-flowing nature of the production. We had a two week mocap shoot here at Wētā, and Jared came up to me and the stage manager and said, ‘I’ve got this really stupid idea, could we get two little Piglins that have censers with incense. I just think it’d be funny if they’re always next to Malgosha, no matter what.’ So, we mocked something up. And on set, we just started calling them ‘Priestie boys’ and then that became their name, internally.”

The wackiness surrounding Malgosha did not stop there. At one point, she stabs a little Piglin who has made a drawing of an idealized house and garden. It turns the Piglin immediately into a pork steak. “Originally,” states Estey, “what was meant to happen was that she smacks the Piglin away and it’s meant to fall back and off-screen. We did a few like that on the mocap stage, but then I think Jared said, ‘What if we stabbed the Piglin?’ I’m like, ‘Can we do that? Is this a PG movie?’ Then I thought, ‘Well, what if the Piglin just turns into a pork chop on the end of the knife?’ So that’s what we did. I love that it made it because I thought, ‘I don’t know, stab a kid?’”

The drawing the Piglin shows Malgosha just happened to be one made by a daughter of Stopsack, as he recounts. “We knew we had this drawing to do, so we encouraged kids to do some artwork. We actually called it, ‘We Need a Terrible Drawing.’ Not exactly an incentive to the child—’Hey, can you provide me with a terrible drawing?’ But, we had this art competition and all these kids of our co-workers did them. The original drawing was Malgosha and the Piglin holding hands, very cute, but Jared discarded the idea for a little while. Then it was back in, and I tasked my kids with, ‘Hey, can you give us a terrible drawing of a Minecraft house?’ So yet again, they went back to the drawing board and both of them chimed in and put their best foot forward, and we submitted it to Jared for review.”

“And then, ultimately, he made a pick, but the pick didn’t go through without a note! I had to break the news to my daughter that she had to address the note. So she gave us another revision and we made another submission with that note addressed, and we submitted it to Jared and it got finally approved, which was great.”

Crafting more characters

In addition to Malgosha and Dennis, Wētā FX was responsible for designing a range of others featured in the film. These were also shared with the show’s other VFX vendors including Sony Pictures Imageworks and Digital Domain. The bee and the sheep are two of Estey’s favorites from the film, all the way from early movement tests. “We did these internal asset turntables where we presented a character, both internally and to the client.
I decided to use that as a bit of a platform to have fun with the characters and see what we could do.”

“And so,” adds Estey, “I had come up with this bee animation of him flying around a C-stand and then running into the camera, which made it into one of the trailers. That was just from an internal test. I really wanted to make him run into the camera and then knock it over. I even made him stain pollen on the lens. It was just to get a bit of a laugh out of everyone internally.”

The sheep was a clear example of the way Wētā FX got to interpret the Minecraft universe for the film, flags Estey. “It was an example of the creative license and freedom that the animation team felt in creating some of the motion. There’s actually a moment where the sheep just randomly pukes up some grass that it’s been chewing. It was one of our animators who decided to try to give us a laugh by making it puke. I said, ‘I’m sending it.’ I knew Jared would probably love it and put it in the film, and sure enough he did.”

“What’s fun about these characters,” comments Stopsack, “is that all of them were really treated as if they were the hero of the show. Each of them had their own unique little story. We even kept calling the Piglins by individual names, like Grunter, Trotsky, Snowball, Snout—so many. They all went through their own deserved cycle of working out, what is their appearance, body behavior and emotion?”

‘No longer the experts in the room’

While Wētā FX is used to solving complex visual effects problems, one aspect of working on A Minecraft Movie left the team perhaps more humbled than usual. “We were no longer the experts in the room,” admits Stopsack. “Typically, you are the supervisor, you go into the room, you give notes, and everyone expects you to have the answers. On this one, it was interesting because we were like, ‘OK, do this,’ and then, all of a sudden, a voice in the dailies review sessions would be, ‘Yeah, but in the game, it’s like this…’. And then, you’re like, ‘Okay, I learned a thing.’ We were constantly schooled about what is true in the game and what the rules are within Minecraft.”

“We had so many people on the team that were just so savvy,” adds Stopsack. “They were consultants, really, because they’ve been living and breathing this game for most of their lives and they all chimed in. It was fantastic because it was a completely different level of engagement.”

Estey agrees, and to help honor the legacy of the video game he began playing it again during production with his son, now 14, but who originally played Minecraft when younger. “It helped me understand the ins and outs of it. I remember there was a dailies session where Sheldon was discussing the scene of Jack throwing some blocks to make a building. Sheldon was asking, ‘Should we maybe mix up the materials of these blocks?’ and I just chimed in by saying, ‘Well, make sure you don’t do dirt because dirt won’t stick on the side of wood, it’ll just fall to the ground.’ I really surprised myself, knowing this!”

The post Behind the tool Wētā FX developed to help ‘blockify’ geometry in ‘A Minecraft Movie’ appeared first on befores & afters.
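For readers who like to think in code, here is a minimal Python sketch of the blockification idea Stopsack describes above: treat a watertight shape as a closed volume, sample it into a point cloud of cell centers, then instance a block at each point with a little scale jitter and a simple material choice. It is purely illustrative and is not Wētā FX's actual Blockz tool; the block size, jitter amount, material names and the stand-in sphere volume are all assumptions.

```python
# Illustrative sketch only, not Weta FX's Blockz code. It mimics the described steps:
# closed volume -> point cloud of cell centers -> instanced blocks with jitter.
import random

BLOCK_SIZE = 1.0        # nominal block size in meters (assumed)
JITTER = 0.05           # small scale imperfection, as mentioned in the article
MATERIALS = ["dirt", "stone", "snow"]   # placeholder inventory names

def inside_volume(x, y, z):
    """Stand-in for the closed-volume test; here, a simple sphere of radius 8."""
    return x * x + y * y + z * z <= 8.0 ** 2

def pick_material(y, y_max):
    """Toy material propagation: dirt low, stone mid, snow near the top."""
    t = y / y_max
    return MATERIALS[0] if t < 0.4 else MATERIALS[1] if t < 0.8 else MATERIALS[2]

def blockify(extent=8, size=BLOCK_SIZE):
    """Return a list of block instances (position, scale, material)."""
    blocks = []
    steps = int(extent / size)
    for i in range(-steps, steps + 1):
        for j in range(0, 2 * steps + 1):          # y >= 0 only, like terrain
            for k in range(-steps, steps + 1):
                x, y, z = i * size, j * size, k * size
                if not inside_volume(x, y, z):
                    continue
                scale = size * (1.0 + random.uniform(-JITTER, JITTER))
                blocks.append({"pos": (x, y, z),
                               "scale": scale,
                               "material": pick_material(y, extent)})
    return blocks

if __name__ == "__main__":
    cubes = blockify()
    print(f"instanced {len(cubes)} blocks")
```

In production the point cloud would of course come from the real input geometry rather than an implicit sphere, and the instancing, scale mixing and material propagation would be art-directed per asset.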
  • BEFORESANDAFTERS.COM
    Capitol locations: the visual effects of ‘Zero Day’
    Go behind the scenes with VFX studio RVX. The political thriller mini-series Zero Day, currently streaming on Netflix, tells the story of a former US President (played by Robert De Niro) given the power to investigate a deadly ‘zero-day’ cyberattack. To help craft a number of scenes in Washington, D.C. and other key locations, production visual effects supervisor Douglas Purver and production visual effects producer Leah Orsini brought on Icelandic VFX studio RVX. Zero Day required a fast turnaround time–shot work began in September 2024 and was completed in January 2025–-to deliver 311 VFX shots. One of the first key locales RVX tackled was the exterior of the Capitol building. Plates had actually been filmed at a New York courthouse, with the team then responsible for extending an area of stairs to make it look like the front of the Capitol.  “I believe the stairs they shot at were a concrete structure, whereas the real Capitol building is marble,” outlines RVX visual effects supervisor Ingo Gudmundsson. “It was a lot of bringing the concrete a bit closer to marble and the other way around. We couldn’t do a full retexture of the practical set, but we did a pretty extensive cleanup, especially of the staircase and the columns. We had a really amazing comp team, led by our compositing supervisor Bragi Brynjarsson.” “We also had to do a fairly extensive rejigging of the proportions of the portico, the entrance, which was different to where they had filmed,” continues Gudmundsson. “I think there’s eight columns in the Capitol exterior, but there were 10 on the New York exterior. We squeezed in and shifted things around a little to make it work.”  Another challenge on those Capitol shots was lighting direction. “In some of the shots we were cheating the light direction a little,” says Gudmundsson. “In the plate, the sun was behind the courthouse building. We saw a lot of reflected light from different window panes in the high-rises behind the camera, and we had to cheat that as being light pouring through clouds. We had to figure out a way to light the dome and the rest of the building nicely and then figure out a way for those reflected light pools to look believable.” From RVX’s reel: driving comps formed part of the studio’s VFX work. Also in Washington, D.C., RVX carried out several driving comps that included adjustments to the city, owing to the story point of the area being in lockdown. “We had to remove any visible crowds, tourists milling about, that kind of thing,” notes Gudmundsson. “Sometimes they managed to place a few cop cars and military vehicles in the background, then they would ask us to add blinking lights to those or augment with CG vehicles as well.” Later, for scenes taking place in the U.S. Congress, RVX delivered set extensions and digital crowds. “Even though we knew this was featuring in the last episode, it was something that we needed to start right away,” recalls RVX visual effects producer Jan Guilfoyle. “So the House Chamber environment and associated CG crowd was already underway and going on in the background while we were delivering the early episodes.” RVX was provided with set measurements, scans and reference photography of the Congress set. This included scans of extras that could then be rigged as digi-double assets for a crowd system. “The crowd system we used was Atoms,” notes Guilfoyle. “We shot motion capture here in Iceland, with RVX staff performing the roles of US senators and congresspeople. 
Crowd TD Valdimar Baldvinsson handled all the House chamber crowd himself. Stavros Theiakos was the lead compositor on the sequence.” A different kind of visual effects approach was required for a moment when we go into the former President’s mind and he is rooting around in his failing memory, when books, desk items and artifacts around him flicker on and off in a timelapse fashion. “It was a very unique challenge on the show,” observes Gudmundsson. “There was a lot of creative interpretation in how to do that. This is purely in his mind; a visualization of his loss of memory.” “It was the one thing that wasn’t grounded in reality,” adds Guilfoyle. “Everything else was, ‘We know what we need to do for this. We need to replicate this real building or do this other thing.’ But this was the one thing that was like, ‘Okay, that will be interesting when that comes in and how that’s going to work?’” RVX received several plates that were at different stages in terms of being full, empty or partially full of props, as well as various lighting turned on and off.  Gudmundsson praises the work of Henrik Linnet, the lead compositor on this memory sequence, who brought all the plates together. “He really picked it up and ran with it. We trusted him to come up with something visually interesting. We picked one hero shot that was our testbed of what the look could be, what the flickers mean, what does it settle into? It was a real creative challenge.” The post Capitol locations: the visual effects of ‘Zero Day’ appeared first on befores & afters.
  • BEFORESANDAFTERS.COM
    iClone 8.53: Advanced Sync For Professional Mocap Workflows
    Timecode Sync and Vicon Compatibility Now Available in iClone. Reallusion’s iClone 8.53 introduces groundbreaking timecode synchronization to streamline motion capture workflows. With the new Timecode Plugin, animators and mocap professionals can now easily sync data across multiple sources, including mocap, video, and audio, ensuring precise timing from pre-production to post. Overview of iClone 8.53 New Features: Synchronized Motion Capture Timecode is fully integrated into Motion LIVE (the iClone built-in motion capture feature), embedding sync data directly into live mocap recordings. Capture multiple devices (face, body, hand) and actors at once, with the option to choose your preferred timecode source for accurate, real-time synchronization. Multiple Device Capture Connect and sync various mocap devices — body suits, facial trackers, hand sensors — into one seamless performance pipeline. Multiple Actor Capture Record multiple performers simultaneously, with all data precisely aligned for clean playback and streamlined editing. Multiple Media Sources Integrate video, audio, and mocap — all synced to the same timecode for a unified production timeline. Director Quad View View live feeds from multiple perspectives — mocap data, virtual scene, camera angles, and talent — all in real time, all in sync. Directors can now view multiple perspectives in real-time, including mocap data, virtual scenes, camera angles, and talent, all in sync for better monitoring and decision-making. >> Check out the demo on the product page Powerful Timecode Editing The Timecode Plugin makes editing intuitive and effortless, starting with the ability to import FBX, video, audio, and iClone files with embedded timecode, making timecode visible directly on the timeline. Snap clips to embedded timecode with a single click, ensuring everything stays perfectly in sync. Finally, custom options and Unreal Live Link support allow for exporting files with timecode embedded, providing a seamless workflow from start to finish. Import Timecode Easily bring in timecode data from FBX, video, and audio, ensuring precise synchronization across all media sources. Switch to Timecode Mode Customize time units and match frame rates for accurate, project-wide synchronization. Auto Align to Timecode Instantly align animation files using embedded timecode or define a custom start frame for precise timing. Native Timecode Support in iClone Motion Formats iTalk (facial, audio, lip sync) iMotion (body) iMotionPlus (facial, body, expression, lip sync) iProject (characters, motions, lighting, camera, audio) Save & Export Timecode (Add timecode to Non-timecoded data) Maintain accurate sync while seamlessly integrating both custom and purchased motion assets across your production pipeline. >> Check out the demo on the product page Bidirectional Timecode Pipeline Transmit and adapt timecode in both directions for a streamlined workflow. Seamlessly integrate with MotionBuilder, Maya, and Unreal Engine for smooth editing, rendering, and real-time production, enhancing efficiency and collaboration within existing workflows. >> Check out the demo on the product page Coordinate with Burn-In Data Overlay timecode, take, date, camera, frame, and more — with multiple display and  layout options to suit any workflow. Display in the viewport or final render using fully customizable burn-in settings for clear, production-ready visuals. 
>> Check out the demo on the product page Previz Compositing Integrate mocap seamlessly into previz using iClone’s lighting and post-effects tools. The Timecode Plug-in enables smooth blending for a more immersive and production-ready preview. >> Check out the latest offer for the Timecode Plugin Perfect Combo For Professional Mocap: iClone and Vicon Partnering with Vicon, the new Vicon Profile in Motion LIVE allows seamless connection of Vicon mocap systems to iClone for real-time animation. It offers a unified, high-precision workflow for animating animation-ready characters from Character Creator. More importantly, Vicon can now capture both full-body and facial simultaneously through Motion LIVE. The Vicon integration is fully compatible with iClone’s Timecode Plugin, enabling accurate multi-device sync, time-aligned recording, and smooth integration into virtual production pipelines. With full compatibility, users can leverage both iClone and Shogun’s mocap editing capabilities, combining the strengths of both software for optimal results. >> Check out the latest offer for the Vicon Profile Other New Updates AI Search Premium Motions Deep Search is an AI-powered tool that simplifies finding digital assets in the Reallusion ecosystem using natural language queries, category filtering, and image uploads.  The new Motion Deep Search feature lets users search for ActorCore motion assets in the same intuitive way. Simply describe the motion, and the tool provides relevant results with an option to find similar motions, making it easier to locate the perfect animation for any project. New Lip-Sync Animation Workflow The latest update gives users the flexibility to refine lip-sync animations with AccuLips without being forced to do so during the initial audio import. This allows for a more streamlined workflow, letting users focus on other aspects before fine-tuning the lip-syncing later. Related Sources Learn more about Timecode Plugin features Learn more about Vicon Profile Timecode Tutorials iClone 8.53 release note Learn more about iClone 8 Online Manual FAQ Brought to you by Reallusion: This article is part of the befores & afters VFX Insight series. If you’d like to promote your VFX/animation/CG tech or service, you can find out more about the VFX Insight series here. The post iClone 8.53: Advanced Sync For Professional Mocap Workflows appeared first on befores & afters.
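As a rough illustration of what timecode-based alignment involves under the hood (this is not Reallusion's implementation, just the generic arithmetic a "snap clips to embedded timecode" feature relies on), a non-drop-frame SMPTE timecode can be converted to an absolute frame count and each clip offset onto one shared timeline:

```python
# Generic timecode-alignment sketch. Non-drop-frame SMPTE timecode is assumed;
# the clip names and timecode values below are made up for illustration.

def tc_to_frames(tc: str, fps: int) -> int:
    """Convert 'HH:MM:SS:FF' to an absolute frame count."""
    hh, mm, ss, ff = (int(p) for p in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def align_clips(clips, fps=24):
    """Offset every clip so they all sit on one shared timeline.
    Each clip is a dict with a 'start_tc' field (its embedded timecode)."""
    starts = {name: tc_to_frames(c["start_tc"], fps) for name, c in clips.items()}
    origin = min(starts.values())            # earliest source becomes frame 0
    return {name: start - origin for name, start in starts.items()}

if __name__ == "__main__":
    sources = {
        "body_mocap":  {"start_tc": "01:00:00:00"},
        "face_mocap":  {"start_tc": "01:00:00:12"},
        "witness_cam": {"start_tc": "00:59:58:00"},
    }
    print(align_clips(sources, fps=24))
    # -> {'body_mocap': 48, 'face_mocap': 60, 'witness_cam': 0}
```

In a real project the frame rate and any drop-frame handling come from the project settings, and the embedded timecode is read from the media rather than typed in by hand.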
  • BEFORESANDAFTERS.COM
    Marvel releases ‘Assembled: The Making of Black Widow’ online
    The full featurette previously on Disney+ is now on YouTube, and includes behind the scenes of the shoot and VFX breakdowns. The post Marvel releases ‘Assembled: The Making of Black Widow’ online appeared first on befores & afters.
  • BEFORESANDAFTERS.COM
    This ‘1923’ CG horse visual effects work by WeFX is…intoxicating
    Check it out in their video breakdown of the shot. I love how this was achieved. And find out more about WeFX here. Some notes from a WeFX release: The scene, depicting the harrowing demise of a horse and its rider during a high-speed chase across the American frontier, unfolds over roughly 20 shots and was brought to life by a 50-person team led by VFX Supervisor Ryan Ng. The sequence required a seamless blend of digital artistry and practical effects to capture a pivotal and emotionally charged moment in one of the season’s final episodes. The scene depicts Native American Pete Plenty Clouds being pursued by Marshal Kent and Father Renaud. In a desperate attempt to escape, Pete pushes his horse to its limits. As exhaustion sets in, the horse begins foaming and bleeding from the mouth before succumbing to its injuries and collapsing to the ground with Pete in a tangle of dust and limbs. This pivotal moment required seamless integration of practical and digital elements to achieve photorealism, making it a very complex sequence in the series. Given creator Taylor Sheridan’s deep expertise in horsemanship, the CG horse needed to be indistinguishable from the real one(s) used in the show. The WeFX team meticulously studied reference footage of horse racing accidents, carefully analyzing how horses fall and collapse mid-gallop. A full digital double was created for both the horse and its rider, Pete, with intricate details such as muscle movement, skin simulations, and dynamic hair and tail physics. Additionally, every part of the saddle was modeled and simulated to ensure accuracy. One of the biggest challenges was blending the CG elements with the practical environment. A special effects dummy horse was used on set, and the CG animation had to precisely match its movement and final resting position. A hero digital ground asset was also recreated, complete with matching grasses, foliage, and topography to seamlessly replace any traces of the set while ensuring the terrain appeared as untouched natural plains. Both Pete and the horse’s interactions with the ground were carefully matchmoved to ensure realistic interactivity and dust level continuity throughout. To bring the sequence to life, the team relied on industry-standard software—including ZBrush, Maya, Houdini, Nuke, and Arnold—to simulate everything from muscle dynamics to environmental integration. The result is a technically complex, photorealistic sequence that highlights the precision and innovation driving modern visual effects. Credits Ryan Ng vfx supervisor Filip Kicev cg supervisor Parichoy Choudhury cg supervisor Josh Clark tracking & matchmove supervisor Pankaj Brijlani matchmove supervisor Ehsan Ramezani compositing supervisor Scott Buda compositing supervisor Ethan Lee rigging supervisor Patrick Hirlehey vfx editor Vimal Mallireddy fx supervisor Steve Stransman executive producer Mohammad Ghorbankarimi executive vfx supervisor Amanda Lariviere executive producer & head of studio Igor Avdyushin head of cg Laurence Cymet head of technology Wes Heo head of 2d Brandon Terry head of editorial & i/o Jeremiah McWhirter editorial lead Maurizio Sestito senior vfx editor Austin Baerg layout supervisor Cesar Dacol Jr. 
creature supervisor Carlos Jacinto creature artist Brandon Golding creature artist Mauro D’Elia Matheus creature artist Amanda Heppner lead creature/lookdev artist Dajeong Park groomer/lookdev Eric Fernandez Garcia groomer/lookdev Arian van Zyl senior compositor Kenwei Lin senior compositor Keyur Patel senior compositor Prashant Goel senior compositor Lisa Jiang lead compositor Ashley Hakker compositor Christian Linker compositor Corey Allen compositor Kathryn Fay compositor Lurival Jones compositor Duc Nhan Nguyen junior lighting artist Fernando Gallo lead animator Mariia Nikiforova animator Tunyakarn Anucharchart animator Nadav Ehrlich animation supervisor Jay Kinsella associate animation supervisor Sonny Ong asset supervisor Bruna Hirosse Albino junior asset artist Jejin Lee asset artist Hyelee Park asset artist André Suk Hwan Ko cg generalist Darren Lesmana fx artist Mykyta Berezin fx artist Jesús Guijarro Piñal lead cfx Lewis Hawkes rigger Pankaj Brijlani matchmove supervisor Olabisi Famutimi senior lighting artist Rainy Chi Yuen Tsang lighting artist Viduttam Rajan Katkar lighting pipeline supervisor The post This ‘1923’ CG horse visual effects work by WeFX is…intoxicating appeared first on befores & afters.
  • BEFORESANDAFTERS.COM
    Beyond Limits: How iRender Powers Octane Rendering in 2025
    Artificial intelligence is making a revolution in 3D. AI-powered tools have helped generate textures, enhance lighting, and even denoise previews, making artists work faster and easier. Everything seems perfect—until it’s time to render. The machine starts slowing down, the fan goes into overdrive, and the estimated render time stretches into hours, if not days. Sound familiar? Despite AI advancements, final rendering still demands raw GPU power, and that’s one thing AI cannot replace. While AI can optimize workflows and accelerate creative iterations, it doesn’t change the fundamental physics of rendering. Realistic path-tracing, complex reflections, and high-fidelity lighting calculations still require immense processing power. This is where render farms come in – not as an alternative, but as an essential part of the 3D world. For OctaneRender artists, a next-gen GPU render farm like iRender bridges the gap between limitless creativity and the harsh reality of rendering constraints. Rendering a C4D scene with Octane on iRender’s 8 x RTX 4090 server Why Render Farms Are More Important Than Ever? AI-driven tools have significantly improved rendering efficiency, but they still rely on powerful GPUs to function. While these tools speed up previews, they cannot replace final, high-resolution path-traced rendering. The reason is that even the most advanced AI features don’t calculate light bounces, volumetric effects, or physically accurate reflections, they simply approximate them. For true-to-life results, artists still need multi-GPU setups to complete their final renders. Though AI is supposed to speed up rendering, it often increases GPU workload. AI-driven simulations, deep-learning-based texture enhancements, and automated material generation all require more GPU power, not less. Even high-end GPU setups can be overwhelmed with high-poly environments, volumetric lighting, and complex shading. For freelancers and studios alike, deadlines are unforgiving. Animations and large projects can take days to complete on local computers, which slows down revisions and final deliveries. Instead of sinking money into expensive multi-GPU rigs, artists can tap into cloud-based rendering solutions that scale as needed. This is where iRender changes the game. How iRender Powers Octane Rendering in 2025 iRender is the next-gen render farm for OctaneRender in 2025. Unlike traditional render farms, iRender is an Infrastructure-as-a-Service (IaaS) platform, giving artists complete control over dedicated high-performance GPU servers. Instead of uploading jobs and waiting, you work in real time, just like on a local machine—except with significantly more power. Why Octane Artists Choose iRender Dedicated High-Performance Multi-GPU Servers OctaneRender is built for multi-GPU acceleration, and iRender provides the power artists need to maximize speed and quality. Users can select from single GPU to multiple GPU setups: 1/2/4/6/8 x RTX 4090 / RTX 3090 GPUs. No need to queue, you have real-time access to dedicated machines. Full Creative Control Unlike traditional render farms where you submit jobs and wait, iRender provides remote desktop access to high-end workstations. Install any version of OctaneRender and your preferred 3D software (Cinema 4D, Blender, Houdini, Maya, UE, etc.). Pause, tweak, and resume renders in real time – no need to start over. Work in a familiar, local-like environment, but with unmatched power. Cost-Efficient Scaling Rendering shouldn’t break the bank. 
iRender offers pay-as-you-go pricing, meaning you only pay for the hours you need. Also, there are long-term rental plans with cost savings for studios handling extensive projects. This flexibility enables artists and studios to scale instantly, eliminating the need for expensive in-house GPU investments. Reliable Performance & 24/7 Support The optimized hardware of iRender Farm ensures stable performance with minimal risk of crashes or failed renders. Especially, its 24/7 dedicated support team is ready to assist you at any time. What’s Next for iRender? Even in this AI booming era, rendering power remains a critical factor in producing high-quality 3D visuals. OctaneRender artists can no longer afford to rely solely on local machines. With iRender, you get full control, real-time access, and unmatched GPU performance – all without the hassle of hardware upgrades. The 3D industry is moving faster than ever, and iRender is at the forefront of innovation, continuously upgrading its infrastructure to support the latest technologies. One of its recent features is the Staking (Stake to Earn) program. Users can stake unused iRender Points to earn additional render credits. With NVIDIA’s next-gen GPUs launch, iRender will soon integrate RTX 5090 GPU into its render farm. Most anticipated, a new data center in South Korea is on the way in 2025. Sign up for iRender today and experience Octane rendering without limits! Brought to you by iRender: This article is part of the befores & afters VFX Insight series. If you’d like to promote your VFX/animation/CG tech or service, you can find out more about the VFX Insight series here. The post Beyond Limits: How iRender Powers Octane Rendering in 2025 appeared first on befores & afters.
  • BEFORESANDAFTERS.COM
I mean, is there anything cooler than ILM’s John Knoll discussing how he built a motion control system?
Here, he discusses it with Tested’s Adam Savage and how it was used for shooting the Onyx Cinder for Star Wars: Skeleton Crew. The post I mean, is there anything cooler than ILM’s John Knoll discussing how he built a motion control system? appeared first on befores & afters.
  • BEFORESANDAFTERS.COM
    The new motion prediction from Wonder Dynamics, and what’s coming with facial animation
A new podcast in the AI, ML and VFX series. Today on the befores & afters podcast, we’re chatting with Nikola Todorovic. He is the co-founder of Wonder Dynamics, which is now an Autodesk company. Wonder Dynamics really burst onto the scene a few years ago with its AI-powered toolset. One of the main things it can do is let you capture a scene with real performers in it with only a single camera, and then turn those people into CG characters using markerless mocap. That’s just one part of it, though, and there’s a matchmoving, tracking, clean plating and rendering side to what the toolset does. With Nikola, we’re taking a look back at the big developments at Wonder Dynamics – now part of Autodesk Flow Studio – including new aspects such as motion prediction, and what the team is hoping to do with facial performance. Listen in above, and check out some behind the scenes videos below. The post The new motion prediction from Wonder Dynamics, and what’s coming with facial animation appeared first on befores & afters.
  • BEFORESANDAFTERS.COM
    On The Set Pic: ‘The Legend of Ochi’
A post shared by A24 (@a24). The post On The Set Pic: ‘The Legend of Ochi’ appeared first on befores & afters.
  • BEFORESANDAFTERS.COM
    Across The Rift
The dramatic school bus fight in ‘In the Lost Lands’. An excerpt from befores & afters magazine.

In The Lost Lands is full of spectacular set pieces. Here, the team from Herne Hill break down the designs for The Rift, and also that tense moment where the characters must cross the gorge by cable car.

Mark O. Hammond (visual effects supervisor): The Rift is a brutal environment of a city that has been split in half by centuries of earthquakes. This environment was originally built in Unreal Engine for previs and used on set for the backgrounds. We reworked it afterwards in the traditional pipeline for some shots, but both assets are used in the final film.

David Roby (CG supervisor): For The Rift’s design influences, in addition to the obvious urban destruction, Paul wanted to include some focal points of Art Deco architecture to help keep the geography legible. This primarily took shape in The Rift train station and the skyscraper that breaks the bus-cable-car’s fall. We also looked at a bunch of reference of geothermal vents to help get the feel right for the large emissions of noxious gases from the bottom of The Rift. In terms of construction, we used some hand layout for areas where we needed more precise, art-directed control such as near the seam-up with set or near action, as well as procedural workflows for volume. The linchpin of the procedural stuff was a really amazing city generator that CG supervisor Ben King got working for The Rift. You could paint, to camera, where you wanted roads, low-rise areas, etc. for a pleasing composition, and then cook out several point clouds which fed into the main environment.

Mark O. Hammond (visual effects supervisor): For the cable-car scene, the live-action portion was filmed on a custom-built rotating platform for the bus. Mo-Sys rigged up a node that allowed us to track the speed of the rotation of the platform. This allowed us to solve for the camera’s rotation on set. Milla and the stunt performers handled the fight on top of the bus, which we then enhanced with digital doubles for more dangerous moves and to help with continuity.

David Roby (CG supervisor): The biggest artistic challenge was to make sure that we felt the motion of the bus across the gorge during the fight. To make sure the bus felt like it was moving through space we made sure our gas plumes passed close enough to the bus to really feel the parallax as well as introducing a slight shift to the photographed elements in frame as if suspended from the cables above. Then there was the bus’ fall which we worked on to keep the speed and weight of both the bus and its cables feeling plausible within the movie’s vibe. From a technical side, the cable unwinding as it breaks was an interesting challenge. Allowing the sim to twist on itself in a way that still felt metallic was a delicate balance to find. In addition, the sequence had a fair number of relatively hero digidoubles for the monks. We had to add some to make sure the number they leave the station with feels plausible as they keep climbing out of the woodwork to fight.

The post Across The Rift appeared first on befores & afters.
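As a rough sketch of the painted-layout idea Roby describes (not Herne Hill's actual city generator), the workflow boils down to tagging regions of a painted map by zone and "cooking out" a point cloud per zone for a downstream instancer. Everything below, from the tiny hard-coded map to the densities, is an assumption for illustration.

```python
# Toy mask-driven layout sketch: a painted 2D map tags each cell as road,
# low-rise or high-rise, and we emit one scatter point cloud per zone.
import random

# 0 = road, 1 = low-rise, 2 = high-rise  (stand-in for a painted mask)
PAINTED_MAP = [
    [2, 2, 0, 1, 1],
    [2, 2, 0, 1, 1],
    [0, 0, 0, 0, 0],
    [1, 1, 0, 2, 2],
]
CELL_SIZE = 10.0                 # meters per painted cell (assumed)
DENSITY = {1: 3, 2: 6}           # buildings scattered per cell, by zone

def cook_point_clouds(painted, cell=CELL_SIZE, seed=7):
    rng = random.Random(seed)
    clouds = {"low_rise": [], "high_rise": []}
    for row, line in enumerate(painted):
        for col, zone in enumerate(line):
            if zone == 0:                      # roads stay empty
                continue
            key = "low_rise" if zone == 1 else "high_rise"
            for _ in range(DENSITY[zone]):
                x = (col + rng.random()) * cell
                z = (row + rng.random()) * cell
                clouds[key].append((x, 0.0, z))
    return clouds

if __name__ == "__main__":
    for zone, pts in cook_point_clouds(PAINTED_MAP).items():
        print(zone, len(pts), "points")
```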
  • BEFORESANDAFTERS.COM
    In season 2 of ‘Light & Magic’, Joe Johnston has an incredible way of bringing out some of the geekiest ILM details
Including a new revelation about the Jar Jar practical suit and CG head test.

Last week, befores & afters got access to Light & Magic season 2, with the three-part series on Industrial Light & Magic now streaming on Disney+. We also took part in a virtual press conference with season 2 director Joe Johnston and a host of interviewees featured this season: Janet Lewin, John Knoll, Doug Chiang, Rob Coleman and Ahmed Best.

Season 2 is a wonderfully self-contained follow-up to Lawrence Kasdan’s season 1 of Light & Magic, and this time it focuses very strongly on the rise of digital in ILM’s history. That includes George Lucas’ impressive push to capture the prequels with high resolution digital cameras, and project them with digital projectors in cinemas (a task he successfully managed by the time of Attack of the Clones (2002), which required extensive consultation with camera and lens companies, projector manufacturers, and cinemas). It truly changed the way movies were made, and seen.

(L-R): George Lucas, Doug Chiang, and John Knoll in a scene from Lucasfilm’s LIGHT & MAGIC, Season 2, exclusively on Disney+. © 2025 Lucasfilm Ltd. All Rights Reserved.

What this new season also highlights is the extent of ILM’s breakthroughs in digital visual effects that took place in the 90s and 2000s, as the industry transitioned from analog and optical to a new digital world. ILM was at the forefront of this transition, as is highlighted in Light & Magic season 2. What works so well in the series is how Johnston highlights a few key events in that period of ILM’s history. These include effects simulations in Twister (1996) and The Perfect Storm (2000), which are expertly broken down by Stefen Fangmeier and Habib Zargarpour on camera this season.

And then, there’s also Jar Jar Binks. You can’t go past how groundbreaking having a fully digital main character on screen in 1999 in The Phantom Menace was. It simply had not been done at this level previously. Jar Jar would be portrayed by Ahmed Best, who performed the role on set in a partial costume with a special Jar Jar ‘hat’ head-piece, and then also later on ILM’s motion capture volume in an optical motion capture suit. The final character was fully CG.

Jar Jar Binks.

Previously, in official Star Wars documentaries and in pieces on befores & afters, there have always been some fun stories told about a test orchestrated by John Knoll (visual effects supervisor on The Phantom Menace) and Rob Coleman (animation supervisor) for Jar Jar. An initial idea was that the character could be performed on set in a suit, and the head done digitally, in the hope that this might save some money from doing it fully digitally. The test was aimed at crafting a side-by-side of Jar Jar—one where the head only would be digital, and one where the character was fully CG. In stories told so far, the conclusion was that matchmoving the CG head onto a live-action body was a complicated and time-consuming task, whereas a fully digital character was somewhat easier—and less expensive.

Well, without too many spoilers, a fun moment in season 2 of Light & Magic sees Dennis Muren suggest that maybe the test was a little…rigged (as in, weighted towards the fully CG character being the best outcome). John Knoll is presented with that piece of information on camera, and, well, the series is worth watching just for this little VFX-nerd moment alone. It’s just so fun to have these kinds of tidbits about groundbreaking visual effects history in the documentary.
There are other fantastic pieces of information about the Star Wars prequels–new revelations about the podrace in The Phantom Menace and the development of digital Yoda for Attack of the Clones come to mind–and other films that I had never heard before in Season 2. It’s an absolute gem of a series.

From the press conference for ‘Light & Magic’ season 2: Joe Johnston, Ahmed Best, Janet Lewin, Doug Chiang, John Knoll, Rob Coleman and host Brandon Davis.

Watch the trailer for Light & Magic season 2 below:

The post In season 2 of ‘Light & Magic’, Joe Johnston has an incredible way of bringing out some of the geekiest ILM details appeared first on befores & afters.
  • BEFORESANDAFTERS.COM
    Getting your VFX head around ACES 2.0
ILM’s Alex Fry is here to help, and explain how it was used on ‘Transformers One’.

At the recent SIGGRAPH Asia 2024 conference in Tokyo, Industrial Light & Magic senior color and imaging engineer Alex Fry gave a fascinating talk about his role in the development of ACES 2.0. ACES—the Academy Color Encoding System, from the Academy of Motion Picture Arts and Sciences—is an industry standard for managing color throughout the life cycle of theatrical motion picture, television, video game, and immersive storytelling projects.

The most substantial change in ACES 2.0 is related to a complete redesign of the rendering transform. Here, as the technical documentation on ACES notes, “Different deliverable outputs [now] ‘match’ better and making outputs to display setups other than the provided presets is intended to be user-driven. The rendering transforms are less likely to produce undesirable artifacts ‘out of the box’, which means less time can be spent fixing problematic images and more time making pictures look the way you want.”

It’s perhaps also worth pointing out here what the key design goals of ACES 2.0 were (again, from the technical documentation):

• Improve consistency of tone scale and provide an easy to use parameter to allow for outputs between preset dynamic ranges
• Minimize hue skews across exposure range in a region of same hue
• Unify for structural consistency across transform type
• Easy to use parameters to create outputs other than the presets
• Robust gamut mapping to improve harsh clipping artifacts
• Fill extents of output code value cube (where appropriate and expected)
• Invertible – not necessarily reversible, but Output > ACES > Output round-trip should be possible
• Accomplish all of the above while maintaining an acceptable “out-of-the box” rendering

At SIGGRAPH Asia, Fry described what it took amongst the ACES leadership and technical advisory groups to get to these new developments with ACES 2.0, in particular, in reducing visual artifacts and ensuring consistency across SDR and HDR displays. Fry also discussed first hand how ACES 2.0 was implemented on ILM’s Transformers One (on which Fry was compositing supervisor). befores & afters got to sit down at the conference with Fry for a one-on-one discussion of ACES 2.0 and Transformers One.

Alex Fry at SIGGRAPH Asia 2024.

b&a: You’re a member of the Technical Advisory Council for ACES. How did you first become intertwined with ACES?

Alex Fry: I had been generally interested in color as a Comp’er back when I was at Rising Sun Pictures. [Visual effects supervisor] Tim Crosbie taught me the basics and really helped me with a few leaps of understanding that I probably wouldn’t have arrived at by myself. I stayed interested in it, playing around with film LUTs and grading. DI and comp are very closely related, they’re jobs that are more similar than they are different.

Then at Animal Logic, I became one of the few people there who was consistently interested in that sort of thing, and had opinions about it. When ACES popped up as something you’d hear about—during The Great Gatsby era—I started to learn a bit more about it, the concepts just resonated with me. I thought, yes, this is obviously where we should be going. ACES standardizes a bunch of things, both technically, conceptually, and the language around those things. At every different company I’ve been at, we’re all kind of doing the same thing, but we would call it slightly different things, the terminology would all be 10% different.
So, if you’re having the same conversation with someone from another company, you might be mostly talking about the same thing, but you wouldn’t quite know 100%, unless you were incredibly explicit about it.  I was the first Comper, well, the only Comper for a long time, on The LEGO Movie, as it was in early development, and it seemed like ACES had some things that would help us out. Before then I’d been on The Legend of the Guardians and there were certain things to do with the way the display transforms worked that were a little limiting. ACES looked like it answered a few of those questions. I reached out to Alex Forsythe at The Academy and had a couple of chats about it, and we made one or two little additions to make it work with the pipeline that we already had, and got those pieces prototyped so I could pitch it to the production, who bought in completely. The production was a great success. Then, off the back of that, I did a run of talks at the Academy, NAB and SIGGRAPH. The Lego Movie was one of the first mainstream studio productions that used ACES, so there was a lot of interest in exactly how we used it, and what it gave us.  In the years after that, I was fairly active in a few of the ACES forums, making the case for people using it, and also just building little tools that made it a little easier to live with in reality of production rather than the idealized version of production. Eventually, when the ACES 2.0 project came around, I was asked to co-chair the Output Transforms working group with Kevin Wheatley from Framestore.  Transformers One. b&a: As an overall thing, what does ACES 2.0 do better than ACES 1.0? Alex Fry: The two main things that are better are, better visual and perceptual matches between the SDR and HDR renderings of the transforms, and better behavior for extreme colors at the edge of gamut, or extreme colors that are heavily overexposed. Both of those areas are much improved. That need has really come out of people’s actual experiences working with the system and how it behaves with certain difficult images.  In terms of the original SDR/HDR renderings, HDR displays just weren’t functionally a thing in the real world when ACES was first developed. HDR displays were a thing that were coming, and it was an area that the system attempted to address, but for the original developers, you didn’t have an actual HDR display on your desk 15 years ago. They were only at the Emerging Technologies booth at SIGGRAPH and places like that. They were pretty rare. They weren’t something that people were actively trying to grade movies through regularly. The Dolby PRM monitor came out around 2010, but unless you were actively grading and finishing DI content on one, you just didn’t have daily exposure.  It’s a different world now. Everyone’s phone is HDR. Most people’s TVs have HDR, even if they didn’t actively choose it. You don’t have to seek it out. So it was time to pursue better consistency of SDR and HDR in ACES 2.0.  b&a: How did the working group try and reach these new standards? Alex Fry: A lot of tinkering. The thing is, a lot of the design requirements are somewhat contradictory and somewhat in conflict with, or at least in tension with, each other. Certain things in the requirements list we had heavily conflicted with the other ones, and you’ve got to kind of work out the right level of compromise. Some of them are just very nebulous, like it being ‘attractive’ and it being ‘neutral’. Those are very vague things. 
‘Dealing well with out-of-gamut colors’ is very much in conflict with the ‘invertibility’ question.  Once you get into the nitty-gritty of it, there are parts of it that are science, but they’re a little bit vague as far as science goes. We’re trying to use mathematical models to approximate certain properties of the human visual system. Those things are a little nebulous at best. They’re variable between people, and not everyone agrees on certain things happening in reality. It’s very hard to even kind of settle those arguments because people’s own perceptions of what they see or how they describe what they see are tricky.  Also, some of these things are vague cultural conventions, like the issue we constantly have with hue skews. Fire, for example, comes up a lot as a tricky one to do. It’s a visual artifact that we have in the ACES 1.0 rendering, but a lot of people think it looks good for fire. You can point back to historical mediums like painting and say, ‘Yes, prior to cinema, prior to anyone taking a photograph, people would still often paint in the middle of a fire as being more yellow.’ It becomes very murky when you’ve got issues like that mixed in with color appearance models, which work under some circumstances but not these other circumstances. Transformers One. b&a: How did ACES 2.0 impact development on Transformers One? Alex Fry: The direct benefit was a better match between the HDR and SDR versions, and having the final film match everyone’s creative intent. Despite the proliferation of HDR screens in the consumer space, we can’t always work in HDR in a professional context. Linux is our primary desktop platform that we do all of our compositing, lighting and all texture painting on, and the OS has no meaningful support for HDR. There are various attempts to build HDR support into the window managers that are available on Linux, but it perpetually feels like it’s five years in the future, and it’s felt like that for quite a while.  So, for now, with the platform that we’re working under, it’s just not a reality on the desktop. You add to that the fact that post-COVID, a large percentage of the industry is using some flavor of PCoIP, and those almost universally don’t support HDR. That’s not to say we never look at the work in HDR. We do regular HDR reviews internally, but most of the people who are sitting down making creative decisions are doing it in SDR. Most of the reviews happen in SDR, whether it’s via something like SyncSketch or QuickTime review movies. If you’re in a theater, it’s SDR. Most of the creative decisions get made in SDR, right up until the point of the DI session.  Now, I can work directly in HDR but that’s because I’m more involved in the color side of it, and can make it work that way (Local MacOS). But, at scale, it’s all happening in SDR. The upside of a more coherent match between SDR and HDR is that creative calls that you make in SDR carry through to the HDR version. And, remember, the HDR version is the definitive version of the film. The SDR version—even the theatrical SDR version—is a derivative of the HDR version. It’s not like the early days of HDR where, really, the definitive version of film was SDR and maybe you got a HDR version. These days, the HDR version is the version and then everything else shakes out of that one.  What this all meant on Transformers One was that we’d have production designer Jason Scheier doing concept art and paintings that are in SDR. They drive the look of the film. 
So the way those look has to translate well into our SDR comps, and then those creative decisions need to transition and hold true into the HDR domain.  A simple example of this is eyes and dynamic range dependent hue skews. D-16, who later becomes Megatron, his eyes tell the story of his change over time. At the start of the film, his eyes are yellow, which is both a reference to the Marvel Comics version of Megatron back in the day, and  a visual separator between his more innocent stage, when he’s D-16, to when he transitions and turns into Megatron, and his eyes become red. With the original ACES 1.0 Display Transforms, reds skew towards yellow as you increase exposure, and do so at different rates between the SDR and HDR renderings. Not only do we want Megatron’s eyes to actually be red, regardless of exposure level, we want that red to be the same red between HDR and SDR. This would have been very difficult to manage in the original ACES transforms. Obviously, Megatron’s eyes shifting from red to yellow as they get brighter isn’t just a visual artifact, it’s a story problem. One thing to note is that Transformers One didn’t use the final version of ACES 2.0, as it was well into production long before we finished the algorithm. It used an earlier development version. At a certain point, we just had to go, ‘We’re locking on this one’, specifically version 28, if anyone’s interested. Transformers One. b&a: Where can people see the full HDR release of Transformers One? Alex Fry: Streaming is by far the easiest way, the HDR version is available on all the streaming platforms where the film is available. If you have a modern TV (Ideally an OLED) it should be good to go, or any iPhone post iPhone X. The film did get a HDR release in Dolby Vision in the US. It also got mastered for the Barco HDR system, which uses light steering tech. I think there were other places around the world that have emissive LED screens that showed the HDR master. None of those are going to have quite the same dynamic range as the home HDR version, though.  Internally, we were viewing it on a Sony A95L, which uses a Samsung QD OLED panel. It’s super-bright and super-punchy, with a massive color gamut. That kind of screen would be the best way to see it in HDR, basically a really good home OLED, as big as you can get. That said, nothing replaces the full theatrical experience. When push comes to shove, I’d still go for big and loud in a theatre, over small and bright at home.  b&a: Finally, for someone reading this who might be a compositor, how important do you think is understanding ACES in their role today? Alex Fry: I’d say it’s pretty important to understand the abstraction between the pixels leaving the display, and the pixels you’re actually manipulating, whether it’s a full ACES pipeline that uses the ACES display transforms, or something else, that abstraction is key.  At big companies, we try and automate things to the point where you might not really know what’s going on, but if you get a proper grip on what’s happening under the hood, life is just easier. It’s kind of the same way that understanding how pixels get from many different cameras into your comp can help. That can all be automated away, but it’s better if you actually understand what’s happening there. A Compositor’s job is to understand the whole stack and make it work, that’s certainly what I’ve always tried to bring to the job.  You can find out more about ACES at https://acescentral.com/. 
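For compositors who want to experiment with that abstraction between working pixels and display pixels, ACES output transforms are typically applied through OpenColorIO. Below is a minimal Python sketch; the config path and the colorspace/display/view names are placeholders that depend on which ACES OCIO config you have installed, and this is not ILM's pipeline code.

```python
# Minimal sketch of applying an ACES output (display/view) transform via OpenColorIO.
# NOT ILM's setup; config path and colorspace/display/view names are placeholders.
import PyOpenColorIO as OCIO

config = OCIO.Config.CreateFromFile("studio-config-aces.ocio")  # hypothetical path

# Working-space pixels (scene-linear ACEScg) through an SDR display rendering.
dvt = OCIO.DisplayViewTransform()
dvt.setSrc("ACEScg")                   # assumed source colorspace name
dvt.setDisplay("sRGB - Display")       # assumed display name
dvt.setView("ACES 2.0 - SDR Video")    # assumed view / output transform name

cpu = config.getProcessor(dvt).getDefaultCPUProcessor()

# A saturated, over-exposed red: the kind of value whose hue skewed under ACES 1.0.
print(cpu.applyRGB([4.0, 0.05, 0.05]))
```

The same processor can be built for an HDR view instead, which is exactly the SDR/HDR comparison at the heart of the matching work Fry describes.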
The post Getting your VFX head around ACES 2.0 appeared first on befores & afters.
  • BEFORESANDAFTERS.COM
    Effects sims by Wētā FX in ‘Godzilla X Kong: The New Empire’
    A new VFX breakdown is out. The post Effects sims by Wētā FX in ‘Godzilla X Kong: The New Empire’ appeared first on befores & afters.
  • BEFORESANDAFTERS.COM
    The dream of AI roto with Electric Sheep
Today on the befores & afters podcast, as part of our AI, ML and VFX season of episodes, we’re chatting to the team from Electric Sheep. I’m joined on the podcast by two of the co-founders, Gary Palmer and Richie Murray. Electric Sheep has released an AI rotoscoping product. What I talk to Gary and Richie about is how they got there – the origins of their tool, what things have been particularly challenging to solve with AI roto, what things still need solving, and how they’ve made a business out of it. They go into how the Electric Sheep process works, and talk a little about what they’re trying to do with in-painting and matchmove with machine learning, as well. Listen in above, and, below, check out some animated GIFs showcasing their process. The post The dream of AI roto with Electric Sheep appeared first on befores & afters.
  • BEFORESANDAFTERS.COM
    Watch Storm Studios’ VFX breakdown for ‘The Electric State’
    Includes how the studio created an authentic VHS look for some shots by recording comps onto old VHS tapes. The post Watch Storm Studios’ VFX breakdown for ‘The Electric State’ appeared first on befores & afters.
  • BEFORESANDAFTERS.COM
    Framestore runs through the visualization of ‘Wicked’ in this hour-long webinar
    Previsualisation supervisor Chris McDonald discusses all aspects of previs, techvis and postvis, including for Defying Gravity. The post Framestore runs through the visualization of ‘Wicked’ in this hour-long webinar appeared first on befores & afters.
  • BEFORESANDAFTERS.COM
    Issue #30 of befores & afters mag is a full deep-dive on ‘In the Lost Lands’
    Get the magazine in PRINT or DIGITAL. Issue #30 of befores & afters magazine is now out, and goes behind the scenes of Paul W. S. Anderson’s In the Lost Lands, starring Milla Jovovich and Dave Bautista. The issue breaks down the film’s visual effects by Herne Hill, which took plates mostly shot on bluescreen and crafted significant environments and creatures for the post-apocalyptic setting. There’s also coverage of the train crash moment in the film, with VFX by WeFX. You can grab the issue in PRINT from Amazon (that’s the US store, make sure you try your local Amazon store, too), or as a DIGITAL EDITION right here on Patreon. Remember, you can also subscribe to the DIGITAL EDITION as a tier on the Patreon and get a new issue every time one is released. Hope you enjoy the latest issue! The post Issue #30 of befores & afters mag is a full deep-dive on ‘In the Lost Lands’ appeared first on befores & afters.
  • BEFORESANDAFTERS.COM
    Digital Domain’s VFX breakdown for ‘The Electric State’ is here
    See how DD brought so many robots to life. The post Digital Domain’s VFX breakdown for ‘The Electric State’ is here appeared first on befores & afters.
  • BEFORESANDAFTERS.COM
    I think this is the first VFX breakdown of Framestore’s work I’ve seen from ‘Edge of Tomorrow’
    It also shows a digi-double of Tom Cruise for a hand-off shot to a practical stunt. The video is an Academy original, featuring VFX supe Nick Davis. The post I think this is the first VFX breakdown of Framestore’s work I’ve seen from ‘Edge of Tomorrow’ appeared first on befores & afters.
  • BEFORESANDAFTERS.COM
    Some fun bts in this ‘Captain America: Brave New World’ blooper reel
    Includes the bluescreen shoot, stunts, mocap and Red Hulk reference. The post Some fun bts in this ‘Captain America: Brave New World’ blooper reel appeared first on befores & afters.
  • BEFORESANDAFTERS.COM
    See this ‘Gangs Of London’ s3 VFX breakdown from Flaming Frames
Watch the frenetic reel showcasing bullet hits. Warning: contains blood and gore. The post See this ‘Gangs Of London’ s3 VFX breakdown from Flaming Frames appeared first on befores & afters.
  • BEFORESANDAFTERS.COM
    Jack Black (and Jason Momoa) on greenscreen for ‘A Minecraft Movie’
From Jack Black’s Instagram feed. The post Jack Black (and Jason Momoa) on greenscreen for ‘A Minecraft Movie’ appeared first on befores & afters.
  • BEFORESANDAFTERS.COM
    NFTS students built a miniature replica of the Tantive IV corridor
A miniature ship, a detailed droid maquette, digital set extensions and a fully-CG droid all inspired by ‘Rogue One’ were also crafted. See it all in this visual breakdown.

Here at befores & afters, we’ve regularly taken exclusive looks at the UK’s National Film and Television School (NFTS) Model Making and Visual Effects projects involving miniatures, motion control and digital visual effects. Most recently, NFTS embarked on a project inspired by Rogue One: A Star Wars Story, and befores & afters has the exclusive breakdown of the build process. Students constructed a miniature replica of the Tantive IV spaceship, a section of its corridor interior and a detailed droid maquette.

“Every year our students raise the bar on what is possible within this module,” says John Lee, NFTS Head of Model Making. “It’s a joy to create works inspired by such iconic films. It really engages the students and allows me to develop the course with some real-world experience, connecting students and education with industry. I have to say, this year, we’ve really smashed it!”

Lee himself worked in the prop-making and set decoration departments for Rogue One, as well as other Star Wars films. This latest miniature build was inspired by Lee’s time working alongside Head of Prop Making Mark Rocca and overall Prop Master Jamie Wilkinson on Rogue One. Meanwhile, Head of CG at NFTS Jonathan McFall led the VFX students on their task of building digital set extensions and a fully animated CG character that was composited into the miniature plates. The process also included a motion control shoot orchestrated by Ember Films. Here’s the final film, and a visual breakdown of the process follows.

Behind the build

The model set and miniature spaceship took around six weeks to build with a team of 12 model makers, starting with a simple hand-drawn storyboard of the sequence they intended to film. Model makers made a 1:5 scale white card maquette to plan proportions. The corridor set is full of small details and lots of repeat shapes, which introduced more digital content into what the students are learning, such as Rhino and 3D printing, feeding into current industry requirements.

Students from Model Making and VFX examine the 1:5 scale white card maquette with John Lee and VFX tutor Jonathan McFall. The 2024/24 model making cohort on day one of the NFTS Model Making course at the National Film and Television School. Model making student operating the bandsaw to cut the oversize bulkhead shapes before routing.

Work continues

The model was built at 1:5 scale and measured approx. 2.5m x 1.5m x 600mm. The set comprises matching bulkheads, doors and wall panels. The students also created a new section of corridor in the form of an escape hatch to offer them a chance to develop their creative design skills. The set has a pristine white gloss finish, which is very hard to pull off in miniature.

Model making student operating the router to make repeatable bulkhead shapes for the Tantive IV corridor set. Model making student on the router, using a profile template. 3D printed test for a vac form pattern, useful to establish an accurate shape whose dimensions can easily be amended based on the preferred vac form material thickness. Vac form shapes coming together for the corridor wall details; note, some are positive shapes, some negative, but all have to fit the same template. Success, thumbs up from the model makers during vac forming.
Combining practical and digital, VFX students recreated the corridor in Maya and made additional digital assets including ceiling replacements, interactive lighting, moving hatch doors and a fully animated CGI droid – named N4T5 (NFTS)!

“Both departments are working with the same space, one practical, and the other digital. As I teach this, it’s incredible to witness how both departments are problem solving the same issues.” – John Lee

Model makers worked in teams to break the build down into key parts, each learning how to use some of the larger tools and machines in the workshop to a high standard but, more importantly, learning how to effectively communicate and collaborate.

Bulkhead shapes coming together on the full size 1:5 scale drawing. Workshop assembly line of model makers fabricating MDF bulkheads, including spares. Both model makers and VFX students in the VFX base looking at the pre-visualisation. Model making student working on a digital concept design for the escape hatch, not seen in a Star Wars film. Model maker working on accurate patterns for the doors, aligning with the drawing at all times; made from a Rhino drawing, 3D print, clean-up, 2K primer and fine sanding before making a silicone mould. Laser cutting the fine details on wall mounted boxes using a jig for the laser to ensure accurate positioning of all vac forms, of which there were around 50. Model makers align walls and bulkheads on the accurately marked out floor, just like how the full size set on ‘Rogue One’ was made back in 2015. Testing practical LED lighting in the workshop. Model makers check that everything is level and square; there is nowhere to hide on a set build like this. Starting to look a lot like the Tantive IV set now: model maker working on electrics as the wall curves start to be dry fitted. These were made as 3D prints, moulded and cast in resin and fillite (bulking compound). Fitting of laser cut wall frames. Model maker testing practical LED lights in the escape hatch. Other elements in grey primer ready for paint. Painting of the escape hatch underway, using references created by the NFTS model makers in the form of digital concept art. White box fine details coming together; every panel in exactly the right place required lots of research and development during the week-long white card build. Model maker putting final touches to the escape hatch on set. Final set-up of the 1:5 scale corridor set on Stage 4 at the NFTS. Grey maquette of the N4T5 droid, used for lighting references in post-production; this began as a sketch on paper, then went digital in order to create the animation, then back to a maquette for the on-set lighting references. Model maker setting up a second droid for portfolio shots; note the green screen backdrop so that the VFX students can drop in a digital set extension once the door opens, revealing a much larger space.

The filming stage

After the build, the model was carefully moved onto one of the NFTS filming stages. This year the school was reunited with UK production company and moco specialists Ember Films, who supplied the on-set motion control camera team and equipment. NFTS VFX students took care of all the technical requirements and made sure they captured all the information needed to complete work on the film, and worked with an NFTS Cinematography student to take care of on-set lighting and learn the skills involved in shooting with a motion control rig. Ember Films rig with NFTS’ own Alexa Mini enters the escape hatch.
Careful advance planning ensured the spaces were big enough for the camera to really integrate with the set. John Lee notes that it is “nice to be able to build large at 1:5 scale.”

Motion control camera capturing data of the stand-in droid. Note: the ceiling is missing for the on-set shooting to accommodate the over-the-set moco arm. Scene completed with a digital ceiling, as well as digital doors opening. Tracker markers on set for one of the moco passes; note the wall panel detail on show. Model maker about to remove the practical door ready for the camera to advance into the escape hatch. As it looked on the monitor as the sequence was shot. Lee notes: “Careful alignment with the previs made this sequence straightforward to direct, thanks to great prep by the NFTS VFX department and Ember Films’ very fluid moco system.” VFX student with clapperboard to mark each pass. Model maker and VFX student working together setting up the droid mock-up for a lighting pass. Model makers setting up the Tantive IV spaceship miniature in front of green screen. Close-up of the Tantive IV miniature on set.

“Due to time restraints,” says Lee, “we were unable to make a 6’ long version, so I had to be clever in the framing to make sure we saw just enough detail in the shot to feel believable and also capture the grandeur of a Star Wars miniature shot. I think we pulled it off!”

The whole team: Model Making and VFX students, John Lee, Jonathan McFall, the Ember team, and on-set AD and PM students. The idea of specifying what and how the miniature set is shot, then prototyping dramatic intent, is all achieved through previsualisation. Texture and look dev specialist Mikey builds up the surfacing work for the N4T5 droid using Substance Painter. Lighting specialist Junze takes a very detail-oriented approach to the CG lighting of the CG set extension of the miniature set. Lee: “Tweaking and re-optimising each render as he goes – he celebrates shaving his render time down with a ring of our ‘render bell’.” Nuke compositing specialist John adds reflections to the screen of the N4T5 droid using an image-based lighting lat/long image, which has come out of the HDRI stitching process from PTGui software.

VFX breakdown

How to apply

Applications are now open for Model Makers to join the NFTS course. You can apply before 24 April to start in September 2025. nfts.co.uk/modelmaking

The post NFTS students built a miniature replica of the Tantive IV corridor appeared first on befores & afters.
  • BEFORESANDAFTERS.COM
    Here’s Noid’s VFX breakdown for ‘The Substance’
    Including how the Blob and Monstro digital visual effects were achieved. The post Here’s Noid’s VFX breakdown for ‘The Substance’ appeared first on befores & afters.
  • BEFORESANDAFTERS.COM
    How Rising Sun Pictures REVIZE machine learning tech came to be
Rising Sun Pictures President Jennie Zeiher on her studio’s machine learning toolset called REVIZE, used on many projects for face and body replacement.

Here at befores & afters we’re continuing our season of episodes on AI and ML in VFX. Today we’re chatting with Jennie Zeiher, who is the President of Rising Sun Pictures. We’re going to take a look at what is more of a business and marketing exploration of Rising Sun Pictures’ REVIZE toolset. This is a machine learning toolset the VFX studio has now used on many projects for face replacement and now even body replacement.

Ian Failes and RSP President Jennie Zeiher.

Some of the projects it has been used on include Furiosa, for young Furiosa, which combined the features of the younger actor Alyla Browne with those of the adult Furiosa, portrayed by Anya Taylor-Joy. The VFX studio also utilized REVIZE on Sonic the Hedgehog 3 for the laser dance scene featuring Jim Carrey, who played two characters in that scene.

Listen in above.

The post How Rising Sun Pictures REVIZE machine learning tech came to be appeared first on befores & afters.
  • BEFORESANDAFTERS.COM
    On The Set Pic: Deadpool & Wolverine
This new pic comes from an upcoming book from Marvel Studios showcasing images from the set. The post On The Set Pic: Deadpool & Wolverine appeared first on befores & afters.
  • BEFORESANDAFTERS.COM
    Behind the titles for Severance s2
Oliver Latta from extraweg.studio has posted this breakdown. You can also see lots of behind the scenes at Behance. The post Behind the titles for Severance s2 appeared first on befores & afters.
  • BEFORESANDAFTERS.COM
Watch CGEV’s VFX breakdown for ‘The Substance’
A whole range of invisible effects work, make-up effects enhancements, and more. Note: this breakdown contains nudity. The post Watch CGEV’s VFX breakdown for ‘The Substance’ appeared first on befores & afters.
  • BEFORESANDAFTERS.COM
    On The Set Pic: Avatar: Fire and Ash
Go behind the scenes. (L-R) Stephen Lang and Director James Cameron on the set of 20th Century Studios’ AVATAR: FIRE AND ASH. Photo by Mark Fellman. © 2024 20th Century Studios. All Rights Reserved. The post On The Set Pic: Avatar: Fire and Ash appeared first on befores & afters.
  • BEFORESANDAFTERS.COM
    Behind the scenes of that crazy plane fight, plane crash and parachute jump in Back in Action
‘I’m going to be merciless when it comes to the CG parachute.’

Seth Gordon’s Back in Action contains a dramatic aerial sequence featuring a plane crash in the mountains, an avalanche and a thrilling parachute jump, all within a matter of minutes. It occurs as CIA agents Matt (Jamie Foxx) and Emily (Cameron Diaz) are ambushed on the plane for a key device they hold, and must fight off their attackers and then escape the crashing aircraft.

Interestingly, the initial version of the sequence was to take place on a train. Locations were scouted in London and previs was produced for that version of the scene. However, when another Netflix film also featured a train action moment, it was decided to switch to the plane approach.

“We previs’d the plane shots with MPC’s visualization team,” outlines production visual effects supervisor Erik Nash. “We didn’t previs all the interior action because that was done as stuntvis by the stunt team. The previs and stuntvis then helped us work out how to shoot everything.”

Back In Action. Behind the scenes on the set of Back In Action. Cr. John Wilson/Netflix © 2024.

Nash narrowed in, at first, on the parachute jump side of the sequence, since the visual effects supervisor is himself a trained parachutist (he had previously lent his expertise in that area at Digital Domain on the skydiving scene in Iron Man 3). “I got to talking to second unit director J.J. Perry, who is former airborne military, and he said, ‘You know what? You do this. This is right up your alley. You worry about the parachute part of it.’”

Looking to film as much of that parachute section practically, Nash worked out that the unusual orientation of the jump could be done. In the film, it is effectively a tandem jump where Matt and Emily are facing each other holding on, rather than one behind the other. “Having seen it done a few times and knowing you can do it face-to-face, I know we could stage it so they would survive,” relates Nash.

The next step was to establish where to film a jump. It needed to be a snowy Alps-type environment. Production secured permissions to film in Slovenia, which featured the desired mountainous location. “We found a small town called Bovec that is a ski resort but also has a grass airstrip where they do skydiving from in summer,” says Nash. “There was also this grass landing area which was a meadow with this very steep granite mountain face that had snow all over it. It was perfect.”

Back In Action. BTS (L to R) Jamie Foxx as Matt and Cameron Diaz as Emily on the set of Back In Action. Cr. John Wilson/Netflix © 2024.

Parachutist Dave Emerson cast two stunt performers for the jump, Yolanda Lee and Christian Botakwame. Nash was then part of the helicopter shoot over Bovec to film the jump. “We had a Shotover helicopter camera rig with a long lens on it. We had an even longer lens on a ground-based camera, and we had a second parachutist with a helmet-mounted camera who jumped with our stunt players. And then we had a drone camera, too. We did two jumps in one afternoon and got tons of amazing footage, and it worked better than I ever could have dreamed. No visual effects were applied to any of that footage other than the first shot where we tie in the avalanche and the explosion from where the jet cratered in. Ironically, I get a kick out of doing stuff that doesn’t involve visual effects every now and then.”

Close-ups of Foxx and Diaz’s characters did involve two bluescreen inserts. “I thank Seth in my head for not playing a bunch of close-ups, because they’re often really hard to do,” shares Nash.
“I made sure that we could shoot these outside. I did not want to fake daylight on a sound stage.”

Similarly, Nash was adamant that the practical parachute seen in these close-ups also not appear fake. “I’ve seen a lot of these types of bluescreen parachuting shots just not work out. So, I had pitched an idea of doing it rigged off a flatbed truck so that they’re actually moving through space, not hanging static in place. However, it was logistically and budgetarily prohibitive. We ended up doing it the old-fashioned way by hitting them with a big rush of air. There were only two of those shots, and I thought they turned out pretty well.”

Part of the crash, from the film’s trailer.

“Thankfully they’re short shots,” continues Nash. “It’s really tough to fake all of the complex dynamics of that kind of shot in terms of, okay, what is the camera platform that’s photographing these? So, we have to imagine that they’re traveling through space at 30 miles an hour. If the camera’s in any proximity, then really you’re implying that it’s another parachutist that shot it. I think sometimes when these shots fail, it’s because the camera is often doing something it can’t do. But to really shoot these close-ups there’s not a lot you can do if you really had to shoot them for real.”

The actual deployment of the parachute was achieved digitally. “I had a heart-to-heart with Malte Sarnes, MPC’s visual effects supervisor for this sequence, and I told him upfront, ‘We have to do a CG parachute for the deployment and for the landing, and it has to be great.’ This was because we couldn’t shoot these parts with a real parachute. I said, ‘I’m going to be merciless when it comes to the CG parachute. I’m going to be a hard ass.’ I’ve seen so many unconvincing to downright bad CG parachutes over the years, and it just bugs me to no end.”

“I think that’s one of the places where CG parachutes often break down: there’s not enough transmitted light,” Nash adds. “It’s all reflected. If you hold the fabric up to a bright sky, you can almost see through it, and there’s multiple layers. So you’ve got the upper layer affecting the light, hitting the lower layer, but they’re both semi-translucent.”

Back In Action. (L to R) Cameron Diaz as Emily and Jamie Foxx as Matt in Back In Action. Cr. Courtesy of Netflix © 2024.

Nash arranged for the props department to send the practical parachute to the assets team at MPC in London for them to examine. “I said, ‘Take it outside, have all your asset guys feel the fabric, hold the fabric up to the light, see what it’s like.’ We also had about 20 minutes of actual parachute jump footage from flying around in that valley. I said, ‘Here, this is all the reference you could ever ask for.’”

What’s more, Nash arranged for free-fall camera operator Andy Ford to deliver some parachute deployment footage of unfolding and inflating for further reference. “What he did was take his helmet-mounted camera, which normally faces forward, and put it on backwards, so that when he jumped out of the plane and opened the chute, the camera was pointed in the direction of the parachute unfurling. He did several of those and then gave all of that to MPC. It let the team see all the cloth dynamics, which are incredibly noisy and erratic and random.”

“One thing to note,” says Nash, “is that free-fall cameramen typically pack their parachute to open slowly because it’s easier on their neck with all that weight on their head from the camera.
The issue was, while we got great reference, it took way too long for the parachute to inflate, in terms of what we needed to see in our scene. They pull the chute and get yanked out of the plane, so it had to happen really quickly. So, I did a re-timed, cut down version of the deployment, and it all paid off because I think MPC nailed it.”

Fight on a plane

Prior to the parachute jump, and the actual crash, a fight ensues on the plane between Matt and Emily and a rogue crew. The fight even continues once the plane hits the side of the mountain and ends up sliding down the snowy slope. Interior scenes were filmed on a large gimbal that held a plane mock-up. “It could roll 360 degrees,” advises Nash. “The original sequence as shot was actually quite a bit longer. There was a whole stretch in the middle where the plane was rolling down the mountain like a pencil on a sloped table. In the finished sequence, the plane rolls up on its side and then rolls back.”

Back In Action. (L to R) Jamie Foxx as Matt and Cameron Diaz as Emily in Back In Action. Cr. Courtesy of Netflix © 2024.

For background environments, production was able to film mountains, skies and clouds from a helicopter, as well as with a drone. Says Nash: “We went up the ski lift to the top of the mountain, had one of those snow cats that could get us a certain distance away from all of the ski gear, and then we had a drone up there that we shot plates with. We weren’t able to utilize as much of that plate photography as I had hoped, partly because the lighting conditions were constantly changing on the top of this mountain.”

Ultimately, the bulk of the exterior shots, including the avalanche, were realized as fully CG environments by MPC. “We used some of the plate photography to build some of these environments,” describes Nash. “We’d pick some of the plate photography, which maybe didn’t do what it needed to do in terms of camera moves, and then we said, ‘This is the lighting condition that we want to build into all our CG environments.’ We did use plate photography for all the air-to-air shots preceding impact with the mountain. I think there were four or five of those, including a sunset shot that Seth absolutely loved. And I’m like, ‘Oh, but there’s no sunset in any of the other shots?’ I think you get away with it and it’s a gorgeous shot.”

Back In Action. Behind the scenes on the set of Back In Action. Cr. John Wilson/Netflix © 2024.

The jet itself was CG and had to match a practical jet that was filmed for a hangar scene of the characters boarding the aircraft. In that hangar, however, the jet was plain white. When it was modelled and textured and placed into the white-ish mountainous environments, the plane was deemed too nondescript. “So,” recounts Nash, “I did a quick little Google search and looked for a simple two-color stripe scheme that we then put on our CG jet, which meant we then had to add it to the practical jet in the hangar in a handful of shots for continuity.”

As the plane slides down the mountain, a further consideration became the amount of damage to represent on the digital fuselage. “We wanted to imply that there was a lot of scraping and denting going on outside the plane while we’re inside covering the fight,” states Nash. “So, we went to 11 on the plane damage dial and had to track the progression of it all. What helped was that I had done a CG Air Force One for Iron Man 3, and in reality that is the cleanest plane ever! Which is hard to do convincingly in CG. It does not look real.
So even there we had to add dirt and oil streaks on the wings.”

Back In Action. BTS Jamie Foxx as Matt on the set of Back In Action. Cr. Parrish Lewis/Netflix © 2024.

The inside of the plane, as it crashes, becomes a messier and messier environment of ice, snow and debris. “All of it was simulated,” notes Nash. “We did some early tests with the special effects team but found it was really hard to control and get enough airflow through the airplane set that the stuff would enter the gash and travel all the way out the far end and not settle somewhere in the middle. So it became all-CG. That was another thing where we did a first pass and the director said, ‘More.’ Okay, second pass. ‘More.’ There were some things like napkins and paper that were practical. Ironically, the thing that we kept seeing over and over were these red napkins and we wound up painting most of them out because they were so distracting!”

The post Behind the scenes of that crazy plane fight, plane crash and parachute jump in Back in Action appeared first on befores & afters.
More stories