Befores & Afters
A brand new visual effects and animation publication from Ian Failes.
Recent Updates
  • BEFORESANDAFTERS.COM
    Watch Outpost's VFX breakdown for s2 of Pachinko
    Go behind the scenes.
  • BEFORESANDAFTERS.COM
    Watch Raynault VFX's breakdown for Deadpool & Wolverine
    Environments, bluescreen comps and more.
  • BEFORESANDAFTERS.COM
    The making of Gladiator: a look back with VFX supervisor John Nelson
    The Colosseum, tigers and more.

    Coming up this week is the release of Ridley Scott's Gladiator II. So, we thought we'd go back to the first Gladiator with the VFX supervisor of that film, John Nelson. John of course won the visual effects Oscar for Gladiator, alongside Tim Burke and Rob Harvey of Mill Film, and SFX supervisor Neil Corbould.

    In this chat we dive deep into a number of the big sequences, starting with that very famous Steadicam shot of the gladiators entering the Colosseum. We also talk about the Rome builds, the amazing tiger fight, and the forest battle in Germania. John shares a few fun memories from Oscar night as well. This is a really informative chat looking back at the VFX process from around the year 2000. I have to say also that Gladiator was one of those films that had an amazing DVD release with very, very thorough VFX featurettes looking over the shoulder of artists at The Mill working on SGI machines, and working with tools like Softimage and Flame, so try and find those featurettes if you can.

    This episode of the befores & afters podcast is sponsored by SideFX. Looking for great customer case studies, presentations and demos? Head to the SideFX YouTube channel. There you'll find tons of Houdini, Solaris and Karma content. This includes recordings of recent Houdini HIVE sessions from around the world.
  • BEFORESANDAFTERS.COM
    The Polar Express is 20. Here's a fantastic behind the scenes document anyone can access
    The mocap'd Robert Zemeckis film featured pioneering work by Sony Pictures Imageworks.

    Sure, a lot of people remember The Polar Express because of the Uncanny Valley. But the film (celebrating its 20th anniversary right now) was arguably one of the big game changers in the way it approached motion capture and virtual cinematography, thanks to the efforts of director Robert Zemeckis, visual effects supervisor Ken Ralston and the team at Sony Pictures Imageworks.

    The technical artistry of the film is packaged up in an extremely insightful behind the scenes document publicly available from Imageworks' website. It was presented as a course at SIGGRAPH 2005 and titled 'From Mocap to Movie: The Polar Express', presented by Rob Bredow, Albert Hastings, David Schaub, Daniel Kramer and Rob Engle.

    Inside you'll find a wealth of information about the motion capture process, animation, virtual cinematography, effects, lighting, stereo and more. Even the original optical flow test for the film is covered.

    I think it's a fascinating read, and an important one in the history of motion capture and virtual production. Remember, the film came out in 2004; Avatar (which took performance capture much further, of course) came out in 2009. Over the years, SIGGRAPH courses have been an invaluable resource for discovering the history of tools and techniques at visual effects studios. I love that this resource for The Polar Express exists.
  • BEFORESANDAFTERS.COM
    See how JAMM orchestrated Josh Brolin's interaction with an orangutan on Brothers
    Watch the JAMM VFX breakdown below.
  • BEFORESANDAFTERS.COM
    Watch Untold Studios' VFX breakdown for that walrus Virgin Media spot
    Digital walrus, digital boat, and digital water.
  • BEFORESANDAFTERS.COM
    Watch breakdowns from Vine FX for Paris Has Fallen
    Go behind the scenes.
  • BEFORESANDAFTERS.COM
    VIDEO: Introducing Reallusion AI Smart Search
    In this new video, discover Reallusion's AI Smart Search, now seamlessly integrated into iClone, Character Creator and Cartoon Animator. It provides instant access to countless models and animations from the Content Store, Marketplace and ActorCore. Choose between AI-powered Deep Search for precise results or traditional text-based search for simplicity, all within the application.

    Brought to you by Reallusion: This article is part of the befores & afters VFX Insight series. If you'd like to promote your VFX/animation/CG tech or service, you can find out more about the VFX Insight series here.
  • BEFORESANDAFTERS.COM
    On The Set Pic: Nautilus
    The series was filmed in Queensland, Australia.
  • BEFORESANDAFTERS.COM
    Watch Framestore's vis reel for Deadpool & Wolverine
    Framestore's Pre-Production Services (FPS) delivered over 900 previs, techvis and postvis shots.
  • BEFORESANDAFTERS.COM
    More invisible VFX from Ripley
    This time from ReDefine.
  • BEFORESANDAFTERS.COM
    The VFX and stop-motion animation of Beetlejuice Beetlejuice
    Including some very fun Easter eggs about the stop-motion scenes.

    Today on the befores & afters podcast, a look behind the scenes of Tim Burton's Beetlejuice Beetlejuice. We start with visual effects supervisor Angus Bickerton, who shares some of the overall VFX challenges, including the putting back together of Monica Bellucci's Delores character, and the puppet for the Beetlejuice baby. Work by Framestore and One of Us is discussed. Angus makes particular mention of creature effects creative supervisor Neal Scanlan, too.

    Then we dive into the stop-motion animation work by Mackinnon & Saunders, including with Ian Mackinnon, stop motion supervising producer, and Chris Tichborne, animation supervisor. There's a lot of fun detail here about the making of the sandworm and its animation, and the plane crash. I love Chris' mention of the live-action reference video he shot of himself in a swimming pool, where he couldn't actually tell anyone he was with what it was for. Also, there's a fun Easter egg moment featuring Tim Burton on the plane.

    This episode of the befores & afters podcast is sponsored by SideFX. Looking for great customer case studies, presentations and demos? Head to the SideFX YouTube channel. There you'll find tons of Houdini, Solaris and Karma content. This includes recordings of recent Houdini HIVE sessions from around the world.

    Listen to the podcast above. And, below, a video breakdown of the stop-motion scenes.
  • BEFORESANDAFTERS.COM
    The making of Slimer in Ghostbusters: Frozen Empire
    A new VFX breakdown from Imageworks is here.
  • BEFORESANDAFTERS.COM
    Behind the scenes of the Iacon 5000 scene in Transformers One
    Includes a few fun breakdowns and views of ILM Sydney.
  • BEFORESANDAFTERS.COM
    The making of Rook in Alien: Romulus
    Legacy Effects and Metaphysic combined to make the character. Excerpts from befores & afters magazine in print.

    At one point in Fede Álvarez's Alien: Romulus, the characters encounter a damaged android, Rook. Rook resembles the android Ash from Alien, played by Ian Holm, who passed away in 2020. With several scenes, and even dialogue, Rook would require a unique combination of a practical animatronic built and puppeteered by Legacy Effects, and visual effects augmentation by Metaphysic using machine learning techniques.

    For Legacy Effects, the animatronic Rook build needed to happen fast. The studio would normally look to have four to six months to make such a thing, but here they only had two. One challenge, plainly, was that they did not have the actual actor to do a live cast or 3D scan with. "There were no existing molds of Ian from Alien," reveals Mahan. "They certainly made one because Yaphet Kotto knocks Ash's head off with a fire extinguisher. They certainly made something, but it doesn't exist. And if it does, no one wants to admit that they have it, because we searched."

    Below, scroll through for behind the scenes of the Rook animatronic shoot (via Amy Byron on Instagram, @shmamy_b).

    Luckily, there was an existing cast of Holm from The Hobbit films, and certainly the original film from which to reference. "That cast of Ian was done many years after Alien, of course," notes MacGowan, "so all we could get from that really was the placement of his features. What we did do was make two clay portraits of his face. Andy Bergholtz and Jason Matthews did those, and then we scanned these sculptures. It was only a half face, so we scanned it and then Scott Patton digitally re-sculpted the whole thing."

    The Rook animatronic was then ultimately built as a creature effect that could be puppeteered. The sets had to be constructed so that the team could be hidden underneath or allow for the choreography via slots in a table when Rook is shown crawling. The animatronic also featured a less-damaged right arm that a puppeteer could perform, and then a left damaged arm that was an animatronic puppet arm. "The whole body was actually a life cast of my body," says MacGowan, "that was then re-sculpted with all the damage and it was all put together."

    Part of the performance is the delivery of lines, and for this an actor was cast and his voice recorded. Legacy Effects used the voice to program in the moves onto their Rook animatronic for playback on set. This became the basis of the character, with enhancements made by Metaphysic for eyes and mouth movement, resulting in a hybrid practical/digital approach.

    "It's pretty satisfying to bring back that character," reflects Mahan. "It wasn't easy. I think it's a very admirable attempt to resurrect somebody who's no longer with us to be in a movie again. I mean, if you would've told us when we were walking out of the theater having seen Ash in Alien that someday we were going to make a replica of him in a different movie, I wouldn't have believed it. It's very cool."

    The VFX side of Rook

    "Fede said to me, 'It needs to start as a puppet,'" shares VFX supervisor Eric Barba. "He said, 'It's a broken android, so it didn't have to be perfect. It had gone through some hell, half its body's missing, part of its face is going to be missing, but we're going to have to augment it probably if we don't get it right in camera.'"

    "I fell back on what I know of head replacement and recreating CG," continues Barba, who worked on groundbreaking digital human productions such as The Curious Case of Benjamin Button and TRON: Legacy. "Quite honestly, I thought I'd moved away from doing that because it's excruciating. I used to joke with people that I had property in the Uncanny Valley, and it's really difficult to get rid of. No one wants to live there, and when you finally move out of there, you really don't want to go back. And so I said, 'Look, we're going to make the best puppet we possibly can. We'll put a headcam on our actor that we'll cast, we'll get his performance and we'll get the audio from that performance. On the day, we'll play that back for the cast so that's what they're reacting to. But it just means you have to have all those things done ahead of time and be happy with those choices.' It's easier said than done but that's exactly what we did."

    As noted, Legacy Effects delivered a Rook animatronic puppet for use on set for filming in Budapest. The plan, then, was to augment the puppet's movements digitally. "Our puppet was never going to look photorealistic from its mouth movements," advises Barba. "We wanted the stuff coming out of its side, too. Initially, we settled on a 3D approach but that approach became time consuming and costly, and we were on a modest budget and a shortened back-end post schedule."

    "Fede felt strongly about the deepfake technology," adds Barba. "I actually brought a wonderful artist into post, Greg Teegarden. I said, 'Look, I want you to do deepfake just on the eyes for our preview screenings and let's see.' We were very lucky that we got the studio on board and we pulled the original 4K scan of Alien, of all the Ian Holm photography. We started building a model, and we used that model to do the initial director's cut. We had something there other than the puppet. And I can't tell you how exciting that was when we first saw stuff. ILM also did a test and it brought that puppet alive and Fede felt even more strongly about how we should do this."

    To finalize the Rook shots, knowing that budget and timeline were critical, Barba then called upon his former boss Ed Ulbrich, now chief content officer & president of production at Metaphysic, which has broken into the machine learning and generative AI space, including with digital humans. Says Barba: "I was super excited about what they could offer, and I said, 'Well, let's do a test and show Fede.' And that's what we did, and that's what led us to using Metaphysic, which really helped us solve a lot of problems."

    "They have amazing AI tools that you can't do with just a deepfake or even without more 3D trickery," says Barba. "They could re-target our eyelines. They could add blinks, they could make adjustments from the head-cam footage. They wrote software to drive our solve and then they could dial in or out the performance if Fede wasn't quite happy with it. Metaphysic was able to give us those tools, and I think they did a great job. We threw them a lot of curve balls and changes."

    One particularly challenging aspect of Rook was the many lighting conditions the android appears in, as well as being displayed on black and white monitors on occasion. "The thing that surprised me the most was how well the monitor shots worked right out of the box," comments Barba. "Fede's mantra was going back to the analog future. Everything needed to have that look."

    To get the look, the director sought out a specific JVC camera that had been used on Alien (1979). "Fede loved the look of the head-cam shots and monitor shots," notes Barba, "especially that burning trail you see sometimes in 1980s music videos. He said, 'Ah, we've got to match that.' So we did. We literally got that camera and we started shooting with it in principal photography. And then it broke! It lost its ability to focus. Everything started becoming soft. We were in Budapest and it was the only one we could find and no one knew how to fix it. So, we ended up shooting it on other cameras and then Wylie Co. matched the look and did all the screens to keep it concise and cohesive throughout. They did a great job making that look work."

    Relating also to those monitor shots of Rook was the fact that the animatronic had been filmed without a CCTV-like camera positioned in the frame, that is, without something that would show how a monitor shot of Rook would be possible in the first place. So, a camera was added in via visual effects. And the artist responsible for that work was none other than the director, Álvarez. (It's worth looking back at Álvarez's own early days in VFX and directing at his YouTube page, something he discussed in detail at the recent VIEW Conference.) Below, from VIEW Conference, a shot of the Rook animatronic without the camera in place, and one where it has been added to the scene.

    Go further into Alien: Romulus in the print magazine.
  • BEFORESANDAFTERS.COM
    Video to 3D Scene tech showcased in Wonder Animation beta release
    It's part of the AI toolset from Wonder Studio that will let you film and edit sequences with multiple cuts and various shots and then be able to reconstruct the scene in 3D space.

    Wonder Dynamics, an Autodesk company, has launched the beta of Wonder Animation, a new tool in the Wonder Studio suite that transforms video sequences into 3D-animated scenes. Capable of handling multiple cuts and complex shots, Wonder Animation reconstructs these scenes in 3D space, and then makes them fully editable.

    It's now available to Wonder Studio users. You can find more info in Autodesk's blog post and in the video below.
  • BEFORESANDAFTERS.COM
    Gladiator II SFX supervisor Neil Corbould convinced Ridley Scott to reimagine the rhino
    Special effects supervisor Neil Corbould on finding some old storyboards from the abandoned rhino sequence in the first Gladiator film and showing them to Ridley Scott, who decided to include the rhino in Gladiator II.
  • BEFORESANDAFTERS.COM
    On The Set Pic: Uprising
    Credit: Lee Jae-hyuk/Netflix
  • BEFORESANDAFTERS.COM
    The visual effects of Percy Jackson and the Olympians
    Visual effects supervisor Erik Henry on using ILM StageCraft, and on the many creatures of the show.

    Today on the befores & afters podcast, we're chatting to visual effects supervisor Erik Henry about the Disney+ series Percy Jackson and the Olympians. It's a show with a multitude of creatures and also one that has utilized ILM's StageCraft LED volume and related tech for filming.

    Erik goes into detail about how various creatures were filmed on set with stuffies or partial make-up effects and bucks, and then about how the vendors created the final CG versions. Some of those vendors were ILM, MPC, Raynault FX, Storm Studios, Hybride and MARZ.

    Check out some shot breakdown stills below (captions): Chimera previs. Chimera background plate. Chimera fire element. Chimera final comp from MPC. Minotaur motion base. Minotaur animation dev. Minotaur final shot by ILM.
  • BEFORESANDAFTERS.COM
    The making of U2's Vertigo
    A newly released behind the scenes doco showcases BUF's work for the music video.
  • BEFORESANDAFTERS.COM
    How miniatures were made on Alien: Romulus
    Miniature effects supervisor Ian Hunter and Pro Machina teamed up for the film. An excerpt from befores & afters magazine in print.

    As well as creatures, Alien: Romulus features a number of space environments, the Renaissance research station, the Corbelan hauler, the Weyland-Yutani Echo space probe and other spacecraft. Some of these elements were initially considered as effects tasks that could be handled with miniatures, as VFX supervisor Eric Barba relates. "We really wanted to shoot as much as we could as miniatures, but at some point the budget and number of days you have to shoot pokes up. And then also the action we wanted to stage didn't lend itself to an easy motion control shoot with miniatures."

    Ultimately, the Corbelan and the probe were built in miniature by Pro Machina Inc., with Ian Hunter as miniature effects supervisor. "We shot half a dozen shots but in the end were able to use just a few shots of the Corbelan model," states Barba. "The probe was built practically but it was entirely digital in the film. The thing is, we got such great models to use from amazingly talented model makers and that gave us exactly what the CG team then had to do to match them."

    In terms of the work by Pro Machina, Gillis and his co-founders, Camille Balsamo-Gillis and Reid Collums, partnered with Hunter to build the Corbelan hauler and the Weyland-Yutani Echo space probe as models for the film. Pro Machina came about from a desire to, Gillis explains, "have under one roof the ability to build all sorts of miniatures, as well as the creature effects and props. I invited Ian to come in as a freelance VFX supervisor and also keep working with Camille, who had been his producer on several projects already."

    "When I told Fede that we also build miniatures and that I had two-time Oscar winner Ian Hunter, Fede's eyes lit up. I mean, it was not just me recommending Ian. I don't want to take credit for that because Ian's work stands on its own. In my opinion, he is the premier miniature effects and VFX creator. So, they took it from there. I provided the space and the structure, but it's Ian and Camille who run the show."

    As noted, the two miniatures, the Corbelan and the probe, were built and then filmed for a number of shots, some of them against LED walls for light interaction. "Those two models were scanned and used as the basis for the digital models that ILM created," describes Gillis. "I think that ILM did a spectacular job with them. They're very tactile looking. Of course, the foundation of them are the actual miniatures, which were so great. I liked the approach, where we started with something practical."

    "In fact, I think we need more of the hand-off happening with models because they still have a tremendous amount to offer. Our practical work is a 120-year-old craft, and it is real. So, let's use it where we can. Let's use the right tool for the right moment. I just hope the fans appreciate the hand-off because I don't want to diminish anyone's art, I want to enhance. That's what the goal always is."

    Go further into the film in the print magazine.
  • BEFORESANDAFTERS.COM
    Roadtesting Rokoko's Smartgloves and Coil Pro
    Matt Estela fires up this motion capture kit from Rokoko for a test run of their Smartgloves and Coil Pro.

    You may know Matt Estela from his Houdini activities, or his incredible CG resource CGWiki. Matt is currently a Senior Houdini Artist at Google and previously worked in VFX and animation at Animal Logic, Dr. D Studios and Framestore CFC.

    Matt likes tinkering with new tech, so I asked him to road test Rokoko's Smartgloves and Coil Pro, two motion capture offerings from the company. Here's how Matt broke down the tools. (And, yes, befores & afters' ON THE BOX series is back!)

    TL;DR: it's good

    It captures fingers nicely, position tracking with the Coil Pro works great, calibration is fast, the capture software is easy to use and exports very clean FBXs. It's a little pricey as a full package, but worth it all things considered, and Rokoko support is great.

    My Background

    While I'm known as a minor Houdini celebrity in very small social circles, I actually started in 3D as a character animator in 1999/2000. It took about a year to realize I didn't have the patience or dedication for it and moved into more technical roles, but 24 years later I still love and appreciate quality animation.

    My move into tech plus my background as a failed animator meant when Ian offered these gloves to review I jumped at the chance.

    Tech Background

    Broadly, mocap tech falls into several categories:

    • Dedicated optical
    • Dedicated IMU
    • Machine learning
    • Adaptation of smartphones and VR headsets

    At its core, mocap needs to know where a joint is in 3D space. Optical uses multiple cameras to identify bright dots on a suit, triangulates where those dots are based on all those cameras, and then calculates an absolute position for each dot. While optical is very accurate, it is also very expensive; these systems require special high speed cameras, ideally as many as possible, with associated dedicated infrared lighting, which all need to be carefully calibrated in a dedicated performance space.

    IMU (Inertial Measurement Unit) systems like Rokoko's don't directly solve the absolute position of joints, but calculate it from acceleration. Cast your mind back to high school physics, and remember how position, velocity and acceleration are linked. Velocity is a change in position over time, and acceleration is a change in velocity over time. IMU sensors measure acceleration, and you can run those high school equations in reverse: use acceleration to get velocity, use velocity to get position. Because IMUs are self-contained they don't require cameras, meaning they don't suffer from the occlusion issues of optical systems. While IMU systems are not as accurate as optical, they are substantially cheaper. (A short numerical sketch of the drift this integration causes appears at the end of this article.)

    Machine learning has been a recent addition to the space, where systems guess the pose of a human based on training data. They produce adequate results for real time use, but achieving the quality required for film and games requires offline processing in the cloud, which can be a concern for some.

    The final category is adapting smartphones and VR headsets. Both have cameras and IMU sensors on board, and also increasingly feature on-board machine learning for hand tracking and pose estimation. Quality is variable, and capture is limited to motions that can be comfortably done while holding a phone or wearing a headset.

    Smartgloves

    In 2020 Rokoko launched the Smartgloves, one of the first commercial systems to offer hand tracking at a reasonable price point without requiring the skills of a dedicated motion capture facility. It also offered the ability to integrate with the Smartsuit to provide an all-in-one solution for body and hand mocap.

    I had the chance to test these gloves shortly after launch. My experience with mocap at that point was a handful of optical systems for some university research projects, and dabbling with some smartphone systems for facial capture and early VR apps for hand and head position capture.

    This put me in an interesting space; I hadn't tried any IMU systems, and so was judging the gloves based on experience with the optical body capture and VR hand capture systems mentioned above.

    I tested them for a couple of weeks, and my initial verdict was "oh, they're ok I guess". The gloves did exactly what they were designed to do, capture fingers, but as someone who talks with their hands a lot, my expectation was that the gloves would capture full hand gestures, which, if you think about it, means understanding what the wrists and elbows are doing for full Italian-style gesticulation silliness.

    Further, because I was only wearing the gloves (and clothes, c'mon), it was natural to try and focus on hand-centric animation: clapping, typing, steepling fingers etc. Again, the gloves in their original format aren't really meant to do this. Think about the limitation of IMU: there's no ability to know where the hands are relative to each other, and they can't detect if you're really perfectly still or moving veerrryyy slowly at a constant velocity.

    This all manifests as drift; do a clap, for example, and very quickly the hands end up intersecting each other. Hands on a desk will slide away, the overall body pose starts to rotate, etc. If your needs are broad body gestures maybe this is fine, especially for Vtubers and similar fields where high accuracy isn't an issue.

    At its core, IMU on its own is incapable of the accuracy needed to capture hand gestures. Again back to high school physics: that process of acceleration -> velocity -> position is affected by sensor accuracy and the limits of real time calculation. The numbers aren't perfect, the sensors aren't perfect, meaning results drift. There are ways to compensate for this, e.g. the Smartsuit understands the biomechanics of a human skeleton to make educated guesses of where the feet should be and how knees should bend, and if paired with the gloves, can drastically improve the quality of the hand tracking. But without the suit, and without other sensor data, two handed gestures would always be difficult.

    Rokoko themselves of course know about the limitations of IMU, and had plans to augment this.

    Coil Pro

    Fast forward a few years, and Rokoko released the Coil Pro. This uses another technology, EMF (electromagnetic fields), in conjunction with IMU, to be able to calculate worldspace positions. It promised results like the worldspace positions of optical, without the occlusion issues of optical, and especially without the cost of optical.

    Rokoko mentioned this was coming soon back in 2020, time passed, I forgot. In 2024 they got in touch again, unsurprisingly getting it to market took longer than expected, and asked if I'd be interested in trying again. Of course I was.

    Setup

    The Coil arrived, about the size of a small guitar practice amp. Special mention has to be made for the unboxing process, an amusing bit of showmanship.

    The install process was pretty straightforward: connect it via USB to your computer to register it, then disconnect it. It doesn't require a permanent connection to your computer, only power (which is also delivered via USB), so it's easy to move to a convenient location.

    A new pair of Smartgloves also arrived with the Coil, and they needed to be registered and have firmware updates. This took longer than expected, mainly because I'm an idiot who didn't read the manual carefully enough; the gloves need to be updated one at a time. Special shout-out to Rokoko support, who were very helpful, and logged in to my machine remotely to identify issues. Everyone at Rokoko pointed out I wasn't getting special treatment, this is the level of service they offer to all their customers.

    Once the gloves were updated and registered, the final setup step was how you bind the gloves to your avatar within the Rokoko software. By default the gloves float free in virtual 3D space, which worked, but the results felt a little drifty and strange. Again my dumb fault for not reading the manual: support advised me to bind the gloves to a full body avatar, despite not having a full Smartsuit.

    Suddenly the result was a lot more accurate. My understanding is that when linked this way, the software can use biomechanics to make better estimates of the wrist and arm positions, leading to a much more accurate result.

    In use

    With everything set up, I was impressed at how invisible the process became. Previous mocap tests with optical and smartphone/VR headset systems constantly reminded me of their limitations; optical will guess strangely under occlusion, and ML systems will often make a plausible but incorrect guess of limb locations. With the Smartgloves and Coil, I never got these glitches; it just feels like an on-screen mirror of my actions.

    Calibration is very straightforward: hit a neutral pose, hold it for 3 seconds, done. Calibration for optical systems has taken a lot longer. Once calibrated, hit record, do a motion, then stop recording. You can review the action immediately, and re-record if you choose.

    Exporting to FBX is very easy, and the files loaded into Houdini without issues.

    Example: Testing a laptop

    Many times I've had ideas for little animations, but I'd get 10% into the process, get bored, stop. Similarly I'd have ideas for stuff that I might film (I was one of the dweebs who got one of the first video-capable DSLRs thinking I'd be the next Gondry), but again the effort to finish something was just too much.

    Once the gloves were set up and I could see my avatar on screen, I started testing scenarios in realtime: how cartoony could I move my arms, how did the body adjust based on the hand positions, how well typing was captured. Quickly I improvised a scenario where testing a laptop goes wrong, the tester reacts, panics. I could let it record, try a few different takes, play it back. It was the ideal blend of animation output, but with the spontaneity of improv and acting.

    The limitations of doing full body capture with only gloves led to some fun problem solving. How would the character enter and exit? I couldn't capture walking easily, but maybe they'd be on a segway? Again I could test quickly, export a clip, blend it and the previous take in Houdini, be inspired, try something else. Here's the end result:

    Example: Newsreader

    A friend was curious about the gloves, so I asked what sort of motion they might record. He said a newsreader talking to camera, shuffling papers, that sort of thing.

    Out of curiosity I timed how long it took from getting the suggestion to having it play back in Houdini; it was 5 minutes. 5 minutes! Amazingly, 2 minutes of that time was waiting for a Rokoko software update. The result was perfectly usable, glitch free, no need for cleanup. That's pretty amazing.

    What surprised me with this test was how the Rokoko software animated the body, even though I was only recording motion via the Smartgloves. The software uses the hand data to estimate what the rest of the body is doing; it's possible to split off the hands in a DCC, but not only was the body estimation pretty good, it had way more character than I expected.

    Comparing to alternatives

    Full disclosure, as stated earlier I'm not a mocap person, so what follows is the result of some quick Google/YouTube research.

    The main competitors appear to be Manus and StretchSense. A key differentiating factor is that Manus and StretchSense are designed to be used with another mocap system, while Rokoko are pushing a unified body+gloves package.

    As such, this makes direct comparisons a little tricky. All three systems track fingers, but to get accurate hand positions where collisions matter, all need augmentation: Rokoko via the Coil, Manus and StretchSense from an optical system like an OptiTrack. If the Manus or StretchSense gloves are paired with an IMU system like Xsens, their ability to track claps and other two-handed gestures will be limited.

    Cost is relevant here too: the Smartgloves and Coil combination is cheaper than either of the top of the line options for Manus or StretchSense, and those two options would still require another system to do accurate positional tracking. There are analogies to be made here to the Mac vs PC world; Rokoko are pushing a single Apple-style ecosystem, while the others are modular and designed to work with a few different systems.

    Moving away from dedicated systems, there's the option of using a Quest headset to track hands. The Glycon app is cheap and cheerful, but suffers the same issues of occlusion; if the cameras on the headset can't see your fingers, it will guess what your fingers are doing, often incorrectly. The location of the cameras means your most neutral hands-by-sides idle pose is not tracked well. Further, while a mocap suit+gloves setup is designed to handle extreme motion, a VR headset is not, so you're limited to gestures you can do comfortably and safely while wearing a high tech shoebox on your face.

    The final alternative is keyframing hand and finger animation manually. Hand animation is invisible when done well, but draws attention to itself when done poorly. Like faces, our brains are so tuned to the behaviour of hands that we spot bad hand animation immediately. To get hand animation comparable to the results I captured, even for relatively simple idle and keepalive animations, would take hours to keyframe. If you require lots of hand animation, that time (and artist cost) adds up quickly.

    As a very rough matrix of options, the full list looks like this:

    Other thoughts

    It was interesting chatting with Jakob Balslev, the CEO of Rokoko. It reminded me of the old adage about the difference between theory and practice: in theory there is no difference, but in practice there is.

    The basic theory of using IMU and EMF for motion capture makes sense, but the engineering work required to make it happen, to get to manufacture, to hit a price point, is huge. Hardware is way harder to develop than most of us take for granted. Jakob quipped, "we would probably have never started on it if we knew how hard it would be, but now we are glad we did!" It was also interesting to hear how lessons from each product informed the next, so the gloves are better than the suit, and the Coil is better than both. The tricky part is they're all meant to work together, an interesting balancing act. Rokoko definitely seem to love the work they do, and are constantly refining their products with both software and firmware updates.

    Conclusion

    As I said at the start, it's good. It solves many of the issues that exist with older or cheaper hand setups, while avoiding the cost of more advanced setups. I was impressed that while all mocap gloves are expected to only track fingers and maybe some wrist rotation, I was able to get some fun and plausible full body mocap with very expressive arm animation. If your mocap needs are largely finger and hand based, and occlusion issues with AI or Quest setups have bothered you, the Smartgloves and Coil are an ideal solution.

    Bonus: testing with a full suit at UTS ALA

    I'd been chatting with friends at UTS ALA who had the Smartsuit and Smartgloves. As far as we could tell the studio space is made of magnets, iron filings and Van de Graaff machines; as a result the system never worked as well as they hoped. Alex Weight, the creative lead at ALA, is a very experienced film character animator and director, and found that while the system might have been ok for, say, a Vtuber, it wasn't at the level he needed for previs; hands would drift through each other too easily, legs would kick out at strange angles, no matter how much calibration or wifi adjustments they made.

    Rokoko were happy for me to pop down with the Coil and test. Given their previous results the team at ALA were a little sceptical, but after the Coil was set up, the difference was astonishing. Practically no drift, worldspace positions of the hands remarkably clean. We got increasingly ambitious with tests: holding props, picking up a chair, leaning a hand on a desk all worked exactly as expected. I know that Rokoko are working on a big firmware upgrade for the suit that will improve results with the Coil further still.

    Do you have a product (hardware or software) that you'd like to see in befores & afters' On The Box? Send us an email about it.
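    To make the drift described in the Tech Background section concrete, here is a minimal, hypothetical Python sketch (not Rokoko's code; the sample rate, bias and noise values are invented for illustration) of naively double-integrating a biased accelerometer signal for a hand that never actually moves:

        import random

        # Illustrative only: why raw IMU double-integration drifts.
        # A hand that is actually standing still is "measured" with a small
        # constant bias plus noise, then integrated acceleration -> velocity
        # -> position (the high school equations run in reverse).

        DT = 1.0 / 100.0      # assumed 100 Hz sample rate
        BIAS = 0.002          # assumed 2 mm/s^2 constant sensor bias
        NOISE = 0.01          # assumed +/- 0.01 m/s^2 random noise

        velocity = 0.0
        position = 0.0

        for _ in range(100 * 60):                      # 60 simulated seconds
            true_accel = 0.0                           # the hand never moves
            measured = true_accel + BIAS + random.uniform(-NOISE, NOISE)
            velocity += measured * DT                  # accel -> velocity
            position += velocity * DT                  # velocity -> position

        print(f"Apparent drift after 60s of standing still: {position:.2f} m")
        # The bias alone integrates to roughly 0.5 * BIAS * t^2, about 3.6 m of
        # positional error after a minute, even though the hand never moved.

    Even a tiny constant bias grows quadratically with time once integrated twice, which is why IMU-only hands slide and intersect, and why an absolute reference, whether optical markers or the Coil Pro's EMF positioning, is needed to pin the result down.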
  • BEFORESANDAFTERS.COM
    iClone 8.5 Free Update: SIM Builder with Prop Interaction & Smart Accessory
    Digital Twin and Crowd Simulation with Smart Environment Interaction.

    The automation of animated crowds has been widely used in the movie and entertainment industries. As the demand for digital twins, machine learning, and AI training grows, a new industrial revolution is emerging, particularly in areas like autonomous driving, smart surveillance, factory automation, and intelligent consumer devices.

    These fields require sensors or cameras not only to see but also to understand the world around them, especially in recognizing human behaviors. To achieve this, there is a growing need for applications that can generate realistic scenarios for AI training purposes, taking into account various angles, lighting conditions, and occlusions. This development is bringing the automation of 3D crowds to a new stage, with a higher level of interaction and realism.

    As a leader in 3D character animation, Reallusion has combined its expertise in character generation and automated animation, transforming iClone into a simulation platform. With the latest release of iClone 8.5, users can rapidly build live environments, from crowd simulation to world interaction, with the ability to automatically load and manipulate accessories.

    iClone 8.5 New Release

    iClone 8.5 introduces two key innovations: World Interaction and the Smart Accessory system. Building on core features like Motion Director and Crowd Sim, these enhancements empower the creation of dynamic environments. In these interactive spaces, 3D characters can explore, operate props, and seamlessly load and manipulate accessories through interactive triggers or motion files.

    WORLD INTERACTION: Engaging Environments with Interactive MD Props and Intuitive Controls

    MD Props present advanced features designed to complement iClone Crowd SIM, marking a significant leap forward in the ways that 3D actors interact with their virtual environments.

    Give Life to Props

    MD Prop Tools gives iClone characters the ability to interact with 3D environments. By replacing the proxy components with custom 3D models, MD props manifest into visceral objects that 3D actors engage with. All manner of interactive behavior can be generated from just five templates representing the core of MD TOOLS.

    Intuitive Radial Menu

    MD Prop enhances user experience through its intuitive Radial Menu system, facilitating easy access to customizable, multi-level command groups and assignable hotkeys. Additionally, MD Prop introduces Self Interaction, enabling 3D actors to perform gender-specific and profile-adaptive actions such as using a phone or smoking, enhancing realism.

    Action List & Concurrent Behaviors

    Action Lists support the chaining of multiple motions to extend action sequences. When the scene simulation is paused, individual Action Lists can be assigned to every character in the scene. Upon playback, all the appointed characters will move simultaneously according to their own set of instructions.

    Prop Customization

    Customizing an MD template tool is straightforward: simply replace the proxy objects with a 3D model of your choice. Once the prop appearance is finalized, you can assign new animations or adjust the actor's associative movements with minimal tweaks. This includes repositioning reach target positions to account for hand placements and varying arm lengths, as well as adjusting the look-at point to focus the character on specific features of the prop.

    Object Animation

    MD Prop offers robust tools for animating the prop itself. Creators can freely move, rotate, and scale any 3D item, with additional options for object morphing and particle effects. Morph Animator in iClone allows for intricate morph animations, while PopcornFX enables particle effects for added realism. Additionally, texture UVs can be adjusted and scrolled to create dynamic visuals, and video textures can be used to mimic interactive slideshows or film sequences.

    Reactive Crowd Behavior

    Beyond just triggering interactive behavior from individual actors, the MD Prop system enables autonomous crowd behavior. Crowd characters no longer roam aimlessly; they can now gravitate toward points of interest and even explore various props around the scene. Groups or individuals can find a spot to sit, visit the vending machine, or gather around a busker show, unleashing endless possibilities for dynamic crowd interactions.

    Smooth Motion Transitions

    iClone Turn to Stop and Multiple Entry functionality simulate natural human behavior in approaching and engaging with a target prop. This includes slowing to a stop, making a turn, and approaching with measured pace, distance, and orientation. MD props take care of the rest with smooth and natural interactive motions that are tailored to the gender and characteristics of the interfacing actor.

    Multi-Platform Export & Render

    Exporting these creations is seamless, with iClone Auto Setup plugins available for 3ds Max, Maya, Blender, Unreal Engine, and Unity. Live Link support also allows for synchronization with Unreal Engine and NVIDIA Omniverse, providing a two-way workflow that's essential for modern animation and game development pipelines.

    Learn more about World Interaction.

    SMART ACCESSORY: Easy Motion Editing and Automatic Accessory Attachment

    Smart Accessory is a cutting-edge system that dramatically streamlines the process of editing motions and integrating accessories in animation projects.

    MotionPlus Pairing with Dynamic Accessories

    The iClone MotionPlus format seamlessly integrates facial performance and accessory metadata for complex animations like cycling and skateboarding, where precision in motion and accessory alignment is essential. MotionPlus automates accessory attachment, randomizes models and materials, and ensures perfect synchronization between character movement and accessory interaction.

    Creating Smart Accessories

    The Smart Accessory system offers robust features for creating and customizing accessories. With Motion-Accessory Pairing, animators can assign multiple accessories to specific motions, restoring default accessories or assigning new ones as needed. This flexibility allows for highly personalized animations, where characters can interact with various accessories in a lifelike manner.

    Creating Animated Accessories

    The ability to synchronize human motion with accessory animations adds a new layer of realism to animated projects. The Smart Accessory system simplifies this process, allowing for seamless data handling and enhanced visual fidelity.

    Flexible Motion Controls

    One of the most powerful features of the Smart Accessory system is its support for a wide range of motion controls, which are essential for crowd simulation and interactive animations made by Motion Director.

    Learn more about Smart Accessory.

    ACTORCORE: Library of 3D Assets

    Discover a vast collection of high-quality 3D content in ActorCore. From mocap animations and hand-keyed motion to fully rigged characters, accessories, and props, everything you need to enhance your scenes is in store. Why wait? Take your 3D simulations to the next level with this extensive content library today.

    MD Prop Expansion Packs: Indoor & Outdoor Interaction

    The STAYING AT HOME and DOWN THE STREET expansion packs provide numerous ways to showcase the versatility of MD Props. Each prop includes both male and female animation sets, offering unique gender-specific performances. Easily swap the MD Prop placeholders with your custom models or adjust the animations to fit different interactive scenarios.

    The iClone 8.5 Grand Release is a free update for all iClone 8 owners! New users can download the Free Trial to experience advanced virtual simulation with intuitive character controls, motion editing, and Smart Accessories.

    Brought to you by Reallusion: This article is part of the befores & afters VFX Insight series. If you'd like to promote your VFX/animation/CG tech or service, you can find out more about the VFX Insight series here.
  • BEFORESANDAFTERS.COM
    Starting a startup in the world of VFX training
    Urban Bradeško, founder of DoubleJump Academy, discusses launching a VFX training company and what it took to get there.

    Today on the befores & afters podcast we're chatting to Urban Bradeško, CEO and founder of DoubleJump Academy. DoubleJump is a VFX and CG learning platform that leans heavily into Houdini and Unreal Engine.

    I was interested in talking to Urban about the Academy not necessarily about specific courses, but about the challenges in starting up a VFX training company. So, we talk about getting started, why he thinks DoubleJump is different than what's already there, what's tricky in the current VFX climate, crafting a community in this industry and, yes, how to deal with AI.
  • BEFORESANDAFTERS.COM
    Watch Barnstorm's VFX breakdown for Deadpool & Wolverine
  • BEFORESANDAFTERS.COM
    'Why don't we just put a facehugger on top of a radio-controlled car and drive it around?'
    The wide variety of practical facehuggers made by Wētā Workshop for Alien: Romulus. An excerpt from befores & afters magazine in print.

    The central poster for Alien: Romulus features one of the human characters being dramatically hugged by a facehugger, the film franchise's parasitoid, multi-limbed alien creature.

    It certainly gave a clue as to what audiences could expect in the Fede Álvarez movie. What the audience would eventually see was a whole host of facehuggers menacing visitors to the abandoned Weyland-Yutani research station, the Renaissance.

    The on-set facehuggers, of which there were several varieties, including ones that scamper along floors and walls, others that could enter a human host via its mouth and a long proboscis, and even ones that were remote-controlled cars, were realized by Wētā Workshop for the film, under creative lead Richard Taylor.

    Wētā Workshop also built the F44AA pulse rifle used by Rain (Cailee Spaeny) in the film. Here, Rob Gillies, general manager of manufacture, and Cameron May, a supervisor of robotics and animatronics, explore with befores & afters how Wētā Workshop brought the practical facehuggers to life, and manufactured the pulse rifle.

    The facehugger build

    Drawing upon concept designs for the facehuggers established by production, Wētā Workshop set about a build methodology for the many varieties needed for the film. "What was apparent on Romulus was that the facehuggers are all through it," declares Gillies. "We ended up delivering 73 facehuggers to the show, which is an incredible number. They ranged from animatronic facehuggers to what we call comfort huggers that you'd wear on your face with breathing mechanisms, to static prop facehuggers. There were all these different variations and iterations that were called out in the script that we then developed a build list of. From there we could design and build these creatures specifically to the needs of the show. A lot of these breakdowns of the specific gags and builds were masterminded by Joe Dunckley, one of our manufacturing art directors."

    The build would be driven primarily around both practical and aesthetic considerations, as May points out. "We paid specific attention, for example, around what the knuckle joints look like. How were they going to be big enough so that we could actually practically make these things work? How is the skin going to interact with the mechanisms underneath? We were actively thinking about those things and as we were trying to refine the design aesthetic around it, we were trying to already formulate a plan for how we were going to build these things and turn them into practical puppets so we didn't back ourselves into a corner."

    3D printing and the generation of mass molds for large-scale casting reproductions allowed Wētā Workshop to produce so many of the critters. "To get the product out wasn't the heart of the challenge," notes Gillies. "For us, it was ensuring that the facehuggers actually looked lifelike and could actually wrap around someone's head or breathe with the performer's body. That was actually the true tricky part."

    To ensure that occurred, it was vital for Wētā Workshop to break down the specific gags that the facehuggers would be required to perform. "We visually broke those and mapped those down," outlines May. "We said, 'Right, that's going to have a movable joint over here and this is going to have this type of control. And we're going to have rods that are going to go on here. Or, this is going to have this type of digital mold that we're going to use to create a silicone cast from.' Even though there's such a complex array of them, we were able to break those things down so we had a nice structure in terms of how we were going to approach them. That ended up working really well."

    In general, the facehuggers were crafted with an aluminum interior armature, 3D printed nylon joints and silicone skin, with different additional materials used depending on whether the creatures were animatronic or more static. "Even though the facehuggers' movements are quite different, a lot of their end joints were identical just to keep a design language that was quite consistent amongst them," says May.

    Go much further in-depth on the facehuggers in the print magazine article.
  • BEFORESANDAFTERS.COM
    'You're going to get slime on yourself, OK'
    How Legacy Effects crafted Xenomorphs, cocoons, Rook, the Offspring and a lot of slime for Alien: Romulus. An excerpt from befores & afters magazine in print.
For James Cameron's 1986 Aliens, Legacy Effects supervisors Shane Mahan and Lindsay MacGowan had both worked on the Stan Winston creature effects crew, including on the imposing alien Queen for that movie. More than three decades later, they would return, alongside fellow Legacy Effects supervisor J. Alan Scott, to produce a number of creatures and practical effects for Fede Álvarez's Alien: Romulus. These creatures included the Xenomorphs, birthing cocoons, an animatronic Rook, and the Offspring.
issue #22 Alien: Romulus
Re-visiting the Xenomorphs
Legacy Effects made a number of Xenomorphs for the film: a full-sized animatronic, a lighter-weight animatronic bunraku puppet, blow-apart marionetted pieces for the zero-gravity explosion, and two suits, worn by actor Trevor Newlin, that bridged the different Xeno puppets and pieces. They also worked on the Xenomorph retrieved during the film's prologue.
The Xenomorphs, of course, had to appear recognizable to audiences, but were also creatures that Legacy Effects could take to new places, since they were intended to have been genetically engineered from the captured creature from Alien. "As soon as we met Fede," recalls MacGowan, "we started talking about the whole franchise and how the aliens had evolved over all the decades. What was so great about the original was how imposing it was and how it moved. It was slow and it didn't care. So, from a design perspective, we wanted to make sure that we kept it really tall and, in fact, taller than the original to get back to that really imposing aspect of the character."
"Then, in terms of color," continues MacGowan, "from Alien 3, the Xenos had become much more brown in tone, and we wanted to get back to the dark, imposing, bug-like aspect of it. Fede was all about that. For the texture of our Xenos, instead of it just being the smooth, slick look that so many of the Xenos have, Fede wanted it to feel like it would be like having little razors cutting you. We added beetle-like texture to the surface of it per his request, and I think it really worked really nicely."
This differed from the high-gloss look of the Xenos that the Stan Winston creature team had done on Aliens. "That was very, very smooth, very shiny, very reflective surfaces," says Mahan. "Well, this was the complete opposite. It was very, very sharp and very edgy and very coarse, except for the dome. The dome remained very, very glossy. But we did want to return to basic elements of the subliminal skull under the dome. We wanted to return to the metallic teeth, which are vital to the first Xenomorph, since this film takes place between the first film and the second film."
Another design aspect implemented in the Xenomorphs this time around came from inspiration from the Aliens Queen creature effects, as MacGowan shares. "We always loved the way the Queen's face came out of her cowl and she could actually move her face around. The original Xenos don't have that. You'd have to move the entire head to get it to move around. So for our Xeno, the face could move side to side, up and down, and it has a shingling of the dome. It is clear but they shift over one another. This way we got a little bit of the Queen into the Xeno."
The dome, in particular, was further made to appear translucent, as were parts of the panels in the Xenomorph arms and legs. This, again, was something new the Legacy Effects team could bring to the creatures. "When we did Aliens, we just didn't have the ability of materials back then to accomplish what we wanted in terms of translucent materials," notes Mahan. "There's a small section of the top of the Queen's head, right at the top of her head for about eight inches, which is a urethane piece that's clear but it goes back into fiberglass. Back then, I wasted a lot of money and time trying to make a sublevel head. There was a whole head that had a top sublevel to it that had a secondary level of detail that got abandoned because it just became too difficult and too heavy to manufacture, where the cowl of her head had the exterior shape, but then it had a sublevel of Giger-esque detail under it that was cast and painted, then it had a urethane layer on top. It was just so tricky to do because it added a hundred pounds to the thing or more. We just gave up because the mechanics were too rudimentary at the time. We couldn't get the clear face to work. We couldn't get a lot of things to work just because it was 1986."
Cut to 2024, and this idea of employing see-through panels was re-visited for Romulus. "The idea was that the see-through panels were amber that you can actually see through on the arms and the legs and through the face," says Mahan. "They were made of clear silicone tendons and transparent pieces."
In addition, smoke stacks on the Xenomorphs were made to be translucent, with MacGowan observing that "when the light catches those, you can see through it a little bit. And that was a first for the Xeno."
"Another thing Fede suggested," adds Mahan, "was a vibrating device in the neck of the Xeno that makes the throat pulsate. It's just a very subtle but interesting effect and it's visible. You just see this shifting of skin. Little things like that add an overall effect which gives it some life."
Numerous sketches, concept art pieces and digital models made up the design of the Xenomorphs. The concept artists and digital design crew at Legacy Effects would then mechanically build the Xenomorph at full size in rudimentary shapes. "We then scanned that full-size robotic build," describes Mahan, "and we took the digital model of the Xeno and just slightly massaged it into place so that it fit perfectly over the shapes, before we manufactured the skins to fit, so that they just fit snugly over the mechanics. It's a little bit like reverse engineering to make sure all the mechanics we would make inside fit properly and did not distort the figure."
For the Xenomorph internal structures, Legacy Effects built in a number of internal channels that would pump fluid around. "That was done with an on-off switch radio control to have two different densities of fluid that we could just control from afar," discusses Mahan. "You could turn them on, you could turn them off. You're not running in and adding fluid to the mouth and stepping away and getting in the way. That was something we had to do on Aliens and I wish we would've had the ability to do this remote technique back then."
Go much further into Legacy Effects' work on the film in the print magazine.
The post You're going to get slime on yourself, OK appeared first on befores & afters.
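Purely as an illustration of that "reverse engineering" fit between the scanned mechanical build and the digital Xeno model: the actual fitting Mahan describes was artist-driven sculpt work, but a first rigid alignment pass could be sketched with an ICP registration like the one below. File names, units and thresholds are hypothetical; Open3D is just one library that offers this kind of registration.

```python
# Hypothetical sketch only: rigidly align a digital creature model onto a scan
# of the full-size mechanical build before any manual "massaging" of the skin.
# File names, units and tolerances are made up for illustration.
import numpy as np
import open3d as o3d

scan = o3d.io.read_point_cloud("mech_build_scan.ply")        # scanned robotic build
design = o3d.io.read_triangle_mesh("xeno_design_model.obj")  # digital Xeno model
design_pts = design.sample_points_uniformly(number_of_points=200_000)

# Point-to-point ICP: find the rigid transform that best seats the design
# model over the scanned mechanics (assumes the two are roughly pre-aligned).
result = o3d.pipelines.registration.registration_icp(
    design_pts, scan,
    max_correspondence_distance=0.02,  # metres, illustrative tolerance
    init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)

design.transform(result.transformation)
o3d.io.write_triangle_mesh("xeno_model_fitted.obj", design)
print("ICP fitness:", result.fitness)  # fraction of points with a valid match
```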
    0 Comments 0 Shares 99 Views
  • BEFORESANDAFTERS.COM
    See Mackinnon & Saunders' stop motion in Beetlejuice Beetlejuice
    Sandworm, plane sequence and more! The post See Mackinnon & Saunders' stop motion in Beetlejuice Beetlejuice appeared first on befores & afters.
    0 Comments 0 Shares 90 Views
  • BEFORESANDAFTERS.COM
    Fight Club is now 25 years old: we break down its stunning VFX
    A new VFX Notes episode is out! David Fincher's glorious, mysterious, spectacular Fight Club has just turned 25. The new episode, with Ian Failes and Hugo Guerra, looks back at the film and breaks down its incredible, invisible visual effects work. We dive deep into the photogrammetry side of things from Buf, and look at the variety of work from Digital Domain, the penguin from Blue Sky (!), plus VFX from other vendors. It was an extraordinary achievement from visual effects designer Kevin Tod Haug to oversee this work. Check out the video below, which includes a whole range of behind the scenes and VFX breakdowns. The post Fight Club is now 25 years old: we break down its stunning VFX appeared first on befores & afters.
    0 Comments 0 Shares 67 Views
  • BEFORESANDAFTERS.COM
    Giving the chestburster its moment on screen
    How Alec Gillis orchestrated the intense effects for the chestburster, and more, on Alien: Romulus. An excerpt from befores & afters magazine in print.
Creature effects designer Alec Gillis has a rich history with all things Alien. He worked on James Cameron's Aliens as part of Stan Winston's creature crew, and later with his Amalgamated Dynamics, Inc. company on Alien 3, Alien Resurrection, AVP: Alien vs. Predator and AVPR: Aliens vs. Predator Requiem. With Fede Álvarez's Alien: Romulus, Gillis has returned to the franchise via two new effects outfits. The first is Studio Gillis, through which he oversaw the chestburster effects and baby Offspring egg effects. Then there's Pro Machina, a practical effects outfit that built miniatures for the film.
issue #22 Alien: Romulus
Designing a chestburster like no other
In the film, a chestburster emerges from Navarro (Aileen Wu), which ultimately turns into a Xenomorph. When Álvarez came to Gillis for the chestburster effect, it was soon apparent that the director was looking for something different than had been seen before. "It was intended to be a lot less explosive than it has been," shares Gillis. "I thought that was a really great way to ground the effect. Even though it's not explosive, it's more like a slow birth. It's a little more grisly, in a way. I liked that Fede was interested in giving this chestburster some time to live and breathe and be alive."
"I always want to be respectful of the original source material from Ridley Scott and from H.R. Giger," continues Gillis. "We've seen Giger's designs from the '79 film for the original chestburster. It was kind of like a turkey; he had a different idea for it. But one of the unsung heroes in the chestburster lineage is Roger Dicken. He was the guy who actually built the chestburster. From what Ridley told me, he had a lot of design input as well. Those guys are all geniuses, so I was thinking, what are we going to do to build on the legacy? Well, Fede said that he felt like we had seen that violent, explosive chestburster moment, and he wanted it to be more like a live birth."
Concept artist Dane Hallett produced a number of designs for the chestburster. "Dane's work is very precise," applauds Gillis. "It's very exact. And so when I saw his chestburster design, and he also did the egg and the baby Offspring inside it, I thought, 'Wow, this is such great material.' We interpreted it in our way and brought the materials to it and the mechanism and Fede's ideas of how the chestburster should work."
Studio Gillis then began work on a digital sculpt of the chestburster in ZBrush, spearheaded by concept artist Mauricio Ruiz. "That 3D model went straight to my mechanical designer, David Penikas, who could start designing all the mechanisms in 3D," says Gillis. "He didn't have to wait for a mold to be made or a core to create the skin. I impressed upon him that we needed to have a more fluid and more organic mechanism than we've had in the past. He did a terrific job of it, alongside Zac Teller, with the animatronics."
An early consideration was size. Gillis points out that the chestbursters on Alien Resurrection were the same size as those in Alien and Aliens, but the desire on Romulus was to make the creature smaller. Partly this was because the actress, Wu, was more petite. "In the end, our chestburster is about half the size of a traditional one. I actually thought, 'Well, that fits the story. If we're compressing the time that it all happens, maybe this creature is like a premature birth.' You see it in that beautiful shot that ILM did where she holds the X-ray wand behind her back and you see the size of the creature inside squirming."
The director further requested that the chestburster color-shift during the scene. Gillis initially pushed back, suggesting that this could be something done digitally. But, he says, Álvarez was looking to realize the effect practically. "That meant we had to come up with a way of making the dome darken by injecting dark fluids underneath. We made a separate transparent dome on the top of the head and we injected a black fluid that would course under it. We created a capillary system in the head so it wouldn't be just like a smooth gradation. I wanted a kind of grisly, membranous, cartilaginous, fleshy look out of this thing."
Go even further on Gillis' work in the print magazine.
The post Giving the chestburster its moment on screen appeared first on befores & afters.
    0 Comments 0 Shares 64 Views
  • BEFORESANDAFTERS.COM
    Matt Estela: a life
    We chat to Matt Estela about Houdini, CG Wiki and his path from VFX into...a different world. Today on the podcast, we're talking to Matt Estela. I wanted to talk to Matt about his path in the industry because it's a fascinating one. He's now a Senior Houdini Artist at Google. Before that, Matt worked in many different areas of VFX and animation, including at Framestore CFC, Animal Logic, Dr. D Studios and several other places. In fact, we trace his career right back to his school days, and an Amiga 500. We also jump into what Matt has done with CG Wiki, his renowned resource for all things Houdini and a bunch of other CG-related things as well. I hope you enjoy the chat, and I hope hearing about Matt's career might inspire some of your own choices in VFX too. The post Matt Estela: a life appeared first on befores & afters.
    0 Comments 0 Shares 85 Views
  • BEFORESANDAFTERS.COM
    ILM's VFX breakdown for Deadpool & Wolverine is here
    Go behind the scenes. The post ILM's VFX breakdown for Deadpool & Wolverine is here appeared first on befores & afters.
    0 Comments 0 Shares 103 Views
  • BEFORESANDAFTERS.COM
    Watch Barnstorm's VFX breakdown for The Sympathizer
    See how the VFX studio made their visual effects for the show. The post Watch Barnstorm's VFX breakdown for The Sympathizer appeared first on befores & afters.
    0 Comments 0 Shares 102 Views
  • BEFORESANDAFTERS.COM
    Watch Folks' Harold and The Purple Crayon breakdown reel
    Including the plane sequence and the bug. The post Watch Folks' Harold and The Purple Crayon breakdown reel appeared first on befores & afters.
    0 Comments 0 Shares 89 Views
  • BEFORESANDAFTERS.COM
    3D Gaussian Splatting on Blender, In Its Truest Form
    KIRI's Blender add-on for 3DGS integration is free and open source.
3D Gaussian Splatting has been a revolutionary 3D capture and rendering technique, and has opened up huge possibilities for the entire 3D industry. The efficiency and fidelity of 3DGS (3D Gaussian Splatting) make it an excellent tool for quick visualizations; paired with real-time rendering, it can reconstruct reality convincingly in 3D space from only a couple of minutes of footage.
With the good comes the bad: the practicality of 3DGS has been quite limited, because it is a brand new algorithm with its own output format. The PLY files it produces carry additional per-splat properties describing the view-dependent rendering information, which makes them not directly compatible with most of the 3D editors people typically use.
Figure 2: 3D Gaussian Splatting visualization.
Many developers see the potential of this new 3D visualization technique and have been finding ways to make this file type usable in popular 3D editors. So far, the most stable plugins remain the free XVERSE 3D-GS UE Plugin and UnityGaussianSplatting, for Unreal Engine and Unity respectively. Other plugins exist, such as ReshotAI/gaussian-splatting-add-on for Blender and UEGaussianSplatting for Unreal Engine, but one is of limited functionality and the other is highly priced.
Understanding the demand for a high-quality, consistent add-on for 3DGS integration in Blender, the developers at KIRI Innovations have researched, produced and polished an add-on that allows direct visualization of 3DGS within Blender. It combines versatility and consistency, letting Blender enthusiasts take advantage of gaussian splats and render them into polished final products.
Figure 3: Splat rendered with Eevee.
The add-on can be found on either GitHub or Blender Market, and both are completely free to download. Further optimizations, such as cropping tools and mesh converters, will be added down the road.
To take full advantage of 3DGS and the new add-on, KIRI encourages users to capture 3D gaussian splats with KIRI Engine, use the in-app cropping tools to obtain the desired shape and parameters of the splat, and then export its native PLY format for final rendering in Blender with the add-on installed.
The add-on brings a few versatile features that allow for more seamless integration and practical use. It is compatible with the Eevee renderer, making it the only tool that supports that render method. A Camera Update feature keeps splats displaying correctly as you switch perspectives, with two update modes: Frame Change and Continuous. Because some of these processes can be heavy on hardware, there is also a Color Warning feature that lets you visualize the processing activity. The add-on also supports HQ (high quality) splat rendering, as well as multiple splats in the scene simultaneously.
Figure 4: Full add-on menu.
For more information, please refer to the dedicated user guide on KIRI's official website.
The release of this add-on brings an elevated experience for using 3D Gaussian Splatting within Blender, and the developers at KIRI Innovations believe it is only right to fully open-source the technology; the decision should encourage more artists and enthusiasts to bring into their workflows a cutting-edge 3D visualization technique derived directly from the real world.
Brought to you by KIRI Innovations: This article is part of the befores & afters VFX Insight series. If you'd like to promote your VFX/animation/CG tech or service, you can find out more about the VFX Insight series here.
The post 3D Gaussian Splatting on Blender, In Its Truest Form appeared first on befores & afters.
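As a quick illustration of the compatibility issue mentioned above, the sketch below peeks at the header of a 3DGS-style PLY export to show the extra per-splat attributes that ordinary 3D editors do not expect. The file name is a placeholder, and the property names follow the common Inria-style 3DGS convention, which may differ between capture tools.

```python
# Minimal sketch: list the splat-specific properties declared in a 3DGS PLY
# header. PLY headers are ASCII even when the payload is binary, so we can
# read lines until "end_header" without any external dependency.

def read_ply_header(path):
    """Return the header lines of a PLY file."""
    lines = []
    with open(path, "rb") as f:
        for raw in f:
            line = raw.decode("ascii", errors="replace").strip()
            lines.append(line)
            if line == "end_header":
                break
    return lines

header = read_ply_header("splat.ply")  # placeholder file name
props = [l.split()[-1] for l in header if l.startswith("property")]

# Ordinary geometry attributes vs. splat-specific ones (spherical harmonic
# color coefficients, opacity, per-axis scale, rotation quaternion).
basic = {"x", "y", "z", "nx", "ny", "nz", "red", "green", "blue"}
print("splat-specific properties:", [p for p in props if p not in basic])
```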
    0 Comments 0 Shares 122 Views
  • BEFORESANDAFTERS.COM
    Shooting on an LED volume, on film
    Magnopus details its particular virtual production approaches on Fallout, which included capturing the LED wall scenes on 35mm film. An excerpt from the print mag.
The Prime Video post-apocalyptic series Fallout was shot on 35mm film with anamorphic lenses. While that's a format not at all unfamiliar to the show's executive producers Jonathan (Jonah) Nolan and Lisa Joy, who took the same approach on Westworld, it is a format not all that common for an episodic project that also relied heavily on shooting in an LED volume. Getting there required close collaboration with Magnopus, a studio that had been part of previous Westworld R&D efforts and some filming (on film) in an LED volume for that series' fourth season.
"That season four of Westworld was where the evolution of this tech, integrated into storytelling, really began," advises AJ Sciutto, director of virtual production at Magnopus. Sciutto oversaw the effort at Magnopus to deliver and collaborate on several virtual production services for Fallout, including the virtual art department, LED volume operations and in-camera VFX (ICVFX). Fallout's visual effects supervisor was Jay Worth and its visual effects producer was Andrea Knoll. The show's virtual production supervisors were Kathryn Brillhart and Kalan Ray, who each oversaw four episodes of the series. Magnopus ran stage operations for the first four episodes, with All of it Now handling the second set of four. More on how the film side of the production came into play below, but first the process began with the building of an LED volume in New York, where the series would be shooting.
"At the time," says Sciutto, "there was not an LED volume in New York that could have accommodated a show this size. Spearheaded by production's Margot Lulick along with our partners at Manhattan Beach Studios and Fuse Technical Group, Magnopus CEO Ben Grossmann and the Magnopus team designed an LED volume in Long Island at Gold Coast Studios that was built to meet the specifications that Jonah wanted. He likes doing walk-and-talks, he likes being in longer shots, almost oners. He likes being able to be encompassed by immersive content. And so the design of the volume was very much a horseshoe shape. It wasn't cylindrical like you see in a lot of volumes now. It was a horseshoe to allow us a big, long, flat section to do a walk and talk. The final LED wall size was 75 feet wide, 21 feet tall, and almost 100 feet long."
The assets for the LED wall, which included virtual sets for the underground vaults and post-apocalyptic Los Angeles environments, were designed to run fully real-time in 3D using Epic Games' Unreal Engine. "We used the latest and greatest versions of Unreal at the time," states Sciutto. "For the first couple of episodes of the season, this was Unreal 4.27, and then we took a few months' hiatus between the first four episodes and last four episodes, and at that point Unreal upgraded to 5.1 and there were some advantages in using 5.1. Lumen was one of them, the real-time global illumination system, which we found to be pretty essential for the needs of the set designs that we were working with. And so we upgraded engine versions to Unreal 5.1 about a week before we actually shot the scenes using it, which can be a hive-inducing moment to anyone who's worked in this industry before. Epic says we were probably the first large production to actually use 5.1 in practice, and it ended up working great for us."
Making it work for film
With the LED wall stage established and virtual art department builds underway, Magnopus still needed to solve any issues arising from shooting 35mm film on the volume. Sciutto notes that genlock was the most important factor. "You have to be able to genlock the camera so that you're in sync with the refresh of the LEDs. We had worked with Keslow Camera back on Westworld to get sync boxes that are designed for the Arricam LT and Arricam ST to read a genlock signal and actually be able to phase lock the camera. That took a couple of months of just designing the leading to trailing edge of the genlock signal for the camera to read that and get that to be in phase."
"Once we did a couple of camera tests," continues Sciutto, "we felt like we were in a good state, but then we had to do some wedge tests because the actual latency flow between Unreal to the render nodes to the Brompton processors to the screen was slightly dynamic. We had to do some wedge tests to figure out what that latency offset was so we could then dial in the camera."
The next hurdle was color workflow. "Normally," says Sciutto, "you build in a color profile for your camera, but because the HD tap on the film camera is not truly an HD tap, you are winging it. Well, you're not actually winging it. There's a lot of science behind it in terms of what you're looking at in dailies and how you're redialing the wall and how you're redialing to a digital camera. You can't really trust what you're seeing out of the HD tap. So we had a Sony Venice that was sitting on sticks right next to the film camera. We had a LUT applied to the digital camera that mimicked our film camera so that we could do some live color grading to the overall image of the wall."
Sciutto adds that a further challenge was understanding the nature of different results from the film lab in terms of dailies. "Depending on which day of the week we got the dailies processed, they might change the chemical bath on Mondays, so by the end of the week it might skew a little bit more magenta or might skew more green. We would use the digital camera footage to know we were always within a very comfortable range."
That dailies process, which saw rushes shipped from New York to Burbank for development and digitization, also impacted the pre-light on the LED wall, as Sciutto explains. "When you do a pre-light day on the film camera, you don't really know what you shot until a day and a half after. So we would do a series of pre-light shoots where we would shoot on a day, get the film developed, have a day to review and make any adjustments to our content before we did another pre-light day. That created a schedule for us that allowed set dec to get in there and do some adjustments to the scene, or our virtual art department to do lighting adjustments to the virtual content, to make sure it lined up with the physical content and be ready for any of the lighting and color settings we needed to be set up for on the actual shoot day."
Asked about the final look of the LED wall film footage, Sciutto mentions that in the finished dailies, "there was definitely a softer fall-off between your foreground actors, your mid-ground set pieces and your background virtual content. That transition was blended a little bit smoother through the grittiness of the film [compared to digital] and that helped a lot. Also, you can capture a lot more detail and range in film, which allows for more dynamic offsetting through color in a post process than you can with digital. If you go digital and then you push it or you crank it too much, it can get weird somewhat quickly. So shooting on film at least allowed us more range to dial in during the DI."
Read the full story in issue #21 of befores & afters magazine.
The post Shooting on an LED volume, on film appeared first on befores & afters.
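As a back-of-the-envelope illustration of the wedge-test idea Sciutto describes, the sketch below converts a measured end-to-end wall latency into a whole-frame delay plus a residual genlock phase offset at the project frame rate. The numbers and the exact split are hypothetical; this is not Magnopus' actual procedure.

```python
# Minimal sketch (hypothetical numbers): split a measured Unreal -> render node
# -> Brompton -> LED latency into a frame delay plus residual phase offset,
# the kind of value a wedge test would be used to dial in for the film camera.

FPS = 24.0               # film camera frame rate
FRAME_MS = 1000.0 / FPS  # ~41.67 ms per frame

def phase_offset(latency_ms: float):
    """Return (whole frames of delay, residual phase offset in degrees)."""
    whole_frames, remainder_ms = divmod(latency_ms, FRAME_MS)
    return int(whole_frames), 360.0 * remainder_ms / FRAME_MS

# e.g. a made-up 92 ms end-to-end latency measured in a wedge test
frames, degrees = phase_offset(92.0)
print(f"delay content by {frames} frames, shift camera phase by {degrees:.1f} degrees")
```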
    0 Comments 0 Shares 120 Views
  • BEFORESANDAFTERS.COM
    The making of Call of the Kings
    Call Of The Kings promotes Riyadh Season's Six Kings Slam, a one-of-its-kind tennis tournament. Here's how Electric Theatre Collective made it. The post The making of Call of the Kings appeared first on befores & afters.
    0 Comments 0 Shares 115 Views
  • BEFORESANDAFTERS.COM
    How Cinesite made the three-headed dog Fotis for Kaos
    Watch their breakdown reel here. Read Cinesite's case study, too. The post How Cinesite made the three-headed dog Fotis for Kaos appeared first on befores & afters.
    0 Comments 0 Shares 112 Views
  • BEFORESANDAFTERS.COM
    Behind the scenes of Blizzard's Diablo IV: Vessel of Hatred trailer
    Including how a photographed chicken carcass gave the team inspiration for the look inside a titan.
Here at befores & afters we regularly cover the making of visual effects and animation in film and episodic projects. But what about game trailers and cinematics? How are those kinds of storytelling pieces created, and what goes into their design and execution?
Recently, Blizzard Entertainment released a trailer for its upcoming Diablo IV Vessel of Hatred expansion, launching on October 8. In it, we get a glimpse of the fate of Neyrelle while in the clutches of Mephisto. To find out how the trailer was made, befores & afters spoke with Blizzard DFX supervisor Ashraf Ghoniem, who is part of the studio's Story and Franchise Development (SFD) team. Join us as we break down the story, art and tech of this Diablo IV trailer, including the tools and approaches utilized by Blizzard, and how a photographed chicken carcass offered some incredible reference for the trailer's gorier parts.
Getting started on the trailer
The Story and Franchise Development group works closely with the Blizzard game team on game trailers, with SFD looking to help set up the release of the Diablo IV Vessel of Hatred expansion. In working out what to feature in the trailer, Ghoniem shares that the process is similar to what might happen at an animation studio, starting with an initial script writing phase.
Board to final.
"We start with a bunch of pitches to the game team," says Ghoniem. "We work really tightly with them on what the story is they're trying to tell in their game, and we try to bring that into the cinematic version. We start getting right into a script. We get into beat boards, then we go into storyboarding. We do full storyboards, previs, and then into animation."
One early challenge for the SFD team at this stage is that if the game team moves in a different direction story-wise during game development, so too must the approach to any trailers. "If something really changes in their game in the middle of our planning, we also have to pivot," notes Ghoniem.
Crafting a character
Incredibly, the characters featured in the trailer are all hand-animated; that is, they were brought to life without the use of motion capture. In building the characters, the SFD team used two full FACS (Facial Action Coding System) scans and relied on a number of off-the-shelf tools for delivering the character work, including ZBrush, Maya, Mari, Katana and RenderMan, with aspects like cloth and hair handled in Houdini.
"Something we were pushing for on this piece is to have a ground truth that we can always reference," advises Ghoniem. "For example, we shot an HDRI of the actual location where the character was standing, and we'd put the footage of the real person there and then our CG model in the same place to make sure that we nailed the look." In other words, references of the actresses and an HDRI were captured in the same place, giving the team a ground truth to compare their assets against under that HDRI.
"We're trying to make it more plausible anatomically, which is why the scanning was very useful," adds Ghoniem.
Neyrelle model.
Ghoniem continues: "The whole point was that we didn't want to have discussions that are just feeling-based about how the characters look. We didn't want to say, 'I think the subsurface is not quite right. I think the diffuse is not doing the right things. The color feels a little off.' We would always put our CG character right next to the real photography under the same lighting conditions, since we had that parity available to us via our reference and HDRI."
A very noticeable aspect of the trailer is the attention to lighting on the characters' faces. This element began with the building of a color script by the art director, who would provide still frame draw-overs. "The hope was, when we started in lighting, that we'd have a really solid idea of where we wanted to go, even down to where the shadows hit on the face," describes Ghoniem. "We wanted to use lighting to bring that feeling in, of her being alone in certain moments, and then making her feel overexposed and hot for other moments. All that was very deliberate."
FX challenges: water and splitting hands
A large portion of the trailer takes place in a canoe on water, which Blizzard of course had to simulate. "Here," says Ghoniem, "we went as far as we could with the FLIP solver in Houdini to really push the quality and make sure that the sims were looking as high-res as possible. We used a PDG (procedural dependency graph) workflow to run shots out across sequences."
Artwork for Neyrelle in flayed form.
Ghoniem felt the small details in the water, such as debris, helped sell the fluid sims. "That small stuff adds up to look very complex, but not much more budget. It actually made the piece look a lot more realistic. Those are the things we chased, making sure we got the debris and the scum and all the little stuff that moves in the water as realistically as possible."
Meanwhile, Houdini simulations came into play in an even more complicated way for moments in the trailer when Neyrelle is shown in Mephisto's mind-lair, and her arms are being split.
Artwork for Vhenard's eerie transformation.
"That was an interesting one because we had a storyboard that everybody loved that was super graphic and very interesting and had a lot going on in it," recalls Ghoniem. "Originally, we were going over the top with the gore on it. We had a very realistic rendering of the blood and skin stretching, but we found that it didn't quite play. It felt weird. It was like, where does that thing come from? It was as if you were trying to think too much about the anatomy of the split."
"So," says Ghoniem, "we had this idea: 'Let's just make it thicker with a lot of black in there.' We made it like her body was being turned into ichor, which is the black goo. Then when it splits, you'd see the ichor react."
Inside a chicken
Indeed, those scarier moments in the trailer in which Neyrelle appears strung up inside a titan provided the SFD team with an opportunity to be a little more out there with the design, especially in terms of gore, scale and lighting. Finding reference for these moments was particularly challenging, relates Ghoniem, who says that the art director ultimately found a compelling resource to base the look on: the inside of a chicken.
Mephisto chamber artwork.
"What he found was some great reference of a chicken opened up. Somebody had taken a bunch of pictures of a chicken carcass with different lights. You could see how the subsurface reacted and how different parts of the chicken reacted to light. It was beautiful. That became one of our big references and really helped, because we were struggling for a little while and that helped us get over the edge. It helped bring everything into reality, at least a little bit."
Director: Doug Gregory
Art director: Anthony Eftekhari
DFX supervisor: Ashraf Ghoniem
Cinematic producer: Alex Keller
Cinematic editor: Adam Rickabus
The post Behind the scenes of Blizzard's Diablo IV: Vessel of Hatred trailer appeared first on befores & afters.
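To make the "ground truth" comparison Ghoniem describes a little more concrete: the sketch below is a minimal, objective side-by-side check of a CG render against the photographed reference lit by the same HDRI, rather than a purely "feeling-based" review of subsurface, diffuse or color. This is not Blizzard's pipeline; the file names are placeholders, and a real comparison would typically use linear EXR renders rather than display-referred PNGs.

```python
# Minimal sketch: compare average color of a CG render vs. the on-location
# reference photo shot under the same HDRI. File names are placeholders.
import numpy as np
from PIL import Image

def mean_rgb(path: str) -> np.ndarray:
    """Average RGB of an image, ignoring any alpha channel."""
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64)
    return img.reshape(-1, 3).mean(axis=0)

ref = mean_rgb("reference_plate.png")  # actress photographed on location
cg = mean_rgb("cg_render.png")         # character rendered under the shot HDRI

ratio = cg / ref
print("per-channel CG/reference ratio:", np.round(ratio, 3))
# A ratio far from 1.0 in a single channel points at a diffuse/subsurface tint
# problem rather than a lighting mismatch, since both images share the HDRI.
```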
    0 Comments 0 Shares 145 Views
More Stories