A deep dive into the filmmaking of Here with Kevin Baillie
The film Here takes place in a single living room, with a static camera, but the film is anything but simple. Faithful to the original graphic novel by Richard McGuire on which it is based, it stars Tom Hanks and Robin Wright in a tale of love, loss, and life, along with Paul Bettany and Kelly Reilly. Robert Zemeckis directed the film, the cinematography was by Don Burgess, and every shot in the film is a VFX shot. On the fxpodcast, VFX supervisor and second unit director Kevin Baillie discusses the complex challenges of filming, editing, and particularly de-aging the well-known cast members to play their characters throughout their adult lifespans.

Robert Zemeckis directing the film.

A monitor showing the identity detection that went into making sure that each actor's younger real-time likeness was swapped onto them, and only them.

De-Aging

Given the quantity and emotional nature of the performances, and the vast range of years involved, it would have been impossible to use traditional CGI methods and equally too hard to do with traditional makeup. The creative team decided that AI had just advanced enough to serve as a VFX tool, and its use was crucial to getting the film greenlit. Baillie invited Metaphysic to do a screen test for the project in 2022, recreating a young Tom Hanks, reminiscent of his appearance in Big, while maintaining the emotional integrity of his contemporary performance. A team of artists used custom neural networks to test de-aging Tom Hanks to his 20s. That gave the studio and the filmmaking team confidence that the film could be made. Interestingly, as Baillie discusses in the fxpodcast, body doubles were also tested but did not work nearly as well as the original actors.

Tests of face swapping by Metaphysic. Early test of methods for de-aging Tom based on various training datasets:
https://www.fxguide.com/wp-content/uploads/2024/11/tomTest_preproduction_ageEvolutionOptions.mp4
Neural render output test clip:
https://www.fxguide.com/wp-content/uploads/2024/11/tomTest_preproduction_WIP.mp4
Final comp test clip (the result of the test for de-aging Tom that helped green-light the film):
https://www.fxguide.com/wp-content/uploads/2024/11/tomTest_preproduction_Final.mp4

While the neural network models generated remarkably photoreal results, they still required skilled compositing to match, especially on dramatic head turns. Metaphysic artists enhanced the AI output to hold up to the film's cinematic 4K standards. Metaphysic also developed new tools for actor eyeline control and other key crafting techniques. Additionally, multiple models were trained for each actor to meet the diverse needs of the film: Hanks is portrayed at five different ages, Wright at four ages, and Bettany and Reilly at two ages each. Achieving this through traditional computer graphics techniques involving 3D modelling, rendering, and facial capture would have been impossible given the scale and quality required for Here and the budget for so much on-screen VFX. The film has over 53 minutes of complete face replacement work, done primarily by Metaphysic, led by Metaphysic VFX Supervisor Jo Plaete. Metaphysic's proprietary process involves training a neural network model on a reference input, in this case footage and images of a younger Hanks, with artist refinement of the results until the model is ready for production. From there, an actor or performer can drive the model, both live on set and in a higher quality version in post. The results are exceptional and well beyond what traditional approaches have achieved.
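Metaphysic's actual pipeline is proprietary, but the workflow described above, separate neural models per actor and per target age, an identity check so each younger likeness lands only on the right performer, and a compositing pass over the neural render, can be illustrated with a rough sketch. Every name below (the FaceCrop class, the detect_faces stub, the blend step) is a hypothetical stand-in, not Metaphysic's tooling.

from dataclasses import dataclass

import numpy as np


@dataclass(frozen=True)
class FaceCrop:
    identity: str        # e.g. "hanks", "wright", from an identity classifier
    bbox: tuple          # (x, y, w, h) in frame pixels
    pixels: np.ndarray   # cropped face region


class AgeModel:
    """Placeholder for a neural network trained on reference footage of
    one actor at one target age (e.g. Hanks in his 20s)."""

    def __init__(self, actor: str, target_age: int):
        self.actor, self.target_age = actor, target_age

    def render(self, face: np.ndarray) -> np.ndarray:
        # A real model would return a neural render of the younger face;
        # this sketch just passes the pixels through.
        return face


# One model per (actor, scripted age) pairing, as the article describes:
# Hanks at five ages, Wright at four, Bettany and Reilly at two each.
MODELS = {
    ("hanks", 20): AgeModel("hanks", 20),
    ("hanks", 35): AgeModel("hanks", 35),
    ("wright", 20): AgeModel("wright", 20),
    # ...remaining actor/age pairings
}


def detect_faces(frame: np.ndarray) -> list[FaceCrop]:
    """Stub for face detection plus identity classification."""
    return []


def composite(frame: np.ndarray, crop: FaceCrop, render: np.ndarray) -> np.ndarray:
    """Stub for the compositing pass (in production this was skilled Nuke
    work, especially on dramatic head turns)."""
    x, y, w, h = crop.bbox
    out = frame.copy()
    out[y:y + h, x:x + w] = render
    return out


def deage_frame(frame: np.ndarray, scripted_ages: dict[str, int]) -> np.ndarray:
    """Swap each detected face with the neural render for its scripted age,
    leaving anyone without a matching model untouched."""
    for crop in detect_faces(frame):
        age = scripted_ages.get(crop.identity)
        model = MODELS.get((crop.identity, age))
        if model is None:
            continue  # identity check failed or no model for this age
        frame = composite(frame, crop, model.render(crop.pixels))
    return frame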
On-set live preview: Tom de-aged as visualized live on set (right image) vs. the raw camera feed (left image).

For principal photography, the team needed a way to ensure that the age of the actors' body motion matched the scripted age of their on-screen characters. To help solve this, the team deployed a real-time face-swapping pipeline in parallel on set, with one monitor showing the raw camera feed and the other showing the actors visualized in their 20s (with about a six-frame delay). These visuals acted as a tool for the director and the actors to craft performances. As you can hear in the podcast, it also allowed much more collaboration with other departments, such as hair and makeup, and costume. The final result was a mix of multiple AI neural renders and classic Nuke compositing: a progression of the actors through their years, designed to be invisible to audiences.
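The on-set preview setup, a lightweight face swap running a handful of frames behind the live camera and feeding a second monitor alongside the raw feed, can be sketched roughly as below. The camera_frames generator, the preview_swap function, and the exact buffering are illustrative assumptions, not the production system.

from collections import deque
from collections.abc import Iterable, Iterator

import numpy as np

PREVIEW_LATENCY_FRAMES = 6  # roughly the delay the article describes


def preview_swap(frame: np.ndarray) -> np.ndarray:
    """Stand-in for the lightweight real-time de-aging model."""
    return frame


def dual_feed(camera_frames: Iterable[np.ndarray]) -> Iterator[tuple[np.ndarray, np.ndarray]]:
    """Yield (raw_frame, delayed_deaged_frame) pairs.

    The raw feed goes straight to monitor A; the de-aged feed on monitor B
    lags by PREVIEW_LATENCY_FRAMES while the swap is computed.
    """
    pending: deque[np.ndarray] = deque()
    for frame in camera_frames:
        pending.append(preview_swap(frame))
        if len(pending) > PREVIEW_LATENCY_FRAMES:
            yield frame, pending.popleft()
        else:
            yield frame, frame  # not enough history yet: show raw on both


if __name__ == "__main__":
    frames = (np.zeros((1080, 1920, 3), dtype=np.uint8) for _ in range(10))
    for raw, deaged in dual_feed(frames):
        pass  # in production these fed the two on-set monitors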
Robin with old-age makeup, compared with synthesized images of her at her older age, which were used to improve the makeup using similar methods to the de-aging done in the rest of the film.

In addition to de-aging, similar approaches were used to improve the elaborate old-age prosthetics worn by Robin Wright at the end of the film. This allowed enhanced skin translucency, fine wrinkles, and other subtle detail. Old-age makeup is extremely difficult and often characterised as the hardest special effects makeup to attempt; Metaphysic did an exceptional job combining practical makeup with digital makeup to produce a photorealistic result. In addition to the visuals, Respeecher and Skywalker Sound also de-aged the actors' voices, as Baillie discusses in the fxpodcast.

Three Sets

The filming was done primarily on three sets. There were two identical copies of the room, allowing one to be filmed while the other was being dressed for the correct era. Additionally, exterior scenes from before the house was built were filmed on a separate third soundstage.

Graphic Panels

Graphic panels serve as a bridge across millions of years from one notionally static perspective. The graphic panels that transitioned between eras were deceptively tricky, with multiple scenes often playing on-screen simultaneously. As Baillie explains on the podcast, the team had to reinvent editorial count sheets and use a special in-house comp team working in After Effects as part of the editorial process.

LED Wall

An LED wall with content from Unreal Engine was used outside the primary window. As some backgrounds needed to be replaced, the team also used the static camera to shoot helpful motion-control-style matte passes (the "disco passes").

The Disco Passes

For the background imagery, Baillie knew it would take a huge amount of effort to add the fine detail needed directly in Unreal Engine. He liked the UE output but wanted far more fine detail for the 4K master. Once the environment artists had made their key creative choices, one of the boutique studios and the small in-house team used an AI-powered tool called Magnific to up-res the images. Magnific was built by Javi Lopez (@javilopen) and Emilio Nicolas (@emailnicolas), two indie entrepreneurs, and it uses AI to infer additional detail. The advanced AI upscaler and enhancer effectively reimagines much of the detail in the image, guided by a prompt and parameters.

Before (left) and after (right) Magnific enhancement.

Magnific allowed for an immense amount of high-frequency detail that would have been very time-consuming to add traditionally.
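Magnific itself is a commercial web tool, so the sketch below does not reproduce its interface; it only illustrates the idea of running each approved environment render through a prompt-guided, generative upscale before it heads to compositing. The enhance() function, its parameters, the prompt text, and the folder layout are all placeholders.

from pathlib import Path

# Placeholder paths and settings for a batch up-res pass on background plates.
RENDER_DIR = Path("renders/unreal_output")
ENHANCED_DIR = Path("renders/enhanced_4k")

PROMPT = "weathered clapboard houses, autumn foliage, overcast light"  # illustrative only
SETTINGS = {"scale": 2, "detail_strength": 0.6, "stay_true_to_source": 0.8}


def enhance(image_path: Path, prompt: str, settings: dict) -> bytes:
    """Stub for a prompt-guided AI upscaler; returns enhanced image bytes.
    In this sketch it simply passes the source image through."""
    return image_path.read_bytes()


def main() -> None:
    ENHANCED_DIR.mkdir(parents=True, exist_ok=True)
    for plate in sorted(RENDER_DIR.glob("*.png")):
        out = ENHANCED_DIR / plate.name
        out.write_bytes(enhance(plate, PROMPT, SETTINGS))
        print(f"enhanced {plate.name} -> {out}")


if __name__ == "__main__":
    main()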
Here has not done exceptionally well at the box office (and, as you will hear in the next fxguide VFXShow podcast, not everyone liked the film), but there is no doubt that the craft of filmmaking and the technological advances are dramatic. Regardless of any plot criticisms, the film stands as a testament to technical excellence and innovation in the field. Notably, the production respected data provenance in its use of AI. Rather than replacing VFX artists, AI was used to complement their skills, empowering an on-set and post-production team to bring the director's vision to life. While advances in AI can be concerning, in the hands of dedicated filmmakers, these tools offer new dimensions in storytelling, expanding what's creatively possible.