


VES AWARD WINNERS
www.vfxvoice.com

KINGDOM OF THE PLANET OF THE APES
The VES Award for Outstanding Visual Effects in a Photoreal Feature went to Kingdom of the Planet of the Apes. (Photos courtesy of Walt Disney Studios)

THE WILD ROBOT
Outstanding Visual Effects in an Animated Feature went to The Wild Robot, which won four VES Awards including Outstanding Animated Character in an Animated Feature (Roz), Outstanding Created Environment in an Animated Feature (The Forest) and Outstanding Effects Simulations in an Animated Feature. (Photos courtesy of DreamWorks Animation and Universal Pictures)

SHŌGUN
Outstanding Visual Effects in a Photoreal Episode went to Shōgun; Anjin, which won three VES Awards including Outstanding Effects Simulations in an Episode, Commercial, Game Cinematic or Real-Time Project (Broken to the Fist; Landslide) and Outstanding Created Environment in an Episode, Commercial or Real-Time Project (Osaka). (Photos courtesy of FX Network)

VISUAL EFFECTS ARTISTRY IN THE SPOTLIGHT
www.vfxvoice.com

Captions list all members of each Award-winning team even if some members of the team were not present or out of frame. For more Show photos and a complete list of nominees and winners of the 23rd Annual VES Awards, visit vesglobal.org. All photos by Moloshok Photography.

A Night to Remember. Nearly 1,200 guests filled The Beverly Hilton on February 11th, coming together to honor the best in VFX across 25 award categories celebrating innovation, artistry and cinematic magic.

A Warm Welcome. Nancy Ward, Executive Director of the Visual Effects Society, took the stage with a heartfelt greeting, celebrating an incredible year for the VES and the groundbreaking achievements of the visual effects community.

Kicking Off the Celebration. Kim Davidson, founder of SideFX and Chair of the Visual Effects Society, took the stage to present the first round of awards, setting the tone for an unforgettable night of VFX excellence.

On February 11, the VES hosted the 23rd Annual VES Awards, the prestigious yearly celebration that recognizes outstanding visual effects artistry and innovation in film, animation, television, commercials, video games and special venues. Industry guests gathered at The Beverly Hilton hotel to celebrate VFX talent in 25 awards categories and special honorees. Kingdom of the Planet of the Apes received the top photoreal feature award. The Wild Robot was named top animated film, winning four awards. Shōgun; Anjin was named best photoreal episode, winning three awards.

The Sklar Brothers (Randy and Jason) brought their signature wit and energy to their debut as VES Awards hosts.

Comedy duo The Sklar Brothers made their debut as VES Awards hosts. Acclaimed actor Keanu Reeves presented Golden Globe-winning actor-producer Hiroyuki Sanada with the VES Award for Creative Excellence. Chief Research Officer of Eyeline Studios Paul Debevec, VES, presented Virtual Reality/Immersive Technology Pioneer Dr. Jacquelyn Ford Morie with the Georges Méliès Award. Writer-director Michael Dougherty presented Academy Award-winning filmmaker and VFX Supervisor Takashi Yamazaki with the Visionary Award. Award presenters included Kelvin Harrison, Jr., Krys Marshall, Mary Mouser, Russell Hornsby, Tanner Buchanan, Eric Winter and Tia Carrere, and Autodesk's Senior Director of Business Strategy, Rachael Appleton, presented the VES-Autodesk Student Award.

"As we celebrate the 23rd Annual VES Awards, we're honored to shine a light on outstanding visual effects artistry and innovation," said VES Chair Kim Davidson. "The honorees and their work represent best-in-class visual effects work that engages audiences and enhances the art of storytelling. The VES Awards is the only venue that showcases and honors these outstanding global artists across a wide range of disciplines, and we are extremely proud of all our winners and nominees."

Kingdom of the Planet of the Apes won the Award for Outstanding Visual Effects in a Photoreal Feature, led by the team of Erik Winquist, Julia Neighly, Paul Story, Danielle Immerman and Rodney Burke.

The Award for Outstanding Visual Effects in an Animated Feature went to The Wild Robot and the team of Chris Sanders, Jeff Hermann, Jeff Budsberg and Jakob Hjort Jensen.

The Award for Outstanding Visual Effects in a Photoreal Episode went to Shōgun; Anjin and the team of Michael Cliett, Melody Mead, Philip Engström, Ed Bruce and Cameron Waldbauer.

Cobra Kai stars Tanner Buchanan and Mary Mouser brought energy and charm to the stage, acting as an engaging duo of presenters.

The Award for Outstanding Supporting Visual Effects in a Photoreal Episode went to The Penguin; Bliss and the team of Johnny Han, Michelle Rose, Goran Pavles, Ed Bruce and Devin Maggio.

The Award for Outstanding Supporting Visual Effects in a Photoreal Feature went to Civil War and the team of David Simpson, Michelle Rose, Freddy Salazar, Chris Zeh and J.D. Schwalm.

Lisa Cooke, former VES Chair and the first woman ever elected to the role, took the stage to present multiple VES Awards.

The Award for Outstanding Visual Effects in a Commercial went to Coca-Cola; The Heroes and the team of Greg McKneally, Antonia Vlasto, Ryan Knowles and Fabrice Fiteni.

The Award for Outstanding Character in a Photoreal Feature went to Better Man; Robbie Williams and the team of Milton Ramirez, Andrea Merlo, Seoungseok Charlie Kim and Eteuati Tema.

Actress and singer Tia Carrere took the stage to present several awards, bringing charm, energy and star power to the celebration.

The Award for Outstanding Visual Effects in a Special Venue Project went to D23; Real-Time Rocket and the team of Evan Goldberg, Alyssa Finley, Jason Breneman and Alice Taylor.

The Award for Outstanding Character in an Animated Feature went to The Wild Robot; Roz and the team of Fabio Lignini, Yukinori Inagaki, Owen Demers and Hyun Huh.

The Award for Outstanding Character in an Episode, Commercial, Game Cinematic or Real-Time Project went to Ronja the Robber's Daughter; Vildvittran the Queen Harpy and the team of Nicklas Andersson, David Allan, Gustav Åhren and Niklas Wallén.

The Award for Outstanding Environment in a Photoreal Feature was won by Dune: Part Two; The Arrakeen Basin and the team of Daniel Rhein, Daniel Anton Fernandez, Marc James Austin and Christopher Anciaume.

The Award for Outstanding Environment in an Episode, Commercial, Game Cinematic or Real-Time Project went to Shōgun; Osaka and the team of Manuel Martinez, Phil Hannigan, Keith Malone and Francesco Corvino.

Liv Hewson, star of Yellowjackets, was a captivating presenter in celebrating VFX, as she bestowed multiple awards.

The Award for Outstanding Environment in an Animated Feature was presented to The Wild Robot; The Forest and the team of John Wake, He Jung Park, Woojin Choi and Shane Glading.

The Award for Outstanding Visual Effects in a Real-Time Project went to Star Wars Outlaws and the team of Stephen Hawes, Lionel Le Dain, Benedikt Podlesnigg and Andi-Bogdan Draghici. Presenter Kelvin Harrison, Jr. accepted the Award.

The Award for Outstanding Effects Simulations in a Photoreal Feature went to Dune: Part Two; Atomic Explosions and Wormriding and the team of Nicholas Papworth, Sandy la Tourelle, Lisa Nolan and Christopher Phillips.

Krys Marshall, known for her role in For All Mankind, lit up the stage with her charisma and grace as she presented multiple awards.

The Award for Outstanding CG Cinematography went to Dune: Part Two; Arrakis and the team of Greig Fraser, Xin Steve Guo, Sandra Murta and Ben Wiggs.

The Award for Outstanding Model in a Photoreal or Animated Project went to Alien: Romulus; Renaissance Space Station and the team of Waldemar Bartkowiak, Trevor Wide, Matt Middleton and Ben Shearman.

The Award for Outstanding Effects Simulations in an Animated Feature went to The Wild Robot and the team of Derek Cheung, Michael Losure, David Chow and Nyoung Kim.

The Award for Outstanding Effects Simulations in an Episode, Commercial, Game Cinematic or Real-Time Project was won by Shōgun; Broken to the Fist; Landslide and the team of Dominic Tiedeken, Heinrich Löwe, Charles Guerton and Timmy Lundin.

The Award for Outstanding Compositing & Lighting in a Feature went to Dune: Part Two; Wormriding, Giedi Prime and the Final Battle and the team of Christopher Rickard, Francesco Dell'Anna, Paul Chapman and Ryan Wing.

Actor Russell Hornsby, featured in The Woman in the Yard, showcased his charisma and love for VFX as a standout presenter.

The Award for Outstanding Special (Practical) Effects in a Photoreal Project went to The Penguin; Safe Guns and the team of Devin Maggio, Johnny Han, Cory Candrilli and Alexandre Prodhomme.

The Award for Outstanding Compositing & Lighting in an Episode was presented to The Penguin; After Hours and the team of Jonas Stuckenbrock, Karen Cheng, Eugene Bondar and Miky Girón.

The Award for Outstanding Compositing & Lighting in a Commercial went to Coca-Cola; The Heroes and the team of Ryan Knowles, Alex Gabucci, Jack Powell and Dan Yargici.

The Emerging Technology Award went to Here; Neural Performance Toolset and the team of Jo Plaete, Oriel Frigo, Tomas Koutsky and Matteo Olivieri-Dancey.

Championing the Next Generation. Rachael Appleton, Autodesk's Senior Director of Business Strategy, took the stage to present the prestigious VES Autodesk Student Award, honoring emerging talent in the world of VFX.

Keanu Reeves honors his friend, award-winning actor-producer Hiroyuki Sanada, and presents him with the distinguished VES Award for Creative Excellence, a tribute to his extraordinary contributions to film, VFX and storytelling.

Paul Debevec, VES, Chief Research Officer at Eyeline Studios, presents Virtual Reality and Immersive Technology pioneer Dr. Jacquelyn Ford Morie with the VES Georges Méliès Award, recognizing her trailblazing contributions to immersive storytelling.

The Award for Outstanding Visual Effects in a Student Project went to Pittura (entry from ARTFX, The Schools of Digital Arts) and the team of Adam Lauriol, Titouan Lassère, Rémi Vivenza and Hellos Marre.

Golden Globe-winning actor-producer Hiroyuki Sanada, one of Japan's most celebrated actors, proudly holds his VES Award for Creative Excellence. Known for his unforgettable portrayal of Lord Toranaga in the acclaimed series Shōgun, Sanada continues to leave an indelible mark on cinema and television.

Virtual Reality/Immersive Technology Pioneer Dr. Jacquelyn Ford Morie was honored with the VES's prestigious Georges Méliès Award.

Writer-director Michael Dougherty presented Academy Award-winning filmmaker and VFX Supervisor Takashi Yamazaki with the VES Visionary Award.

Hosts The Sklar Brothers share a laugh backstage with award presenter Tia Carrere.

Takashi Yamazaki, recipient of the Visionary Award and one of Japan's leading filmmakers and VFX supervisors, earned the 2024 Academy Award for Best Visual Effects for his groundbreaking work on Godzilla Minus One.

The celebrated group of Keanu Reeves, VES Executive Director Nancy Ward, Takashi Yamazaki, Hiroyuki Sanada and VES Chair Kim Davidson share a star-studded backstage moment, radiating talent and camaraderie.

The dedicated volunteers of the VES Awards Committee (Prashant Agrawal, Stephen Chiu, Olun Riley, Michael Ramirez, Kathryn Brillhart, Dave Gouge, Sarah McGrail, Den Serras [Chair], Lopsie Schwartz [Co-Chair], Scott Kilburn [Co-Chair], Eric Greenlief, Dylen Velasquez, Rob Blau, Pramita Mukherjee, Sarah McGee and Brad Simonsen) turned their hard work into a spectacular evening celebrating the VES and the best in VFX.

DR. JACQUELYN FORD MORIE: PIONEERING IMMERSIVE TECHNOLOGY
www.vfxvoice.com
By NAOMI GOLDMAN

All photos by Moloshok Photography.

Paul Debevec, VES, Chief Research Officer of Eyeline Studios, presents Dr. Jacquelyn Ford Morie with the Georges Méliès Award.

Dr. Jacquelyn Ford Morie is a luminary in the realm of virtual reality and a defining voice of immersive technology. Standing at the intersection of art, science and technology, she has been shaping the future of virtual experiences worldwide since the 1980s, and remains dedicated to mentoring the next generation of VR innovators while forging ahead with new paradigms in immersive media.

For her immense contributions to the computer animation and visual effects industries, by way of artistry, invention and groundbreaking work, the Society honored Dr. Morie with the Georges Méliès Award at the 23rd Annual VES Awards. "Dr. Jacki Morie redefined the possibilities of immersive media, infusing them with profound emotional and educational resonance," said VES Chair Kim Davidson. "Her visionary leadership has shaped pivotal projects across health, space exploration, art and education, underscoring her as a transformative innovator and thought leader."

Chief Research Officer of Eyeline Studios Paul Debevec, VES, gave this heartfelt introduction to his longtime friend: "I had the privilege of working with Jacki for over a decade at USC's Institute for Creative Technologies. We were two of the first research leaders hired in 2000, and I was lucky to have a colleague who helped set the tone that this institute would not just be about developing technology, but also about exploring its creative possibilities. Jacki has developed computer animation training programs that have educated a generation of digital artists. But perhaps her most impactful contributions have been pioneering work in VR experiences, creating new forms of multisensory feedback, like helping NASA astronauts combat isolation on long-duration space missions by keeping them emotionally connected to Earth. It is a theme of Jacki's work to develop technology and experiences to make life better for others."

In accepting her award, Morie remarked, "To say I am honored to receive this award is hardly representative of what I am feeling. It is not everyone who gets to be at the birth of a new medium, and yet here tonight we celebrate both film and computer graphics and effects. I have spent my entire career trying to make this new medium of VR more meaningful, more emotional, more purposeful. I'd like to think that my work also honors those who are trying to follow the desire lines of where we want to take this medium. I am glad to have started this journey and look forward to many more artists taking it up and making it truly something spectacular! My sincere thanks to the VES for recognizing me and the birth of this new art form."

Dr. Morie with the Georges Méliès Award.

Dr. Morie and Rob Smith.

Paul Debevec, VES, Dr. Morie, VES Executive Director Nancy Ward and VES Chair Kim Davidson.

TAKASHI YAMAZAKI: REIGNITING THE MOVIE MONSTER GENRE
www.vfxvoice.com
By NAOMI GOLDMAN

All photos by Moloshok Photography.

Writer-director Michael Dougherty presents Takashi Yamazaki with the VES Visionary Award.

Takashi Yamazaki with the VES Visionary Award.

Yamazaki and Hiroyuki Sanada, recipient of the VES Award for Creative Excellence.

Takashi Yamazaki is a renowned talent in Japanese cinema who accomplished a significant feat when he became only the second director to win an Academy Award for Best Visual Effects, for Godzilla Minus One, and in the process reinvigorated a legendary kaiju franchise. As a filmmaker and visual effects supervisor, he is regarded as one of Japan's leading film directors. Yamazaki is set to make his Hollywood debut with Grandgear for Bad Robot and Sony Pictures, and recently announced that he is working on the screenplay and storyboards for the much-anticipated next Godzilla movie.

For his consummate artistry, expansive storytelling and profound ability to use visual effects to bring his unique visions to life, the Society honored Takashi Yamazaki with the VES Visionary Award at the 23rd Annual VES Awards. "Takashi has been at the forefront in using visual effects to tell remarkable stories that transfix audiences and create unforgettable cinematic experiences," said VES Chair Kim Davidson. "As a creative force who has made an indelible mark in the world of filmed entertainment, we are honored to award him with the prestigious VES Visionary Award."

Michael Dougherty, director of Godzilla: King of the Monsters, gave this tribute in presenting the award to his colleague in the kaiju genre: "Takashi is a filmmaker whose work is both humbling and inspiring. He was so moved by Star Wars and Close Encounters that he started his career building miniatures and working tirelessly as a visual effects supervisor before directing his own films. Takashi's work pushes the boundaries of visual effects, blending technical innovation and compelling storytelling to create immersive and iconic films. Godzilla Minus One resurrected the king of the monsters. Yes, Godzilla has a soul, and Takashi captured it in such a way that moved the world, earning Japan its first Academy Award for Visual Effects."

In accepting his award, Yamazaki remarked, "I'm truly surprised and overjoyed to receive such a wonderful award. I started this career with a dream of working with people around the world like a pioneer in visual effects; however, in Japan, there were hardly any opportunities for this type of work. I kept telling myself that as long as I was born in Japan, bringing spaceships, robots and kaiju to the screen was already a dream come true. But then Godzilla brought me to this incredible place. Thank you, Godzilla! And I believe many of you can relate when I say I want to praise my young self for choosing to pursue this career. Thank you for this great honor."

VES Chair Kim Davidson, VES Executive Director Nancy Ward, Michael Dougherty and Takashi Yamazaki.

VFX SCHOOLS ADAPT TO CAPTURE INDUSTRY SHIFT TO VP AND AI
www.vfxvoice.com
By CHRIS McGOWAN

The Martin Scorsese Virtual Production Center at NYU's Tisch School of the Arts. (Image courtesy of Tisch School of the Arts)

"As far as VFX education, we are constantly seeing new pieces of software and technology being implemented into the pipeline. That is something we are always grappling with when it comes to learning," says Professor Flip Phillips of The School of Film and Animation at Rochester Institute of Technology (RIT). Each VFX and animation school explores the implementation of new tech, such as virtual production and AI, differently.

"The film and media industry is experiencing a significant shift toward LED stages and virtual production technologies, fundamentally changing how stories are told and content is created," comments Patrick Alexander, Department Head at Ringling College Film Program. "Real-time visualization capabilities, combined with the seamless integration of physical and digital elements, provide creators with unprecedented creative control and production efficiency. At Ringling College, we've recognized this industry transformation by installing LED walls this semester and are actively developing a series of virtual production courses. From our campus in Sarasota, Florida, we can now create entire worlds and environments once thought impossible to achieve, enabling students to realize their creative visions in real-time. This curriculum development paired with our new technology will prepare students for the rapidly evolving production landscape, where virtual production technologies enhance creative possibilities while maintaining the fundamental principles of cinematic craft."

The Savannah College of Art and Design has two full-size LED volume stages. (Image courtesy of SCAD)

"We have a soundstage in the MAGIC Center that houses a 32 x 16 LED wall, and the center provides technical support for motion capture, camera tracking, virtual art department and real-time in-camera visual effects. Having the opportunity to see and work with a virtual production stage is a great asset for our graduates."
Professor Flip Phillips, The School of Film and Animation, Rochester Institute of Technology

"RIT has worked to ensure our students have experience working in virtual production before they leave our campus," Phillips states. "We have a soundstage in the MAGIC Center that houses a 32 x 16 LED wall, and the center provides technical support for motion capture, camera tracking, virtual art department and real-time in-camera visual effects. Having the opportunity to see and work with a virtual production stage is a great asset for our graduates."

To meet the growing emphasis on LED stages and virtual production, Vancouver Film School is launching a VP course alongside Pixomondo in 2025 "to meet the industry needs in this area," says Colin Giles, Head of The School for Animation & VFX at Vancouver Film School. The 12-week certificate program will teach the fundamentals of content creation for virtual production in VFX.

NYU's Tisch School of the Arts has opened the Martin Scorsese Virtual Production Center, made possible by a major donation from the Hobson/Lucas Family Foundation of Mellody Hobson, Co-CEO of Ariel Investments, and filmmaker George Lucas. The facility features two 3,500-square-foot double-height, column-free virtual production stages, with motion capture and VP technology outfitted by Vicon and Lux Machina, and two 1,800-square-foot television studios and state-of-the-art broadcast and control rooms.

Savannah College of Art and Design has two full-size LED volume stages where students can get their hands on Mandalorian-style production techniques, as well as classes in virtual production, photogrammetry and real-time lighting, according to Gray Marshall, Chair of Visual Effects at SCAD. "We have even launched a multidisciplinary minor in virtual production, bringing visual effects, production design and film students together to create complete and inventive in-camera VFX projects."

Working at The School of Visual Arts in New York City. (Image courtesy of SVA)

A VR session at The School of Visual Arts in New York City. (Image courtesy of SVA)

The amount of class time devoted to AI is also rapidly growing. "Educational institutions have a unique opportunity to learn from industry standards and histories while pushing the boundaries through emerging technologies," notes Jimmy Calhoun, Chair of BFA 3D Animation and Visual Effects at The School of Visual Arts in New York City. "Our students understand this responsibility. They're not only exploring the potential of AI but also reflecting on its impact on their rights as artists, the future of their mentors' jobs and the environment."

SCAD's Marshall comments, "AI is the most exciting trend in VFX since the advent of the computer itself. AI has already found itself ingrained in many aspects of our day-to-day tools and will continue to do so. It also creates new opportunities to rapidly try ideas, modify them and get stimulated in new directions, but it is still all under your control. Yes, there are some challenges to be faced, both regarding IP and resource utilization, but those can be worked out. I am not one of those who feels we'll lose jobs."

Marshall continues, "I watched as computers displaced some older-style workers, only for a whole new style of artist to emerge in greater numbers, driving greater demands for their services. Computers have always been good at replacing repetitive jobs, and I don't think losing that class of jobs will be a loss. Since the basic premise of AI-generated images is to aggregate toward the middle ground, if you're concerned it will take your creative job, I wouldn't be. If you are truly creative, then AI isn't an exit ramp, it's a launch ramp."

Virtual production at the Savannah College of Art and Design. (Image courtesy of SCAD)

"Currently, I think there are two groups of educators that I am seeing when it comes to AI," says RIT's Phillips. "There is one group that is hesitant to adopt AI into their curriculum due to the lack of knowledge of how it could benefit them and their students, and another group that sees the benefits of using AI to make the VFX pipeline more efficient. I am part of the latter group. I have seen many use cases for AI to allow me and my students to deal with problems that are tedious or inefficient. There will be many more beneficial situations for AI in the VFX field, but we still have to be mindful of the ethical issues that arise."

"We embrace emerging technologies like AI as valuable tools to be utilized when appropriate," notes Christian Huthmacher, Professor, The Department of Motion Design, specializing in VFX at Ringling College of Art and Design. "In our Visual Effects courses, students are introduced to cutting-edge tools such as The Foundry's CopyCat and Topaz AI. However, our approach goes beyond merely teaching technical proficiency. We engage students in critical discussions about the ethical considerations, potential biases and security implications surrounding AI usage. By addressing these complex topics, we ensure our students are uniquely equipped to navigate the evolving landscape of AI in the industry."

On an XR Stage at the Savannah College of Art and Design. (Image courtesy of SCAD)

"We are embracing [AI] when it makes sense," says Ria Ambrose Benard, Director/Senior Educational Administrator at The Rookies Academy (formerly Lost Boys Studios). "Tools are being created to help artists with the mundane portion of the job and offer more time for the more difficult and rewarding shots. Much like spell check, it is a tool people use regularly, and sometimes it is great, but sometimes it is not. AI is a tool that can be utilized correctly. It's not always the best solution, and often an artist will get better results faster, but it is a tool that can be used in some circumstances to make things easier for the artists."

"One goal that I always strive for in my classrooms is allowing students to problem-solve using any and all tools available," comments RIT's Phillips. "I believe this will allow for the industry to continue to evolve and become a place where creativity, design and innovation will help tell the stories in a more unique and beautiful way. Our field is constantly evolving, and that is what makes it exciting. The unknown can be a scary place for some, but I see it as an opportunity to make great strides in VFX."

HONORING THE RETRO-FUTURE OF THE ELECTRIC STATE
www.vfxvoice.com
By TREVOR HOGG

Images courtesy of Netflix.

The reliance on visual effects surpassed what Anthony and Joe Russo had previously done in the MCU.

Even though the term retro-future is nothing new, Swedish artist and musician Simon Stålenhag has been able to create a unique vision where discarded technology is scattered across vast landscapes situated in an alternative universe. His illustrations and accompanying narratives have captured the attention of filmmakers such as Nathaniel Halpern with Tales from the Loop for Prime Video and siblings Anthony and Joe Russo with The Electric State for Netflix, which they spent seven years developing. The latter revolves around the aftermath of a robot uprising where an orphaned teenager goes on a cross-country journey to find her lost brother. The human cast consists of Millie Bobby Brown, Chris Pratt, Stanley Tucci, Giancarlo Esposito, Ke Huy Quan and Jason Alexander. At the same time, Woody Harrelson, Anthony Mackie, Brian Cox, Alan Tudyk, Hank Azaria and Colman Domingo voiced the many mechanical co-stars.

"Simon Stålenhag's original artwork electrified us," producer/director Anthony Russo remarks. "It's this strange feeling of familiarity in what he's drawn and also strangeness. It's a historical period that you can recognize whether or not you lived through it, but it's not exactly what that period was. There are elements from the 1990s." "The story is a parable," producer/director Joe Russo notes. "It's less about nostalgia than it is about the idea that technology could have developed faster and maybe deviated humanity from its main path. That was the fun part, thinking through those little elements that allowed us to create a new interpretation of the 1990s." The story taps into the present-day fears of AI usurping its human creators. "Part of what we want to do is explore the fact that you can find humanity in technology and inhumanity in humans," states Anthony Russo. "We have both of those experiences in our lives and world. Joe and I are technologists. We use technology throughout our lives to tell stories, but at the same time, we all know that technology is powerful and can cause problems, whether the nuclear bomb or social media. It's us recognizing the complex relationship that we all have with technology as human beings."

A complex sequence to execute was 20-foot Herman carrying a Volkswagen campervan containing Michelle, Keats and Cosmo on his shoulder.

Determining how the robots would be created and executed was a major topic of discussion, as they are 80% of the cast. "This is true for all of our projects when you're dealing with a fantasy world that needs to be created from whole cloth that doesn't exist," states Anthony Russo. "The methodology used to create that is always a question. It is driven by how Joe and I see the movie. What are we trying to achieve? What do we want to do with the scenes? How do we want to stage things? How do we want the actors to interact with other characters in the movie who may not be played on the screen by physical actors? These all become questions in terms of what is the right methodology to use to create the film. We were involved with our Visual Effects Supervisor, Matthew Butler, to determine the proper methodologies. Because there are so many characters in it that don't exist in reality, we had to rely upon visual effects to create a huge portion of the film."

Dennis Gassner and Richard Johnson, who consulted robotics companies, shared production designer duties. "I was in charge of making a real walking and moving Cosmo," states Production Designer Richard Johnson. "I had to go to every robotics company in the world that would listen to me. The immediate refrain was: 'The head is a deal-breaker. It throws him immediately out of balance.' If you look at all of the real robots that are popping up on the Internet today, they all have tiny heads. The other limiting factor was height. They were all in the zone of 5 feet 7 inches or less. I now know more about real robots than I ever expected to know my entire life!" The robots had to be distinct in their own right. "We felt the robots with more screen time needed to be more iconic-looking, so we looked at iconic products or services or things from the last two, three or four decades. Mr. Peanut is a very well-known brand name. We thought, 'He could be a robot.' Baseball player. Very iconic. Be a robot."

The drones went through a major design change where the heads resemble neurocasters and had a screen that projected the face of the pilot.

Inspiring the imagery was Swedish artist-musician Simon Stålenhag, who has developed a retro-tech and alternative-world visual aesthetic.

Digital Domain was responsible for the shot in which Michelle, Keats, Herman and Cosmo are captured and taken into a mall that has become a communal place for exiled robots.

Wireframe pass of Herman taking Keats (Chris Pratt) for a ride.

Animation pass of Herman taking Keats (Chris Pratt) for a ride.

Final composite pass of Herman taking Keats (Chris Pratt) for a ride.

Approximately 2,000 visual effects shots are in the final film, with Digital Domain and ILM being the main vendors, followed by Storm Studios, One of Us, Lola VFX and an in-house team.

"We're not in a world of magic," observes Visual Effects Supervisor Matthew Butler. "The idea is that these robots were often designed to make us feel comfortable about them serving us. I fought tooth and nail to put in little piston rod push-pulls and things that could justify that Cosmo could actually move. If we designed a particular ball joint or cylindrical actuator or pitch actuator, we made sure that the motion of these robots was restricted to what that could do." Artistic license was taken with the original design by Simon Stålenhag. "I wanted Cosmo's eyes to have some emotion. Rather than be just a painted pupil as in the book, we made a smoked glass lens for the pupil so that you can see behind it that there is a camera. Artistically, we let those eyes have a gratuitous green light to them. Now, you have a twinkle of an eye and can get emotion into that eye. That was another tricky thing. It was about getting enough emotion into them without breaking the silhouette of the design of the robots that we needed to adhere to; that was hard," Butler says.

Keats's (Chris Pratt) robot sidekick, Herman, comes in different sizes. "It was always the Russian doll thing where the one size smaller fits into the one size bigger," Butler remarks. "We did honor the style and personality but not at the expense of physics. Most of the movie is four-foot Herman with Anthony Mackie's [vocal] and Martin Klebba's [mocap] performances." It's also coming out of the script. "He's this sarcastic character. I love his personality, and it came through extremely well. Herman borrowed the power extension cable for his devices and forgot to return it. Meanwhile, all of Keats's food in the fridge has gone bad. Herman has messed up, and he's like a guilty teenager shuffling around on the couch, deliberately avoiding eye contact with Keats because he's this busted little kid. That character is amazing, and it carries through the movie well. Chris Pratt was perfect for this, and it works so well for the two of them. It's most people's favorite relationship in the movie."

Remnants from the robot rebellion are scattered throughout the landscapes.

Following the example of animated features, Lead Storyboard Artist Darrin Denlinger storyboarded and assembled the entire film into an animatic. "I had a version of the movie before we started shooting, or close to when we started shooting, that was a layout," states Jeffrey Ford, Executive Producer and Editor. "It had all the storyboards cut together in sequence with sound, music and subtitles instead of dialogue. I used that as a layout to guide us throughout production so we knew roughly how scenes would play out. Of course, things change on the day; actors re-block the scenes." The amount of footage captured surpassed Avengers: Infinity War and Avengers: Endgame. Ford explains, "This film had to be made multiple times because when you deal with animated characters carrying this much weight dramatically, those performances are created in passes. You may shoot a proxy pass where Millie interacts with Cosmo, and it's a motion capture actor. Then you may shoot a pass where she is interacting with nothing. We might go back and shoot that same performance on the mocap stage multiple times. We may end up with various iterations of those visual effects as they come in over the months. An enormous number of iterations go on, and when you do that, you generate enormous amounts of footage."

Atlanta doubled as the American Southwest. "Tumbleweeds and sagebrush, the basic things a person sees in the Southwest, do not exist in Atlanta or anywhere near there," Johnson notes. "We had to fill several trucks with all that stuff and bring it into Atlanta. Parts of Atlanta have not changed for years. Through the camera's eye, if you wanted to say that it was the 1970s or 1980s, it was easy because nobody had built modern or contemporary homes in those neighborhoods for whatever reason. The same thing happened when selecting the locations for the battles in the city. If you put in the right period of car, voilà! You're in that era." A question that had to be answered was where the exiled robots responsible for the uprising would live. "The X was in the script and is a large area in the Southwest. Where would these guys go? A country club? A department store? Football stadium? We landed on a shopping mall. It dawned on me one day that the only reason they would go to a place like this is to recharge themselves or maybe for repairs. That's why in the shopping mall, you see a lot of wires, batteries and charging stations."

Remnants from the robot rebellion are scattered throughout the landscapes.

It was imperative for believability that Herman's movements be grounded in real physics.

Approximately 2,000 visual effects shots are in the final film, with Digital Domain and ILM being the main vendors, followed by Storm Studios, One of Us, Lola VFX and an in-house team.

Lighting played a key role in seamlessly integrating live-action Michelle with CG Cosmo.

Humans become more robotic because of their addiction to the neurocaster, which can transmit ideas and feelings via a virtual network.

Along with the final battle, which is almost entirely synthetic apart from Keats's interaction with the grass, a virtual meeting occurs between antagonists Ethan Skate (Stanley Tucci) and Colonel Bradbury (Giancarlo Esposito). "It's where Ethan pitches the Colonel to go into the X to get Cosmo back," Butler recalls. "Bradbury puts on the neurocaster and teleports into this virtual environment that takes place in this beautiful mountain lake scene. Skate is standing in the middle of the lake while Bradbury is on terra firma and gently steps out to have a conversation with him." Production didn't go to Norway. "It's just a couple of guys and girls from Storm Studios [located in Oslo] with some backpacks hiking out into some beautiful locations in the summer for reference. We had a shallow pool with plexiglass an inch under the surface so we could have Stanley and Giancarlo stand in the water. The only thing that we kept was the little bit of interaction of water locally right at their feet while the rest was digital water. The ripples needed to come out and propagate out into the lake as it would for real. It's absolutely stunning work."

The hardest shot to execute was when Cosmo, Herman, Keats and Michelle (Millie Bobby Brown) have been captured and are escorted into the mall. "They walk into this forecourt and finally see that the whole mall is filled with robots," Ford recalls. "That shot was incredibly hard and took us months to do. The only things in that shot are Chris Pratt, Millie Bobby Brown and an empty mall. I drew maps of each frame, and we did a whole progression. We talked about where the different robots were, what they were doing, what their day was like, where they were going next and why they were moving in a certain way. We wanted it to feel like a real city. If you were to block out extras, those people would all come up with all of their mini-stories, and they would work it out. But we didn't have that. We had to figure it all out as animators. It was fun but brutal. Digital Domain did an incredible job on it. I hope people will stop and rewind the shot because it's beautifully detailed and feels completely real."

RETRO EFFECTS IN A DIGITAL AGE
www.vfxvoice.com
By TREVOR HOGG

Denis Villeneuve converses with DP Greig Fraser while making Dune: Part Two. (Image courtesy of Warner Bros. Pictures)

In the digital age, where photorealism is achievable virtually and is getting further refined with machine learning, the visual effects industry finds itself being viewed as a double-edged sword. When used properly, visual effects are interwoven into a cinematic vision that could not be achieved otherwise; when deployed badly, they become an instrument of laziness. This perception has been accentuated by the global depository of human knowledge and ignorance known as the Internet. In the middle of all this is the question of whether there is an actual trend of filmmakers favoring practical effects, or whether it is simply a marketing ploy taking advantage of public opinion.

Having practical elements for actors to interact with is an essential part of the filmmaking process for Denis Villeneuve. (Image courtesy of Warner Bros. Pictures)

"Like with any new technology, people went a bit overboard with CGI. CGI is powerful, but it can have some limits," states director Denis Villeneuve, who has created everything from a talking fish in Maelström, a spider in a closet in Enemy and a sandworm ride in Dune: Part Two to a traffic-congested highway in Sicario. "It's all about the talents of the artists you're working with. I'm not the only one. Many filmmakers realize it's a balance, and the more you can capture in-camera, the better. A great visual effects supervisor will tell you the same. The pendulum is going more towards the center, in the right way, between what you can capture in-camera and what you can improve with CGI. If it was up to me, there would be no behind-the-scenes. I feel like you spend years trying to create magic, specifically with CGI, which is so delicate and fragile that it can quickly look silly. So much work has been done to make it look real that I'm always sad when we show behind the curtain." Villeneuve is not entirely against the idea of unmasking the illusion, as it helps to inform and inspire the legendary filmmakers of tomorrow. "When I was a kid, I read Cinefex and was excited to know how the movie had been made, but it was a specialized publication. It wasn't wide-open clips that can be seen by millions. It was something that if you were a dedicated nerd who wanted to know about it, you had to dig for the information, but now it is spread all over the place."

One of the hardest tasks for MPC was matching digital soldiers with the on-set extras in the battle sequences for Napoleon. (Image courtesy of Columbia Pictures)

Having real planes shot grounded the camerawork, which was reflected in the visual effects for Top Gun: Maverick. (Image courtesy of Paramount Pictures)

"There is this need for directors or studios to diminish the visual effects departments and put forth, 'We did it all in camera,'" notes Bryan Grill, Production VFX Supervisor for Beverly Hills Cop: Axel F, "when we all know that's not the case. You put your best foot forward to do stunts and practical effects, but there's always something in there that needs some clean-up or enhancement. It has been this juxtaposition. You've had superhero movies, which are nothing but visual effects, environments and multi-dimensions. It's overbearing. Then you have the other side, which is traditional filmmaking." Along with Eddie Murphy reprising his role as the quick-witted rogue detective named Axel Foley for the fourth time, an effort was made to recapture the 1980s roots of the franchise. "What always stuck with me about the original Beverly Hills Cop was the opening scene where the truck is barreling down, hitting car after car. One of my other favorite movies from that era was The Blues Brothers, with all of the police cars hitting each other and falling off the bridges. It was a carnage of special effects. That's what they wanted to bring into this version as well, and they damaged a lot of cars! There were at least 20 more cars that didn't make the edit that got destroyed. The filmmakers went all out to relive and show the next generation that type of movie."

Keeping open communication with the various departments was critical in achieving a coherent and consistent look for Barbie. (Image courtesy of Warner Bros.)

Allowing enough time for the various crafts, including visual effects, leads to successfully encapsulating the vision of the director, which was the case with Barbie. (Image courtesy of Warner Bros.)

Even when dealing with the artificial-looking environments found in Barbie, practical sets provide a solid foundation for the seamless integration of visual effects. (Image courtesy of Warner Bros.)

"There is a trend in the marketing of these films where audiences seem to crave an authentic experience, so they're emphasizing the practical aspects even if most of what you're watching has been digitally replaced in post," remarks Paul Franklin, Senior VFX Supervisor at DNEG. "If you think back 30 years to when Jurassic Park came out, that film was marketed on the fact that they had computer-generated dinosaurs in it for the first time. Those of us in the visual effects world who are familiar with that film know that the majority of the dinosaurs that we saw on the screen were Stan Winston's animatronics that were created practically. If that film was being released today, it would be all of this stuff about Stan Winston building these dinosaurs as animatronics." That being said, practical effects aspirations do exist. "There are a lot of filmmakers who have seen the success of Interstellar, The Dark Knight movies and recently Oppenheimer, and the way that Christopher Nolan leans into the practical aspect of what he does. They're going, 'That's an effective way to tell your story.' A lot of filmmakers aspire to that. Whether there are so many of them who can pull it off is a different thing because it turns out to be quite difficult to do that balancing act. I got lucky and worked with Chris for 10 years, and he is a genius filmmaker. I don't know if there is anybody else quite like him. Steven Spielberg in the days when he was making films such as Saving Private Ryan and Jurassic Park; he's a filmmaker who knew the value of doing things practically, which is why he would always want to work with Stan Winston. You look back at E.T. the Extra-Terrestrial, and it still holds up because they used state-of-the-art animatronics and practical effects at the time."

"For a good decade or more, there was this ability for visual effects teams to provide directors and producers with shots that were extraordinary in their ability to break some of the traditional filmmaking rules and to move away from some of the things that made cinema look the way it had for the decades before that," notes Ryan Tudhope, Production VFX Supervisor on Top Gun: Maverick. "That created a bit of a look and, just like anything, looks come and go. One thing, if you think about it, over the course of all of filmmaking, is that every single shot of every single movie has one thing in common, which is that it was shot through a camera. Then came along the digital ability to create digital cameras and shots. That freed us up for a long time to be able to do things with those cameras that had never been done before. When I think about it in terms of what I'm trying to do, it is to recognize what that camera means to the artform and to honor that by trying to design a shot that appreciates what the camera can do and should do, how it visualizes the world, and how the audience sees the film or shot or whatever the action might be through that limitation. When you don't respect that, you can get visually stunning and impressive shots, but the audience immediately knows that it's not real."

In an effort to capture the spirit of the original movie, an actual helicopter was flown for Beverly Hills Cop: Axel F. (Image courtesy of Netflix)

"We're trying to have our cake and eat it too," believes Aaron Weintraub, VFX Supervisor at MPC. "The highest compliment we can ever be paid is if people have no idea that we did anything; that's what we strive for. What we do is stagecraft. We're trying to fool the audience into thinking that something is completely real and was there in front of the camera; they recorded it, and that's what you get to see on the screen. If we have done it correctly, nobody knows." Real-life examples are the starting point. Weintraub explains, "Everything that we do is looking at photographs and film footage and trying to replicate how the light and surfaces react. We're nothing without reference of the real world, if the real world and photorealism is our goal. Technology is the means to the end. Every iteration and every step that we take with the technology is something new that the audience may not ever have seen before. Reacting to the newness is part of saying that the technology is driving it, but it's a story that we're trying to tell that we couldn't do in the past." Reality can provide a sense of spontaneity to animation. Weintraub notes, "There are mistakes and happy accidents that can happen when you shoot real stuff that you might not get otherwise. When we did Guillermo del Toro's Pinocchio, which was stop-motion animation, one of the guiding principles of the animation was to try to anticipate all of those weird little accidents that would happen if you were shooting this live-action and put those into the animation. In advance of shooting the stop-motion, the animators would shoot these little videos of their clips to see what would happen."

Actual cars were flipped and digitally augmented for Beverly Hills Cop: Axel F. (Image courtesy of Netflix)

Joseph Gordon-Levitt and Eddie Murphy prepare for a scene that takes place inside of a helicopter cockpit for Beverly Hills Cop: Axel F. (Image courtesy of Netflix)

"It's a stylistic thing. When we're talking about the marriage of practical, what's shot on set and where we come into play, either augmenting or completely replacing it in some cases, there is always this desire to maintain this visual characteristic that is inherent in practical shooting," observes Robin Hackl, Co-Founder and Visual Effects Supervisor at Image Engine. "It's being art-directed and driven by the DP, lighting director and the director himself, and they have hands on the physicality of that being on set and getting that look. It's always that apprehension, almost, of committing fully to CG and leaving it to the hands of the visual effects vendors and artists, even things that they are implementing in practical set photography, like virtual sets that give a higher degree of reality to the lighting of the characters. From the feedback that I've gotten from the people on set, the actors in particular react well to virtual production in the sense that they have something tangible to react to and see, to be part of that little world, which is sometimes hard for them to wrap their heads around. What is that emotion tied around that environment they're in? It heightens that. Of course, it's not all done that way. The Mandalorian is all over the place, from on-set photography to giant bluescreens and virtual production. The Mandalorian is a good example of aesthetic. The creators wanted to retain as much as possible the flavor and vibe of the original Star Wars films. There were a lot of optical effects done back in the early days and stop-motion that was live in-camera; for the most part they tried to shoot everything in-camera the best they could and then augment it. It's a real harkening back to that era."

Christopher Nolan has been an adamant proponent of in-camera effects, with Batman Begins being one of the most grounded superhero movies. (Image courtesy of Warner Bros.)

Practical vehicles such as the Batpod were built for The Dark Knight. (Image courtesy of Warner Bros.)

"It's more about having a good dialogue with the people you are working with and making sure that they understand how to get the best out of the tools they're using," states Glen Pratt, Production VFX Supervisor on Barbie. "If I'm blunt, it's often because bad choices are made. If you allow all the crafts that are involved in filmmaking the time that they say [they need], you get a good result, whether it be building a set or creating pyrotechnic explosions, and equally whatever aspects of visual effects you're adding into that." Open communication is important. "Greta Gerwig hadn't done visual effects before, so I sat down early with her and talked through various sets of tools that we have at our disposal. I could tell she was overwhelmed by some of those things. But it's honing it down to: that is just a step in the process of how we will eventually get to the end result. That comes with experience. The more that you work with bringing visual effects into it, the more well-versed the director becomes with the language. A lot of the time, they don't want to know that level of detail. They only want to know that you have their back and you can do this for them." Barbie Land was an artificial environment, but the same photorealistic principles applied. "We captured everything so we could recreate whether it be the actual stages themselves or miniature models, and often we embellished them further on what was there to ground it, make it feel like it belongs in that world and had a cohesive, consistent aesthetic running through it."

Visual effects are best when they reflect the filmmaking style, so a stop-motion animation feel was given to the simulations for Guillermo del Toro's Pinocchio. (Images courtesy of Netflix)

"Personally, when the conditions and the type of effect to be achieved allow me to use the practical, I jump at it," remarks Mathieu Dupuis, VFX Supervisor at Rodeo FX. "We're fortunate to have a studio at our disposal here at Rodeo FX, and I can't imagine executing some of the large-scale effects on our recent projects without the support of practical effects. I'm not just talking about blood splattering on a greenscreen or crowd duplication, which, by the way, is always highly effective. Being able to rebuild scaled-down set pieces [painted green] to capture the precise interaction of, say, glass breaking on a table, or to recreate an organic dream effect by filming floating debris in macro within an aquarium, allows us to achieve quick, cost-effective results that are both efficient and innovative. There's also the advantage of avoiding endless discussions with clients by capturing how a flag moves in the wind or how a plate shatters. There's no need to imagine or convince anyone how these elements would behave because we've captured them in real life. There's nothing more authentic than reality, right!?"

Visual effects are best when they reflect the filmmaking style, so a stop-motion animation feel was given to the simulations for Guillermo del Toro's Pinocchio. (Images courtesy of Netflix)

"It's a stylistic thing. When we're talking about the marriage of practical, what's shot on set and where we come into play, either augmenting or completely replacing it in some cases, there is always this desire to maintain this visual characteristic that is inherent in practical shooting."
Robin Hackl, Co-Founder and Visual Effects Supervisor, Image Engine

MEMOIR OF A SNAIL: INSIDE THE SHELLwww.vfxvoice.comBy TREVOR HOGGImages courtesy of Arenamedia Pty Ltd. and IFC Films.Along with being surrounded by snail memorabilia, Grace finds herself responsible for an ever-growing population of frisky guinea pigs.Making the most of the global shutdown caused by the COVID-19 pandemic, Australian filmmaker Adam Elliot mapped out what would become an Oscar-nominee favorite for Best Animated Feature and an Annecy winner. Memoir of a Snail tells the tale of Grace, a hoarder of snail memorabilia who longs to be reunited with her twin brother Gilbert while experiencing the trials and tribulations of becoming an adult. Previously, Elliot made his feature film directorial debut in 2009 with Mary and Max, but interestingly, the methodology and technology between the two productions have not altered much.The technology has changed, states Adam Elliot, Producer, Director, Production Designer and Writer. Dragonframe is a wonderful tool. The animators love it because they can do all sorts of tricks, and its got all sorts of bells and whistles. I apply many restrictions on my animators and try to get them not to rely on the software too much, animate from intuition and celebrate happy accidents. We have LED lights now, so the stages arent as hot. The globes dont burn out as quickly. Sound editing and design are far more digitized. The sound libraries are bigger. Cameras have gotten higher megapixels, so the resolution is much higher. Having said all that, theyre just tools. We still try to animate in a traditional manner. Everything in front of the camera is done traditionally. We dont do any CGI additions. However, we certainly do cleanup digitally, like removing rigs. All our special effects, like fire, rain and water, are all handmade. The fire is yellow cellophane. We celebrate the old, but we certainly embrace the new.Pinky helps Grace to break out of her shell and experience life.We can say to the audience that you can hold in your hand every prop and character you have just seen. However, there was a lot of post-production to make it look that way! There were roughly 1,500 storyboard panels. Then I drew and designed all of the characters [200], most of the props [5,000 to 7,000] and sets [200]. I drew by hand because I was in lockdown during COVID-19 and had a lot of time!Adam Elliot, Producer, Director, Production Designer and WriterVisual effects have come a long way, allowing for more creative freedom. In the old days, we would use fishing line to have things airborne, and now we can have a big metal rod and the visual effects artists remove that digitally, Elliot notes. Thats about it. Its just cleanup. There is a lot of compositing. For elements like fire, we do it on a piece of glass with the camera looking down, often with a greenscreen background, then we composite that in post. We had 600 visual effects shots in the film, and a lot of money spent on the visual effects, but it was mostly basic stuff. One of the more complex effects was the burning church. Those flames are recycled and layered, Elliot explains. We do one set of flames, then cut, paste and layer them. The claim is that we can say to the audience that you can hold in your hand every prop and character you have just seen. However, there was a lot of post-production to make it look that way! Every shot was storyboarded. There were roughly 1,500 storyboard panels. Then I drew and designed all of the characters [200], most of the props [5,000 to 7,000] and sets[200]. 
I drew by hand because I was in lockdown during COVID-19 and had a lot of time!Director Adam Elliot works on the adult Grace puppet surrounded by her character designs.The most dynamic character is Pinky, who required a selection of heads.Grace doubles up on Chiko Rolls, inspired by the Chinese spring roll and first sold in Australia in 1951 as the Chicken Roll despite not actually containing chicken.The puppets had to reflect the stages and ages the characters go through, such as Grace having a scar caused by having her cleft palate surgically removed.1,600 storyboards were created by Adam Elliot with each one representing a shot in the film.Compositing was only used where necessary. Most of the skies you see in the film were on set on giant canvases, Elliot remarks. There were only one or two skies or maybe more where we did greenscreen then composited in one of the canvas skies. It is a wonderful tool that now liberates us as stop-motion animators. When I left film school in 1996, I was told I was pursuing a dying art form and that stop motion would be obliterated by CGI. The complete opposite has happened. CGI and digital tools have liberated us. You have to be careful not to get carried away. There is a hybrid look that has gone a bit far, and now with 3D printers, too. Some of these stop-motion films almost look computer-animated because theyre so slick. Were trying to celebrate the lumps and bumps, brushstrokes, and fingerprints on the clay.The design of the characters was based on what Elliot was able to accomplish in his one-room apartment. Adam started making everything out of Apoxie Sculpt, which is this material that sculpts like clay, then goes rock-hard in about an hour, explains Animation Supervisor John Lewis. We had stylized the characters around the fact that they were solid and budgeted around it as well. Creating a characters costume out of real fabric or silicone that can bend is time-consuming and expensive. Adam wanted us to have these rock-hard-solid puppets. Theyre like statues; in some sense, thats easy because it restricts what the puppet can do. Sometimes, when you dont move the head, the ear clunks into the shoulder, and you cant move it where you want it to move, or body movements are stiffer or different than how you might move it otherwise. Adam wrote walking out of the film as a stylistic and budgetary choice. If we did have characters walking full frame where you can see their feet, we would have had to have legs that could bend and pants. Instead, we cropped it with the camera, removed the legs and put a little up-and-down rig underneath, which was a cheap microscope stand. Its about the same size as the puppets legs. We put that on the table and slide the character along; thats how we get dynamic movement.Reflections of the puppet rig appeared in the metal tray containing the pineapple chunks and had to be painted out.Voiceover narrative figures prominently in the storytelling. Every shot is cut into the animatic so it has the piece of narration that goes with it as well as the music ideally. Some of that [narration] is scratch and some of it is filled with the real stuff as we go, Lewis remarks. You will listen to the narration for every shot that is only five to 10 seconds long. You will time your animation out and express your character differently, depending on what the narrator is saying. They play into each other. A rhythm gets established with the animation. Each character will start to lend itself to different expressions and movements. 
The actor's voice is a huge guide for that. As an animator, you find that some things are working and keep doing them. Some things don't work, so you might cut back. Between all the animators, we talk and look at each other's work. Slowly, a language of that character will develop. Because Grace doesn't have confidence, she's often got her hands tucked up to her chest, has her arms in and holds herself tightly. I talked to the animators about how much tension was in the character. Gilbert is bolder, so he's more likely to be striking big poses with his arms out. Adam has nuanced rules about how he likes each character to look and move when it comes to the blinks and shapes of their eyelids. Certain things make characters look like Adam Elliot characters.

A surreal moment is when Gilbert and Grace appear inside their mother's womb, which serves to emphasize the strong bond that exists between the siblings.

Pinky also required a wide variety of mouths.

One of the nine stages had an under-camera rig where a camera could swing down onto a glass tabletop. Every time an animator had some downtime, they would go onto that stage and do a bit of effects work, explains Production Manager and VFX Supervisor Braiden Asciak. The fire was orange and yellow cellophane, and the smoke was cottonwood. They would do these effects, sometimes to the shot. They'd load the shot we had completed into Dragonframe as a reference and animate those effects elements on top of the animation they had already animated prior. It meant that quite a few of the elements we shot were specific to the shot. However, we could reuse certain elements from those other shots. The visual effects team consisted of Asciak, Visual Effects Editor Belinda Fithie, Gemila Iezzi [Post Producer at Soundfirm] and four to five digital artists at Soundfirm. They were simple 2D visual effects, so we didn't need Maya or Houdini. The visual effects artists mostly used Nuke or DaVinci Resolve Fusion [a node-based workflow built into Resolve with 2D and 3D tools].

Pinky does her burlesque dance routine.

Rather than use fabric, all the costumes were sculpted and painted onto the puppets.

The church burning-down sequence was challenging but rewarding. Once we shot all the plates we needed throughout that sequence, we put it into the timeline. Everything was worked out together, Asciak states. How was Gilbert going to move around that church? Then we slowly animated the effects elements. Once we finished production, John Lewis spent several weeks doing as many effects elements as possible. We built an extensive effects library of cellophane fire, and he did two big shots, one of Gilbert in the church, and did some compositing in Adobe After Effects himself. Then, John did an exterior shot of the church on fire with the huge fire bursting out of the roof. When John left, it was up to me to layer out the rest of that sequence and ensure continuity. I was doing everything in DaVinci Resolve. Some shots have 15 layers of fire, smoke and embers to try to get it right to a level that Adam was happy with, and it looked compositionally right.

Creating a sense of peril was critical for the moment Gilbert attempted to rescue a snail in the middle of a busy road. We had a number of cars animated on greenscreen so we could composite those in later, Asciak reveals. But the angle of those plates wasn't matching the shots. Adam wanted the cars going in from the left and out from the right to speed by.
You had to match that up with the performance to avoid covering the key moments of a laugh or yell. I spent a good two to three weeks trying to time all those cars and each of those shots. Also, we didn't have any sound at that point, so the sound was done to the visual effects work. I felt like we needed to amp up the intensity of that sequence and bolster it with a lot of cars. One way of doing that was [to transition between shots] using side swipes with the light poles, or there was a van that drove by. That was a good way to improve the cutting in the sequence to make it flow and feel chaotic.

Preparing for the scene where Grace and Ken get married with Pinky in attendance.

Close attention had to be paid to ensure all the rigs were painted out. If there was an element of the shot before the rig appeared, then we wouldn't need a clean plate, Asciak states. But for something like Pinky tap-dancing on a table doing the burlesque, we had to go frame-by-frame and paint out that rig from the mirror. In some cases, it's a little sliver of a rig. When she's in the pineapple suit and holding out the plate of pineapple chunks, there is a rig that is like a straight line behind her. If you weren't paying attention, you probably wouldn't have noticed, but there was a sliver of a rig, and even the visual effects artists asked, Where is the rig?

Exploring the facial expressions of Grace.

Exploring the facial expressions of Grace and Gilbert's father Percy.

Exploring the facial expressions of Pinky with her oversized eyeglasses.

Grace's bedroom had the most scenes, which were shot chronologically. That set was there for 16 weeks, over half of the shoot period. It goes from being empty to being full of [snail memorabilia], then we return to it being empty. When Grace is sitting in her bed and says she is surrounded by her snail fortress and the camera pulls out, as soon as we finished that shot, we pulled everyone into the kitchen where we had the TV screening room. We watched that shot together for the first time, and it was magical. At that point, we knew we had something on our hands, as we could clearly see the work of the art, animation, lighting and camera departments.

The set that had the most scenes was Grace's bedroom.
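As a rough illustration of the element layering described above, here is a minimal Python sketch of that kind of stack: reusable 2D elements such as cellophane fire, smoke and embers merged back to front over a plate with over and screen operations. It is not the production DaVinci Resolve or Nuke setup, and the element names, values and gains are invented placeholders.

import numpy as np

def over(fg_rgba, bg_rgb):
    # Premultiplied "over" composite of an RGBA element onto an RGB background.
    fg_rgb, alpha = fg_rgba[..., :3], fg_rgba[..., 3:4]
    return fg_rgb + bg_rgb * (1.0 - alpha)

def screen(fg_rgb, bg_rgb):
    # Screen blend, handy for self-luminous elements like fire shot on black.
    return 1.0 - (1.0 - fg_rgb) * (1.0 - bg_rgb)

def composite_stack(plate, layers):
    # Apply a back-to-front stack of (element, mode, gain) over a plate;
    # gain acts as a simple per-layer exposure tweak.
    out = plate.copy()
    for element, mode, gain in layers:
        if mode == "over":
            boosted = element.copy()
            boosted[..., :3] *= gain
            out = over(boosted, out)
        elif mode == "screen":
            out = screen(np.clip(element[..., :3] * gain, 0.0, 1.0), out)
    return np.clip(out, 0.0, 1.0)

# Toy demo on synthetic 4x4 frames; real elements would be scanned cellophane
# fire, cotton-wool smoke and ember passes loaded from image files.
h, w = 4, 4
plate = np.full((h, w, 3), 0.2)
smoke = np.concatenate([np.full((h, w, 3), 0.35), np.full((h, w, 1), 0.5)], axis=-1)
fire = np.concatenate([np.full((h, w, 3), 0.8), np.full((h, w, 1), 0.0)], axis=-1)
final = composite_stack(plate, [(smoke, "over", 1.0), (fire, "screen", 1.2)])
print(final.shape, final.max())

A shot with 15 layers would simply extend the list, which is why a small library of reusable elements can cover a whole sequence.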
-
VIRTUAL PRODUCTION NOW AND GOING FORWARD
www.vfxvoice.com
By TREVOR HOGG

Preparing for a virtual production shoot of a Vertibird featured in Fallout. (Image courtesy of All of it Now)

Has virtual production revolutionized filmmaking, beginning with The Mandalorian in 2019 and accelerated by the COVID-19 pandemic a year later? The answer is no, but the methodology has become an accepted alternative to bluescreen and greenscreen. Even though technology continues to advance at a rapid pace, some things have remained the same. It's a mixed bag, states Matt Jacobs, VFX Supervisor. What's on my mind now when talking to people is building brick-and-mortar facilities. There was a project constructing a backlot in France, and I asked, Did you set up an LED volume, because you've sunk a lot of money into this? And they're like, No, because every time we do an LED volume, it seems that the ask is different for what the volume needs to do. Everybody comes in and says, I need it for process shots for cars. Or, I'm doing playback, and I need the volume to be this size and configuration. The ability to pop up a volume, be flexible and build the volume out to case-specific specs seems to be the way to go these days.

Companies like Magicbox offer a tractor-trailer studio setup. The pop-up trailer is an interesting thing, but you also have to look at that as a set configuration, Jacobs notes. Yes, it's mobile, but it's what the tractor trailer looks like. Do you need a volume that is a semicircle? Do you need the ceiling, or is that lighting? How are you going to work a volume with a known configuration of width and height? Is it squared-off walls or a circular volume? Does it have ceiling panels that you need for reflections in a car? How are those ceiling panels configured? I was on a Netflix shoot, and we had this great volume at Cinecittà Studios outside of Rome. It was a cool setup and a big stage. The floor was a Lazy Susan, so it actually spun around. The ceiling was great, but because the tiles didn't line up perfectly, there were lines and seams across the car where there were no reflections. We had to bring in walls to do fill reflection on the front of the car. We had to do a lot of work to reconfigure that stage and bring in certain elements. Thankfully, they were nimble and had a lot of great pieces and solutions for us to work with. But it goes back to the point that the stage was probably too big for certain things, and maybe it wasn't perfect for our car shoot.

LED walls are beneficial for rendering content for backgrounds but often fall short as a lighting instrument. (Image courtesy of Disney+ and Lucasfilm Ltd.)

Generally, people think that virtual production is synonymous with the LED volume. I think virtual production is anytime that you're using real-time technologies in conjunction with normal production, remarks Ben Lumsden, Executive Producer at Dimension Studio. The biggest single change is you can push a lot more through Unreal Engine. You've got a whole suite of tools specifically addressing LED volume methodologies. There's the Switchboard app and level snapshots that allow you to go back to a period of time when there was that particular load on the volume and understand exactly where everything was, which animation was where and what the lighting setup was. On Avatar, James Cameron would get so frustrated because everything was done using MotionBuilder.
Cameron would return to post-production after being on set, and all the creative changes he made on the day got lost in translation through the pipeline. MegaLights from Unreal Engine 5.5 is a huge step forward. Lumsden says, Beforehand, it was geometry, which was too expensive. But then Nanite came along with Unreal Engine 5, meaning geometry was no longer an issue. Our experiments with MegaLights so far suggest that lights will no longer be an issue.Limitations still exist regarding how much you can put on the LED wall in terms of computational power. (Image courtesy of Dimension Studio, DNEG and Apple TV+)Westworld Season 4 made use of virtual production technology to expand the scope of the world-building. (Image courtesy of Technicolor and HBO)Limitations still exist regarding to how much you can put on the LED wall in terms of computational power. You dont want to drive too many metahumans, for instance, but you can put loads of volumetrically-captured people and make sure that their card is pointed back to the camera or their rendered view is relative to the position of the camera, Lumsden notes. One thing that we did that was cool regarding R&D is marrying our performance-capture technology with the LED virtual production. Weve been doing some tests where we can actually drive metahumans on the wall as digital extras being live-puppeteered on a mocap stage and interacting with the real talent; thats a new technology or workflow that we may well bring into production going forward. Sound remains problematic. There is a real issue with capturing audio because youve got this big echo chamber. There are some fantastic new LED panels coming out all of the time. But the great new panels are always expensive. Over time, that will change, as with all of these things. There are also some new and interesting technologies of people doing projector-based methodologies, which are intriguing because the price point is more applicable to indie filmmakers.The most significant single change is that Unreal Engine has a whole suite of tools specifically addressing LED volume methodologies. From Those About to Die.(Image courtesy of Dimension Studio)Virtual production is anytime real-time technologies are used in conjunction with normal production, as in Here. (Image courtesy of Dimension Studio, DNEG and TriStar Pictures)Astra Production Group has forged a partnership with Magicbox, which has developed a mobile virtual production studio setup. (Image courtesy of Magicbox)Interest rates have made productions more cost-conscious and less adventurous. The early stories of the volume being a cost-saving mechanism put volume shoots at a disadvantage because producers came in expecting to see a 10x savings in cost or whatever number they had in mind, and its dramatic but not that dramatic, observes Danny Firpo, CEO & Co-Founder of All of it Now. Now, people are realizing what the volume does well, which are process shoots for vehicles or being able to create a lot of environments in a short amount of time or being able to move the environment around talent. Hardware and software have greatly improved. The expansive rate of cheap graphic cards is increasing in power and is helping to keep the dream of a real-time holodeck-style volume within arms reach. The quality of real-time graphics is increasing exponentially, and the time it takes to create those real-time environments is decreasing due to the impressive tools that have come out on the software side. 
Nanite and some of the impressive tools that have come out from Unreal Engine 5.3 and all of the way up to 5.5 are creating a much better environment for artists to create the best version of what they can possibly create now. In addition, were seeing a better understanding across the board of LED and camera providers and even lighting vendors of what types of equipment flourish in an LED volume environment as opposed to trying to take live show or film rental inventory and cramming it into the volume, which we saw in the volumes during the pandemic.Technicolor Creative Studios partnered with NantStudios to construct a virtual production stage in Los Angeles. (Image courtesy of Technicolor)One particular department head remains central in being able to understand and communicate the capabilities of the LED volume to other members of the production team. The visual effects supervisor is an ideal bridge because they already exist in this hybrid or mixed reality of 2D and 3D, real-time, physical and digital environments colliding to create the finished product, Firpo states. That type of thinking is more challenging for somebody from a different department like Art, Camera or Lighting and is only used to dealing with one physical reality in a real-world space. What we have discovered is specialists are emerging in those departments who have a real understanding of that and are willing to take an extra day and pre-light or go through a virtual scout and ultimately help explore those worlds more and use the same mentalities of what they would do in a physical scout. An effort has been made to make the virtual production process more intuitive for the various departments. Firpo notes, Were moving all of the extraneous tools and features that we deal with and making a simplified UI. For example, giving a DP doing a virtual location scout using an iPad, which is ubiquitous on set, a sense of a rigged virtual camera, which feels like operating a physical one but is essentially a digital portal into that world. Getting that buy-off and sense of translation from the physical into the digital world and vice versa is where its helped bridge that communication and culture gap.Technicolor, in cooperation with the American Society of Cinematographers, conducts an in-camera visual effects demo. (Image courtesy of Technicolor)Virtual production has not only revolutionized filmmaking, but the methodology has become an accepted alternative to bluescreen and greenscreen. (Image courtesy of Technicolor)LED walls are great for rendering content for backgrounds but often fall short as a lighting instrument. LED volumes have a limited brightness, and the light spreads out, so you cant create harsh shadows, notes Lukas Lepicovsky, Director of Virtual Production at Eyeline Studios. Theyre also not full spectrum light. LED walls are only RGB instead of RGBW Amber like you would get from an on-set light. You can maybe use the LED wall as fill light, but then you definitely want to be working with on-set lighting for the actual key light. Virtual production excels with short turnaround projects such as commercials because all the decisions are made upfront. If youre a massive visual effects project, then youre probably going to want to lean on it more for lighting capabilities, like projecting an explosion that lights up the actors face in a nice way, but then leave yourself room in visual effects to augment the background with giant building destruction. This is what we ended up doing with Black Adam. 
We made the wall be near final, or in some cases just a previs in the background that had good lighting, which had explosions and lightning elements. We used it as a lighting instrument, knowing we would replace the background afterward. It depends on the production because, in those cases, you dont always know what your final asset looks like while youre shooting a large feature production. Because its a real-time process, you have constraints of polygon budget and render time, so you cant just fill the world with all sorts of assets. You have to have strong planning when it comes to these things.The quality of real-time graphics is increasing exponentially, and the time it takes to create those real-time environments is decreasing. (Image courtesy of All of it Now)Those About to Die was shot on the LED volume stage at Cinecitt Studios in Rome. (Image courtesy of Dimension Studio, DNEG and Peacock)Interest rates have made productions more cost-conscious and less adventurous. (Image courtesy of All of it Now)Game engines have been a game-changer and are constantly improving. Where it can stand to improve still is the integration of some visual effects technology like USD and the ability to quickly share assets between departments and make layered, modifiable changes in the pipeline, Lepicovsky remarks. Also, over time, weve seen this with visual effects; things started from a rastering approach, and eventually everything turned into ray tracing. So, Im excited to see that there are also ray tracing possibilities in real-time that are coming forward both from Epic Games and Chaos Vantage, a new entrant in the virtual production market. It is still too early to judge the impact of machine learning on virtual production. Lepicovsky adds, There are machine learning tools that generate the backgrounds, but right now, they often want nice animation with all the leaves blowing and trees swaying; that is easier to do in actual game assets. Machine learning has been interesting for us in a new process called Gaussian Splatting, which is like a new version of photogrammetry based on a machine learning process. What is different from traditional photography is that you can have reflective and see-through surfaces and capture hair. Another interesting one involves a relighting process that allows you to capture actors in neutrally-lit lighting conditions, like volumetric capture, but then change the lighting afterwards using machine learning.The LED panel is excellent because its an incredibly high output, so people like to use it for the lighting, and companies like ROE Visual are adding additional colors into the diode cluster to get better skin tones, remarks Jay Spriggs, Managing Partner at Astra Production Group. But thats not going to replace a conventional lighting instrument. We know people who are researching projection in volumes because the cost to run that is much lower, and you also have additional benefits. For LEDs, the diodes light up and shoot light out, whereas, in a projection-oriented environment, they are reflective, so you have a different quality of light and mixing, which comes from that. The Light Field Lab stuff is fascinating. I dont want to even think about what the volume would cost for that! The central question is, how do you help with what is happening in the frame? From there, you reverse engineer that into what products are not just the best for whats going to happen but also the most money-efficient so that they have enough money to bring in their people. 
The most cost-effective way is projecting plate photography, as there are so many more complications with real-time tracking, says Spriggs. However, Unreal Engine is making major strides with a new grading workflow. That is going to be huge for making better pictures out of the game engine because one of the biggest things has always been: how do you do a final polish pass on what is already a good lighting engine but is not perfect?

The Mandalorian, along with the pandemic, has been credited with causing a boom in virtual production. (Image courtesy of Disney+ and Lucasfilm Ltd.)

Not everything gets treated the same way. If Greig Fraser [Cinematographer] wants to get the highest quality lighting effect for the best skin tone, but we're only doing a couple of tight shots, and he has a generous post budget, then we look at the background of the LED, Spriggs explains. We build it with the highest quality LED with the smallest pitch we can find. Don't worry about the final color that you see in the picture because the post budget will kick all of that stuff out so they can post-render and grade. All we focus on is the skin tone. If someone is trying to shoot a car commercial, they're trying to get the closest to final pixel for the reflections. You build a volume around the car that they're looking at with the smallest pitch so that you will not be able to see individual pixels on an LED wall with a ceiling. Shoot that and walk away. You wouldn't use that same configuration for the other one because the benefits wouldn't be there. Fundamentals should not be forgotten. Spriggs advises, If we focus too much on revolutionizing and democratizing or any such big-picture thoughts, we forget about what we have to do right in front of us, which is to make a damn pretty picture!

The visual effects supervisor remains the bridge in understanding and communicating the capabilities of the LED volume to the other heads of the departments. From Time Bandits. (Image courtesy of Dimension Studio, DNEG and Apple TV+)
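Earlier in this piece, Ben Lumsden describes putting volumetrically captured people on the wall and making "sure that their card is pointed back to the camera." That orientation step is simple vector math. The Python sketch below is only an illustration of that billboarding idea, not Dimension Studio's pipeline; the card and camera positions are invented, and it assumes the card is never directly above or below the camera.

import numpy as np

def look_at_camera(card_pos, camera_pos, world_up=np.array([0.0, 1.0, 0.0])):
    # Build a 3x3 orientation matrix whose third axis points from the card
    # back toward the tracked camera, so a flat "card" of captured footage
    # always faces the lens. Assumes card_pos is not directly above/below
    # camera_pos (which would make the cross product degenerate).
    forward = camera_pos - card_pos
    forward = forward / np.linalg.norm(forward)
    right = np.cross(world_up, forward)
    right = right / np.linalg.norm(right)
    up = np.cross(forward, right)
    return np.column_stack([right, up, forward])  # card local axes in world space

# Toy example: a card a few metres up the wall, camera tracked at lens height.
card = np.array([2.0, 1.8, 6.0])
camera = np.array([0.0, 1.6, 0.0])
print(look_at_camera(card, camera))

In practice the engine re-evaluates this every frame from the camera tracking data, which is what keeps pre-rendered or volumetric extras from looking like flat cutouts as the camera moves.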
-
THE RISE OF REAL-TIME VFX AND WHERE ITS GOINGwww.vfxvoice.comBy TREVOR HOGGReal-time software programs are being developed by Chaos, such as the ray tracing renderer Vantage and Arena, which does ray tracing for in-camera effects. (Images courtesy of Chaos)Virtual production could not exist without real-time rendering, customarily associated with game engines such as Unreal Engine and Unity. Still, real-time technology is also impacting workflows and pipelines constructed to produce visual effects on a daily basis. As the tool is refined to become more cinematically proficient, new challenges and opportunities have emerged for visual effects artists and production teams. My first job in the visual effects industry was working on Star Wars: Episode 1 The Phantom Menace, the first movie to do previs, recalls Kevin Baillie, Vice President and Head of Creative at Eyeline Studio. Our real-time capabilities back then were quite limited, but now fast forward to where we have these images that can look near to final quality in real-time. Not just previs, but a virtual art department to build set designs whether were looking at them through a camera, VR goggles or any other means. These incredibly powerful tools allow a filmmaker to accelerate some of the physical process, start it digitally and iterate on it quickly before we get into the tedious, expensive physical phase. When I worked with Robert Zemeckis on Pinocchio, we previsd the entire movie. As we were shooting it, we did real-time on-set composites of the scenes that involved live-action, relay down cameras for everything that was a fully virtual shot, then those cameras went into the visual effects post-production process. We made the movie three times using these real-time technologies, and that iteration helped Zemeckis narrow it down on what exactly he wanted.The introduction of full ray tracing to the virtual production process removes the need for rasterized rendering. Source: Ray Tracing FTW. (Image courtesy of Chaos)Unreal Engine became the answer when pandemic restrictions meant that not everyone could go into the same vehicle together to scout locations for The Handmaids Tale. I would go out, scan the locations, rebuild them in Unreal Engine, and we would walk through in sessions, recalls Brendan Taylor, President & VFX Supervisor at Mavericks VFX. I like to say that we are making a game called, Lets make a movie. Whats awesome about that is you can create all the rules for this world. The thing about a game is you need to be able to see it from all angles and be able to change things on the fly. When were working in film, were dealing with whats here and in the camera. Virtual scouting led to some discoveries that Elisabeth Moss applied when directing her first episode of The Handmaids Tale. Taylor explains, What we were able to do was build the set on the bluescreen stage from the plans, sit with a monitor on a little handheld rig [in our screening room] and explore the space with Elisabeth. She tried things out with just me, Stuart Biddlecombe [Cinematographer] and Paul Wierzbicki [Unreal Engine Specialist]. Elisabeth said, Theres something missing. Were so monochrome. Paul responded, Sometimes these buildings have red lights on them. He quickly put a flashing red light in the corner, and it changed the tone of the scene to give it this devilish look. It made this guy pushing women off of the roof even more menacing. We would have never known until we lived within this game we had created. 
For me, that was a real a-ha moment where it became collaborative again.An ambition for real-time visual effects is to have the ability to visualize, explore and iterate quickly without closing the door on the visual effects team finishing it off to get the final image. Previs from The Witcher Season 3. (Image courtesy of Cinesite and Netflix)Real-time is most useful at the concepting stage. (Image courtesy of V Technologies)Simplification is taking place when it comes to game engines and real-time. We dont have enough people who know Unreal Engine to drive a virtual production because its such a beast of a software that has been in development forever, observes Jason Starne, Owner, SHOTCALLER and Director of Virtual Production for AMS Pictures. We need some simplified things, and thats what we are starting to see with what companies like Chaos are doing. Theyre building something that allows you to have a 3D world scene that is truly a real-time path tracer, and the path tracer gives the best quality you can out of a rendered image. Real-time is an aspect of the pipeline. Its a tool just like virtual production is another toolset a studio would have. Misconceptions are an issue. The con is that the marketing has made even our clients believe this is easy to do and can be achieved without a whole lot of work going into it. In real life, we have to put work into it and make or build things in a way where we can get speed out of it. Its not just going to be real-time because its coming out of Unreal Engine. It could be, but it will look like crap. How do we get the quality versus the speed that we need?The mantra for V Technologies is content at the speed of thought, which they believe will be the next evolution of communication. (Image courtesy of V Technologies)Real-time allows digital artists to iterate way faster, which means more options for clients. Scene from Sweet Tooth. (Image courtesy of Zoic Studios and Netflix)Real-time has shifted the involvement of Zoic Studios toward the front end of the production, resulting in far less in the back end. Scene from The Sympathizer. (Image courtesy of Zoic Studios and HBO)The Chaos Group is developing real-time software programs, such as the ray tracing renderer Vantage and Arena, which does ray tracing for in-camera effects. For us, Arena is an extension of the camera that the DP already has, and as long as the DP can talk to the people who are running the stage, like to a grip or camera operator, then were in good shape, remarks Christopher Nichols, Director of Chaos Labs at the Chaos Group. We looked at what they needed to do to get the correct video on the LED walls. Essentially, we needed a system that synchronizes renders across multiple nodes and can track a camera so you can get the correct parallax. Thats the fundamental thing we added to Vantage, enabling it to become an in-camera effect solution. By introducing full ray tracing to the process that removes the need for rasterized rendering, you can make a better duplicate of the camera and dont need to optimize your data or geometry in the same way that you need for video games. Almost everything that is done in post-production uses full ray tracing, either V-Ray or Arnold. That massively cuts down on how much time and energy is used to put the CG elements behind people because its the same asset for everything. 
The virtual art department can focus on compositing the shot correctly or creating the right environment and not on, How do I remake this to work for a game engine?More options have become available to be creative. Were seeing concepts emerge now that would have been nearly impossible without the use of real-time tools to plan and execute, like digital twins, which are changing the game for creators, especially when budget and ambition are both high and theres no room for miscommunication, notes states Brian Solomon, Creative Technology Director at Framestore. Another area advancing rapidly revolves around how we utilize characters. Real-time allows us to previs and utilize dynamic 3D characters earlier in feature film production, especially with character-driven live-action pictures. Similarly, there are now advantages coming from production-grade real-time variants of characters. These are benefiting larger brands and animated IP owners, as a host of new formats are emerging that allow these characters to interact with the world in ways they couldnt prior and at turnaround speeds not hitherto possible. Real-time overall is broadening the horizon for characters.The visual effects pipeline at Zoic Studios has always been modular. Scene from The Boys.(Image courtesy of Zoic Studios and Prime Video)Real-time technology is positively transforming production pipelines. In the traditional visual effects world, it is allowing for faster iterations which enable additional exploration of creative options, notes Paul Salvini, Global Chief Technology Officer at DNEG. These advances are most critical in areas like animation and creature and character effects [such as the simulation of muscle, skin, hair, fur and cloth]. In cases where the final output from real-time solutions needs further processing, seamlessly connecting real-time and non-real-time tools becomes critical. The role of artists doesnt fundamentally change, but the tools will allow a more interactive workflow with better feedback. Real-time visual effects are also transforming more areas of production than ever before from previs through final render. Audience members are getting to enjoy even more immersive and interactive experiences. Salvini remarks, Some recent live and virtual concert experiences have done a great job of bringing together the best of the real and computer-generated worlds to deliver experiences never before possible for audiences, such as allowing a current artists performance to be mapped visually onto their younger selves.Technology is an ecosystem that is constantly evolving because of innovation. (Image courtesy of V Technologies)Real-time visual effects are here to stay because it is the best way to get feedback from clients or collaborators. Composite from 9-1-1. (Image courtesy of Zoic Studios and ABC)Virtual production was a key component in expanding the practical sets for Barbie Land. (Image courtesy of Framestore)More creative options have become available because of real-time visual effects. Screen capture from Agatha All Along. (Images courtesy of Framestore and Warner Bros.)Storytelling and being able to present clients with the best possible imagery are the main technological goals for Sony Pictures Imageworks, which meant figuring out how to get close to real-time with their GPU renderer Arnold. 
The more the client is educated with real-time and sees what the studios are doing, the more they want you to push the envelope, states Gregory Ducatel, Executive Director, Software Development at Sony Pictures Imageworks. The magic you get when you work with good creatives, clients and technology is that the creativity of those people jumps. Its crazy. Currently, if you go outside of Unreal Engine, the quality of the imagery drops, and then with lighting, it goes back up; that was not acceptable for us because artists lose the context of their work, and the creatives dont like that. This is why Spear [Sony Pictures Imageworks version of the Arnold Renderer] was brought to the table. How can we always have the highest quality possible at each given step but never go back to the previous one? The feature animation and visual effects applications are somewhat different: however, the principles remain the same. We always want better quality, more iterations. We dont want to wait for notes and for the artists to do something, then go back to notes. If you can do that in real-time, the artist can move forward, and its exactly what you want, states Ducatel.Real-time visual effects are here to stay. People who dont see that real-time is where we all should go are stuck in the past, believes Julien Brami, VFX Supervisor & Creative Director at Zoic Studios. There is time for finishing and concepting; all of these take time, but when we need the interactivity and get feedback, whether from clients or collaborators, real-time is the best tool. Real-time allows us to iterate way faster, and faster means more options. Then you can filter what is working. Instead of saying no to a client, now you have an opportunity to work with them. There are more iterations, but its less painful to iterate. The pipeline is evolving. Brami says, The visual effects pipeline at Zoic Studios has always been modular. We try to make the pipeline procedural so it can be crafted per show and be more efficient. Real-time has shifted our involvement toward the front end of the production, and we have way less in the back end. With a traditional pipeline we would have a bluescreen or greenscreen and have to key everything; all of that would have been at the tail end, which is usually more stressful.The more the client is educated with real-time and sees what the studios are doing, the more they want the envelope pushed. Scene from K-Pop: Demon Hunters. (Image courtesy of Sony Pictures Animation and Sony Pictures Imageworks)Real-time is allowing the utilization of dynamic 3D characters earlier in the process of feature film production, especially with character-driven live-action pictures. Scene from Paddington in Peru. (Image courtesy of Framestore and Columbia Pictures)Three years ago, it was all about using game engines for real-time, but with the advances in generative AI, people are doing things even more instantly. (Image courtesy of V Technologies)Technology is constantly advancing along with the growth of expectations. Virtual production, machine learning and real-time rendering engines; all of these have been around for decades, observes Mariana Acua Acosta, SVP Global Virtual Production and On-Set Services at Technicolor. Its not like it just happened overnight. What has continued to advance is our computing power. I cant even comprehend how were going to be able to maintain all of the machine learning and AI with these new generational GPUs. 
What has pushed these advancements forward has been virtual production, cloud workflows, machine learning, AI and the game engines themselves. To avoid obsolete technology, hardware has to be constantly updated. Its costly for a studio to be constantly updating hardware. Maybe at some point, you get a project or want to create your own project and realize you dont have enough hardware to go and run with it. Thats when the cloud comes in, as you can scale and have the best spec machines. This is crucial because then the cloud service providers are the ones that have a lot of resources to go around when it comes to RAM and GPUs.Rendering improves with each new release of Unreal Engine and Unity. Advances in real-time rendering, such as virtualized geometry with Unreal Engines Nanite, have significantly reduced the time required to optimize assets for real-time performance while enhancing their visual fidelity, observes Dan Chapman, Senior Product Manager, Real-Time & Virtual Production at Framestore. Looking ahead, Gaussian Splatting is setting a new standard for photorealism in real-time applications. By moving away from traditional polygon-based 3D models and building on Neural Radiance Fields [point clouds that encode light information], Gaussian Splatting offers a more efficient and accurate approach to rendering complex, photorealistic scenes in real-time. Real-time visual effects have raised the expectations of audiences when it comes to immersive, interactive and personalized experiences.A wrinkle in real-time visual effects is that the various render passes that the visual effects team will be utilizing cant be replicated as easily. Building the plane for Hijack. (Image courtesy of Cinesite and Netflix)Chapman remarks, Technologies like augmented reality, virtual reality and projection mapping allow attractions to respond to guest movements and decisions in real-time, creating personalized storylines and environments that feel unique to each visitor. This shift is also taking place online, where audiences are actively participating in experiences in a way that they can shape and share with others. This is particularly evident in platforms like Fortnite and Roblox, where users engage in live events, socialize with friends and collaborate on creative projects.Sometimes, real-time solutions slow down to a traditional visual effects renderer. It can go in the wrong direction if youre pushing it too far, notes Richard Clarke, Head of Visualization & VFX Supervisor at Cinesite. Im curious if we can evolve this two-stage process where you can visualize, explore, iterate quickly, and have a good idea of what your end product is going to be, but still not closing the door on allowing the visual effects team to finish it off or push it to the cloud for higher processing. What you get back is closer to a final version. One little wrinkle at the moment is the various render passes that the visual effects team will be utilizing cant be replicated as easily. The more AOVs [Arbitrary Output Variables] youre pushing out, the more youre going to slow down the real-time. Postvis is a real melding of real-time technology and visual effects pipeline workflows. The nice thing about postvis is its not an end product. Weve got a little trick where we make a beautiful scene in Arnold, bake all of the lights and textures, output shots in minutes direct from Maya and go straight into comp. They almost look final. Thats pre-packaging things. Game engines pre-capture a lot of their lighting to make real-time. 
That's where you can save on a lot of processing. The more I use real-time technology, the more I think it's going to be a cornerstone of everything. Autodesk showed us a beta version of Unreal Engine in Maya. I got excited about that because we've been doing it the other way around. Having Unreal Engine in your viewport was like a hallelujah moment for me because most visual effects artists are Maya-centric at the moment.

As with nature, technology is an ecosystem. What we're seeing right now at the top level is the merging of many new innovative technologies, states Tim Moore, CEO of V Technologies. Three years ago, it was all about using game engines for real-time, and with the advances in generative AI, you now see people doing things even more instantly. The merging of those two is interesting; to be generative inside a 3D environment where you have all the perspectives and control. Real-time is most useful at the concepting stage. For people who have simple thoughts and want an extravagant output, AI is amazing because you can give it a little and the AI will fill in the rest. For people who have a specific vision and want it to come to life, AI becomes challenging because you have to figure out how to communicate to this thing in a way where it sees what you see in your head, and you have to use words to do that. The future can be found in the mantra of V Technologies. Moore comments, The vision for our company is content at the speed of thought, and to me that is the next evolution of communication. Encoding and decoding language into sounds and words is an inefficient way to communicate, whereas the ability to use visuals as a communication layer is the most universal language in the world. Everyone perceives the world in a visual way. That ability to make visuals at the speed of thought is the big evolution of storytelling we will see in the next 10 years.
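Christopher Nichols notes earlier in this article that an in-camera effects system has to "track a camera so you can get the correct parallax." The usual way to get that parallax on a planar LED wall is an off-axis (asymmetric) frustum built from the tracked camera position and the wall corners, the textbook generalized perspective projection construction. The NumPy sketch below illustrates that construction only; it is not the Chaos or Epic implementation, and the wall dimensions and camera position are invented.

import numpy as np

def wall_projection(pa, pb, pc, pe, near, far):
    # Off-axis frustum from a tracked eye/camera position pe to a planar LED
    # wall defined by three corners: pa lower-left, pb lower-right, pc upper-left.
    vr = pb - pa; vr = vr / np.linalg.norm(vr)          # wall right axis
    vu = pc - pa; vu = vu / np.linalg.norm(vu)          # wall up axis
    vn = np.cross(vr, vu); vn = vn / np.linalg.norm(vn) # wall normal, toward eye

    va, vb, vc = pa - pe, pb - pe, pc - pe               # eye-to-corner vectors
    d = -np.dot(va, vn)                                  # eye-to-wall distance (assumed > 0)
    left = np.dot(vr, va) * near / d
    right = np.dot(vr, vb) * near / d
    bottom = np.dot(vu, va) * near / d
    top = np.dot(vu, vc) * near / d

    # OpenGL-style frustum matrix, then rotate/translate into wall space.
    proj = np.array([
        [2 * near / (right - left), 0.0, (right + left) / (right - left), 0.0],
        [0.0, 2 * near / (top - bottom), (top + bottom) / (top - bottom), 0.0],
        [0.0, 0.0, -(far + near) / (far - near), -2.0 * far * near / (far - near)],
        [0.0, 0.0, -1.0, 0.0]])
    rot = np.identity(4)
    rot[0, :3], rot[1, :3], rot[2, :3] = vr, vu, vn
    trans = np.identity(4)
    trans[:3, 3] = -pe
    return proj @ rot @ trans

# Toy example: a 10m x 5m wall four metres in front of the stage origin,
# camera tracked 1.7m off the floor and slightly off-center.
pa = np.array([-5.0, 0.0, -4.0])
pb = np.array([5.0, 0.0, -4.0])
pc = np.array([-5.0, 5.0, -4.0])
print(wall_projection(pa, pb, pc, pe=np.array([0.5, 1.7, 0.0]), near=0.1, far=100.0))

Because the matrix is rebuilt every frame from the tracked camera, the rendered background shifts exactly as a real window would, which is what sells the parallax in camera.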
-
FRAMESTORE DIVES DEEP INTO THE GORGEwww.vfxvoice.comBy TREVOR HOGGImages courtesy of Framestore and Apple Studios.A lab experiment gone wrong unleashes a toxic fog that causes mutations that need to be prevented from escaping into the world. To accomplish this mission, two guard towers were constructed on either side of the natural landmark with one occupied by a sniper from the Soviet Union and the other manned by the Western Hemisphere. Directing the Apple Studio and Skydance feature The Gorge is Scott Derrickson, who turned to Production VFX Supervisor Erik Norby to look after what could not be achieved practically.We started with a build that matched this little portion of the set, which was built and extended to be endless. Once we had done that, there were numerous discussions about how far can you see in the gorge through the fog? Is the toxic cloud sentient? Does it have its own behavior? There was a lot of look development versions to decide if it would have its own animation. We ended up with something that is a mixture.Joao Sita, Visual Effects Supervisor, FramestoreThe poisonous fog was CG so it could be art directed.When the action shifts down deep into the man-made environmental disaster, Norby relied on Framestore to produce 516 shots that include creatures known as Hollow Men and landscapes filled with atmospherics. The work was divided between Visual Effects Supervisors Pete Dionne and Joao Sita with the former situated in Vancouver and the latter in Melbourne. Visual effects artists were involved as well as the art department and pre-production team at Framestore. Its cool because this is one company where you can get all of that knowledge beforehand, so when you get to do the shot in post-production, you know the history of why this is happening from left to right, not right to left, or, I have this idea; has it been considered before? Sita states. That was great.Colored on-set lights set the tone for what was to emulated in CG.Obscuring the landscape is the prevailing fog. In the studio, they had trees, and the DP used colored lights to try to maintain some visual language and get the proper reflections and bounce, which later on we had to dial in or out depending on the shots, Sita remarks. We started with a build that matched this little portion of the set, which was built and extended to be endless. Once we had done that, there were numerous discussions about how far can you see in the gorge through the fog? Is the toxic cloud sentient? Does it have its own behavior? There was a lot of look development versions to decide if it would have its own animation. We ended up with something that is a mixture. The fog consisted of five to 10 layers that had to be art directed to not lose ones perception of the environment. We played with certain lights behind the trees to get their silhouettes through the fog or art directed the fog elements per angle or shot. There was a generic library built at the beginning, but as soon as we dropped that in, there would be lots of iterations trying to massage it to get the framing to read as expected. From the trees, there is this faint streaming element that is black and smoky; it was to enhance the idea that this forest had been through some sort of mutation or suffered from an anomaly.When we are under the mustard toxic plumes, the trees are leafless with twisted branches. In the purple environment, the trees are the ones that mutated into human or animal form. 
The Hollow Men were humans that once exposed to the toxic plumes, they mutated with whatever DNA was around them, and that happened with the creatures too, like the centipede that mutated into a branch-like creature or the tree roots that mutated into a crab-like creature.Joao Sita, Visual Effects Supervisor, FramestoreThe Framestore art department explores the looks for the Hollow Men and Hollow Horse as well as a crab-like root creature.Most plates had a subtle fog produced by the special effects team. The majority of the practical fog got replaced when we started doing our set extensions, Sita remarks. As soon as you have red lights hitting fog from a smoke machine you get all of this contamination of colors, so you cant rely anymore on the extractions for compositing. Also, the trees in camera had a height limit, so we had to make them taller. The trees reflect the different areas of the gorge. When we are under the mustard toxic plumes, the trees are leafless with twisted branches. In the purple environment, the trees are the ones that mutated into human or animal form. The mutants, caused by the toxic fog being unleashed by secret military labs being destroyed by an earthquake, pose a serious threat to humanity. The Hollow Men were humans that once exposed to the toxic plumes, they mutated with whatever DNA was around them, and that happened with the creatures too, like the centipede that mutated into a branch-like creature or the tree roots that mutated into a crab-like creature, Sita explains.Stand-ins assisted in getting the proper interaction with the mutants. For the Alpha, the main Hollow Men, we had an actor in prosthetics and had to replace most of his body to alter the proportions, but we kept some of the expressions, Sita states. The other Hollow Men were played by stunt performers wearing a gray suit with markers, which was a base because we had to swap quite a lot of the action with the CG version. We did a lot of mocap and animation to get the performance. The horse and centipede were keyframe animated based on real-life references, then taking in account the physics of this new character. The horse is hollow and has all of this vegetation growing inside of it, so youre going to keep it close to reality but also have a new way, such as, This should feel lighter. Were going to change the tail and mane slightly so we can imply its not a full grown and pristine horse. The Hollow Men had to appear to be hollow without raising questions as to how they could be alive and deliver a human-like performance. We never show that story, but the idea is that there are operational organs, though most of the body is vegetation or fungi or trees or bushes. There were default expressions which consisted of sad, angry and irritated. We did mocap for some of the sequences of the Hollow Men while most of the stuff involving climbing the ropes or running with the horse were purely hand-animated, Sita says.A series of explosions are set off in an effort to destroy the gorge.For the Alpha, the main Hollow Men, we had an actor in prosthetics and had to replace most of his body to alter the proportions, but we kept some of the expressions. The other Hollow Men were played by stunt performers wearing a gray suit with markers, which was a base because we had to swap quite a lot of the action with the CG version. 
We did a lot of mocap and animation to get the performance.Joao Sita, Visual Effects Supervisor, FramestoreBeing humanoid in nature did not make the design and animation process any less complicated. Even though they have arms and legs similar to human body shape due to the branches and construction of the body, there was a lot of massaging to make it work, Sita explains. If you had a branch sticking out off of their shoulder and they turned their head, youre going to be hitting that. There were limits that we had to put into what the performance can do, or do we take the branch and push it with the head, or do we change the position of the branch so its never in the way? It was rather interesting because we did a lot of design for the characters. When you get a design approved and everybody loves it, youre just seeing a frame of that character outside of context. As soon as you build an arm made out of branches and try to bend it, you go, Where do we have a joint? Do we make the forearm a separate piece of branches that intertwine with the elbow area? Whereas, Groot in Guardians of the Galaxy has bark that makes him more like a tree trunk, the Hollow Men are more like vines and branches. Costumes were hard to get right. Sita adds, A uniform dressed on a pile of branches or an uneven body, if it is not done in a certain way, will look odd because youll be getting pieces of fabric going through holes or the fabric might not fly with the wind or push against the body enough. There was a lot of back and forth to get settings that would work for the underlying surfaces of the costumes.Practical hybrid trees were constructed on set, which were then digitally augmented.The main challenge was how can you make that explosion feel like theres enough scale to destroy the gorge and hint that its going to happen but end the shot without telling. We chose a framing that was a wide angle looking slightly up so you could see the clouds above, and as soon as the ignition happens, it illuminates all the wall and clouds above, showing that this is going to be massive. As soon as it starts to progress, we got the shockwave, and everything turns into dust as its coming forward.Joao Sita, Visual Effects Supervisor, FramestoreMuzzle flashes and explosions were entirely CG with interactive lighting added to the characters. We were replacing much of the surroundings, so getting the interactive lighting into the fog and environment was simpler, Sita states. For the stuff happening above the gorge when theyre shooting the turrets, that is all CG muzzle flashes and recreation of interactive lighting. The towers were built in CG where they interact the most. The lookout was in-camera but the exterior was replaced. An all-terrain vehicle drives up the wall of the gorge. When Levi was ziplining between the two sides of the towers and the cable breaks, most of the cable is on one side of the wall because it didnt break in the middle. Thats why Drasa and Levi have to reach the east side. They attach the broken cable to the winch of the jeep, then pull as if you were pulling a vehicle out of mud. There was a lot of discussion as to how far from the wall it would go. Would that winch and cable support the weight of the vehicle? This is obviously cinematic magic. We have a couple of shots where you can tell that the tires are operational and touch the wall to help the winch. There is a side-to-side motion that makes it feel like the cable isnt fully attached. 
They shot a lot of the action with the jeep hanging on a set.

The Hollow Horse in action.

The most dangerous of the Hollow Men is the Alpha.

Framestore produced 516 shots that focused on the environment and creature mutations caused by secret labs accidentally unleashing a toxic fog.

Anya Taylor-Joy and Miles Teller in The Gorge. The VFX work was divided between Visual Effects Supervisors Pete Dionne in Vancouver and Joao Sita in Melbourne. Erik Norby was Production VFX Supervisor. (Photos: Laura Radford. Courtesy of AppleTV+)

Anya Taylor-Joy and Miles Teller in The Gorge. Framestore visual effects artists, as well as its art department and pre-production team, were involved in creature creation, landscapes and atmospherics. (Photos: Laura Radford. Courtesy of AppleTV+)

A series of explosions are set off in an effort to destroy the gorge. We did the initial ignition down the gorge, and the camera in our shot was too close to tell the story of the collapsing environment, Sita remarks. From that initial ignition and explosion, we could pick elements that will need to be destroyed or fractured, but we wouldn't have to collapse walls. There wasn't enough time in the shot for that. The main challenge was how can you make that explosion feel like there's enough scale to destroy the gorge and hint that it's going to happen but end the shot without telling. We chose a framing that was a wide angle looking slightly up so you could see the clouds above, and as soon as the ignition happens, it illuminates all the wall and clouds above, showing that this is going to be massive. As soon as it starts to progress, we got the shockwave, and everything turns into dust as it's coming forward. Between our shot down in the gorge, they put one shot from above the church, which Drasa and Levi go through as it's collapsing. That helped show how far we were from the village and that the explosion has enough power to destroy it all.
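Earlier in this piece, Joao Sita describes art-directing five to ten layers of fog so the audience never loses perception of the environment. A single layer of that idea reduces to plain distance fog: a per-pixel Beer-Lambert transmittance decides how much of the scene survives and how much is replaced by the fog color, so near silhouettes stay readable while distant trees fade. The short Python sketch below is an illustration of that formula only, not Framestore's setup; the depth values, density and colors are invented.

import numpy as np

def apply_distance_fog(scene_rgb, depth_m, fog_rgb, density):
    # Blend each pixel toward a fog color using Beer-Lambert transmittance
    # exp(-density * distance); higher density eats the background sooner.
    transmittance = np.exp(-density * depth_m)[..., None]   # per-pixel scalar
    return scene_rgb * transmittance + np.asarray(fog_rgb) * (1.0 - transmittance)

# Toy frame: a 2x2 depth map in metres over a flat green scene, mustard fog.
scene = np.full((2, 2, 3), [0.1, 0.3, 0.1])
depth = np.array([[5.0, 20.0], [60.0, 200.0]])
print(apply_distance_fog(scene, depth, fog_rgb=[0.55, 0.45, 0.2], density=0.02))

Grading several such layers independently, each with its own density and color, is what lets a compositor keep a tree silhouette readable through the murk while still burying the far wall of the gorge.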
-
BLUEBOLT CONJURES HAUNTING GOTHIC VISUAL EFFECTS FOR NOSFERATU
www.vfxvoice.com
By TREVOR HOGG
Images courtesy of Blue Bolt and Focus Features.

If there was a contemporary director who could feel right at home being transported back to the silent era, it would be Robert Eggers, with his hauntingly beautiful visuals captured on 35mm film by his go-to cinematographer Jarin Blaschke. Brought into the inner circle to enhance the imagery with CG were Angela Barson and BlueBolt, which shifted from the Vikings of The Northman to vampires in Nosferatu. The goal was to digitally match the dark gothic aesthetic when producing 253 shots encompassing 90 of 132 minutes of screentime that feature a city, castle, canal, monastery, stormy seas, blood and gore, and plenty of rats! Adding to the complexity were shots lasting up to three minutes and consisting of 800 frames that needed to be stitched together.

Everyone has been talking about the beautiful production design, the incredible cinematography, the amazing locations and the thousands of real rats; there has been little-to-no mention of the CG. I find it the ultimate compliment. If the visual effects work doesn't stand out and everything is thought to be real, then we've done our job well.
Angela Barson, Production VFX Supervisor

Shadows are a visual motif that emphasizes the vampiric threat and presence.

Robert Eggers and DOP Jarin Blaschke storyboarded the entire movie, which helped with planning, states Barson, Production VFX Supervisor on Nosferatu. There was a huge number of visuals for the look and mood of the film plus concept art and drawings from the art department for all sets and locations. The visuals were extensive. BlueBolt provided some previs for a few of the bigger CG shots to help design the shots and inform everyone of what we needed to shoot to create the final result. BlueBolt was the main vendor while Atomic Arts handled some complex clean-up shots, and quick fixes were done by an in-house compositor. The eerie atmosphere of Nosferatu came from the blend of production design and Jarin's cinematography. We shot with old Dagor and Baltar lenses, which had a specific vintage look. Scotopic filters were used for the night work, so there was almost no red in the plates, unless there was fire. The night work, interior and exterior, was hard moonlight, and sky replacements were needed on the exterior shots to give a lighter sky rather than the pitch black captured on camera. For all the full CG shots where we had no plate photography to match to, Jarin was involved in the lighting setup and made several visits to BlueBolt to help direct these shots.

Atmospherics such as dust and smoke were important to be able to integrate digital set extensions into practical plates.

The sky needed to be clear yet ominous, with striking backlit clouds. To bring the city to life, or rather to underscore its eerie stillness, we added subtle atmosphere, chimney smoke rising from a few houses, debris scattered across the streets, and empty boats drifting in the canals. The streets themselves were completely devoid of people, amplifying the unease. As this was night-time, we also matched Jarin's technique of using a scotopic filter to remove red light, which added to the shot's haunting monochromatic feel.
All these elements combined to create a balance of historic realism and an unsettling, otherworldly mood." David Scott, VFX Supervisor, BlueBolt. Extensive research and scouting were conducted by Production Designer Craig Lathrop, with the central environment being the city of Wisburg. "The visual effects team later went back to photograph and LiDAR scan all the suitable buildings in Germany and the castle in Romania so BlueBolt could create a library of CG buildings," Barson remarks. "BlueBolt created a full CG city, which included recreating the practical backlot builds. We placed virtual cameras at all the main story locations so we could make sure the layout would work for all key views. The visual relationship between the Manor House, the Hutters' House, the canal and Wisburg's main street needed to work across the movie." Inspiration for the fictional German city came from Lübeck, Stade and Gdańsk. "These cities informed the layout of the winding streets, the architecture and the estuary, grounding Wisburg in a tangible, historical reality," states David Scott, VFX Supervisor with BlueBolt. "We were fortunate to work with some fantastic concept art from Craig, whose work really captured the unsettling and atmospheric tone that Robert Eggers envisioned for the city. His designs helped set the look and feel of Wisburg, balancing the eeriness with a sense of authenticity." To add life to the city, BlueBolt created empty boats drifting in the canals. Interior scenes took place in locations or set builds and required a minimal amount of digital work. "The exterior streets of Wisburg were a large backlot build, which required extending upwards and into the distance," Barson notes. "The Transylvanian village was a large set build on location. We used locations for the cemetery, forests, shoreline and several others. We tried to avoid using greenscreens whenever possible, and sometimes when we'd like to use them, it wasn't possible. Often, they weren't viable due to the amount of camera travel in the shots." An iconic shot is the hand shadow traveling over the city. "Jarin envisioned the scene as moonlit, with hard, dramatic shadows cast across the city," Scott states. "The sky needed to be clear yet ominous, with striking backlit clouds. To bring the city to life, or rather to underscore its eerie stillness, we added subtle atmosphere, chimney smoke rising from a few houses, debris scattered across the streets, and empty boats drifting in the canals. The streets themselves were completely devoid of people, amplifying the unease. As this was night-time, we also matched Jarin's technique of using a scotopic filter to remove red light, which added to the shot's haunting monochromatic feel. All these elements combined to create a balance of historic realism and an unsettling, otherworldly mood." BlueBolt created a full CG city, which included recreating the practical backlot builds. A Gothic-Renaissance castle in Hunedoara, Romania was the basis for the dwelling of Count Orlok. "We scouted Corvin Castle early on with the idea of filming there, but it was being renovated and becoming too pristine," Barson states. "All of it would have needed to be changed by the art department and visual effects, so in the end the interiors were all built as practical sets on the stages, the courtyards were locations and the exteriors were fully CG. We scanned and photographed Corvin Castle to use as a base, then aged and weathered it to give the desired level of decay." The surrounding environment, including the river at its base, was fully CG.
When we created the fully CG castle shots, Jarin was integral in the framing and lighting of the shots. The lenses had been profiled so we could recreate the look precisely. The shots of the small boat traveling up the canal in Wisburg was also a fully CG environment. The boat and people were shot on the backlot with reflective boards on the ground to capture reflections for the CG water. The buildings on either side of the canal used buildings from our general Wisburg city assets. The monastery was based on a real one that we werent able to visit; it was created fully in CG with no reference photography. The manor house exterior was also fully CG. We photographed other buildings in the style we wanted to use for textures.Wisburg gets overtaken by a massive rodent infestation. The rats were a mix of live action and CG, Barson reveals. We had one to two thousand real rats on set, some of which were trained. We tried wherever possible to have real rats in the shot and, ideally, closest to the camera so that the most clearly seen rats were real. This approach worked a lot of the time, but sometimes it was easier to remove all the real rats and replace them with CG, rather than trying to interweave CG with the real ones which would have meant a lot of roto work. Technical precision had to be balanced with creative storytelling. We used Golaem to handle the large swarms of rats, Scott explains. To ensure their movements felt chaotic yet natural, we created around 60 different animation cycles, covering a variety of behaviors and speeds to provide plenty of variation within the swarm. Once we had the overall look and feel working, we went in and fine-tuned individual rats within the crowd. This allowed us to adjust their placement, swap out animations or remove specific rats to refine the composition of the shot. By combining these procedural tools with manual adjustments, we were able to craft swarms that felt alive, unpredictable and convincingly integrated into the unsettling world of the film.Skies played an important role in establishing the proper tone for shots.Not everything happens on land. Creating the stormy seas for Nosferatu was a complex challenge, both technically and artistically, Scott notes. In total for the voyage, we had three fully CG shots and one that needed to be integrated with live-action footage. The biggest challenge in the CG shots was achieving a stormy ocean that felt vast and believable. To do this, we had to layer in a high level of detail of violent waves, mist blowing off the crests and gusting rain to capture the chaotic energy of the storm. Each element had to work together to create depth and realism. For the live-action shot, integration was particularly tricky. We had to carefully match the lighting and the intensity of the rain already present in the plate to ensure a seamless blend between the CG water and the practical elements. Balancing all these factors was key to making the ocean feel like a living, relentless force within the world of the film. All of the shots were backlit with the moon either in frame or just out of view at the top. This lighting setup allowed us to create dramatic skies filled with richly textured, backlit clouds that added depth and atmosphere to each scene, Scott remarks. The skies played a significant role in storytelling as the voyage progressed. Initially, they reflected a sense of uneasy calm, with pastel hues of sunrise framing the Empusas departure. 
However, as the plague began to take hold of the ship and Orlok claimed his victims, the skies grew darker and more ominous. "The eerie atmosphere of Nosferatu came from the blend of production design and Jarin's cinematography. We shot with old Dagor and Baltar lenses, which had a specific vintage look. Scotopic filters were used for the night work, so there was almost no red in the plates, unless there was fire. The night work, interior and exterior, was hard moonlight, and sky replacements were needed on the exterior shots to give a lighter sky rather than the pitch black captured on camera." Angela Barson, Production VFX Supervisor. The bridge was a digital creation. The most difficult rat scene takes place in the chapel. "The actors had to wade through a swarm of rats, using their flaming torches to keep them at bay," Barson remarks. "We created some practical molded rat mats and had a load of model rats that we used to cover the floor. The actors had to pick their way round the mats and model rats, giving them something to react to, and they also provided a very good lighting reference. It also meant if you saw through a gap in the CG rats, you'd be looking at the rat mats rather than bare flooring. Lighting the swarms of CG rats to match the moving firelight was especially challenging and took many painstaking rounds of lighting and compositing to make it seamless." Orlok's hand shadow traveling over Wisburg was also complex to execute. "It was one of the few shots in the film that used a more obviously non-practical camera move," Barson notes. "The entire Wisburg town and its surrounding environment is seen in this shot." The shot consists of 958 frames. "The shot gives the audience plenty of time to take in every element, meaning we couldn't cut corners," Scott states. "Everything had to hold up to scrutiny. It also had to blend seamlessly with the real lighting and never feel CG. Achieving that level of realism while maintaining the eerie, unsettling atmosphere was a huge challenge, but, ultimately, it was one of the most rewarding aspects of the project." A ship is digitally added to make the frame subtly more interesting. For the sea shots, backlighting was favored. Bluescreen was also utilized to get the proper scope for shots. Inspiration for the fictional German city of Wisburg came from Lübeck, Stade and Gdańsk. As the vampire threat becomes more prevalent, the skies take on a threatening appearance. The shadow hand effect lasts for 958 frames and was one of the most complex shots to execute. Swarms of rats consisted of practical and digital rodents. There was no holding back on the blood and gore. Visual effects played a supporting role. "Everyone has been talking about the beautiful production design, the incredible cinematography, the amazing locations and the thousands of real rats; there has been little-to-no mention of the CG," Barson states. "I find it the ultimate compliment. If the visual effects work doesn't stand out and everything is thought to be real, then we've done our job well." BlueBolt is proud to be part of Nosferatu. "Every aspect of the visual effects was a true team effort, and everyone involved did an amazing job," Scott notes. "From the large-scale CG environments to the intricate atmospheric details, every shot was crafted with such care and precision. It was an honor to be part of bringing Robert Eggers' vision to life." The hard work did not go entirely unnoticed. "I have nothing but praise for my collaboration with BlueBolt," Eggers remarks.
"Their trademarks are photorealism, naturalism and subtlety, not maximalism. Their attention to detail is impeccable as well as the drive to never finish until the work is as close to perfection as possible. Above all, BlueBolt's commitment is to telling great stories. I look forward to working with them again on the next one."
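Scott's description of the rat swarms, with roughly 60 animation cycles spread across the Golaem agents and then hand-tuned per shot, follows a common crowd-layout pattern. The Python sketch below is a generic, hypothetical illustration of that idea only: seeded random assignment of cycle, speed and time offset per agent, plus a manual override table for art-directed fixes. It is not Golaem's API, and every name in it is invented.

```python
import random
from dataclasses import dataclass

# Hypothetical illustration of the crowd-variation idea described above:
# each agent is assigned a cycle, playback speed and time offset, and
# individual agents can be overridden by hand once the swarm reads well.
CYCLES = [f"rat_cycle_{i:02d}" for i in range(60)]  # ~60 behaviour clips

@dataclass
class Agent:
    agent_id: int
    cycle: str
    speed: float    # playback speed multiplier
    offset: float   # start-frame offset so agents do not move in sync

def layout_swarm(count: int, seed: int = 0) -> list[Agent]:
    rng = random.Random(seed)  # deterministic per-shot seed
    return [
        Agent(i, rng.choice(CYCLES), rng.uniform(0.8, 1.3), rng.uniform(0.0, 48.0))
        for i in range(count)
    ]

def apply_overrides(agents: list[Agent], overrides: dict[int, str]) -> None:
    """Swap cycles on specific agents flagged in review (art-directed fixes)."""
    for agent in agents:
        if agent.agent_id in overrides:
            agent.cycle = overrides[agent.agent_id]

swarm = layout_swarm(count=2000, seed=101)
apply_overrides(swarm, {14: "rat_cycle_07", 231: "rat_cycle_42"})
```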
-
SENNA FAST-TRACKS BRAZIL ON THE GLOBAL VFX CIRCUITwww.vfxvoice.comBy CHRIS McGOWANImages courtesy of Miagui and Netflix, except where noted.For Overall VFX Supervisor Marcelo Siqueira, Netflixs mini-series Senna was a chance both to pay tribute to the Brazilian Formula 1 racing legend Ayrton Senna and to put Brazil on the global VFX map. The $170 million production was reportedly the most expensive series in the nations history and involved 2,089 visual effects shots across six episodes and six different VFX vendors.The industry had been buzzing about this project for years, Siqueira says. After all, were talking about Brazils greatest icon, recognized worldwide, and one of Netflixs biggest bets in Latin America. Recreating the most emblematic races of the greatest Formula 1 driver, set in racetracks around the globe, was a dream for any VFX professional. The mini-series was produced by Netflix and Brazils Gullane Entretenimento. Gabriel Leone stars as Senna, a three-time Formula 1 champion, who passed away at 34 in a 1994 race crash. Vicente Amorim and Julia Rezende directed.For the mini-series Senna, Brazilian creative production studio Miagui helped re-create Ayrton Sennas Monaco Grand Prix races with CGI.Siqueira notes, As the project advanced, Scanline came on board as a creative and management partner, and they, along with Gullane, chose the collaborating vendors. In the end, we were delighted to see that three of the six [VFX] companies that contributed to the project were Brazilian, delivering work on a par with international vendors like Scanline [VFX] and NetFX. Craig Wentworth, Scanline Overall VFX Supervisor, explains, Our largest vendor was actually NetFX, a global network of talent working under the Netflix umbrella. That team contributed 725 shots to the series, focusing on set extensions, burn-ins, clean-ups and makeup fixes. Scanline completed 390 shots, most of which involved racing sequences at Suzuka, Japan and Interlagos, Brazil.Brazilian racing champion Ayrton Senna (Gabriel Leone) was the focus of VFX work by Scanline VFX, NetFX and Eyeline Studios along with Brazilian studios Miagui, Quanta and Picma Post.Quanta, Picma Post and Miagui were the three Brazilian vendors, Wentworth explains, Quantas contribution covered virtual production, online and color services and [they] also provided 488 shots, handling the majority of burn-ins and full-screen FIA footage restoration. Picma Post provided 279 shots of Formula Ford and F3 races in Episodes 1 and 2, and pit lane set extensions for Imola, Italy, in Episode 6. Siqueira notes, Picma Post was tasked with the iconic 1994 Imola race, tragically marked by Ayrton Sennas fatal accident. Picmas set extensions added elements such as additional cars and digital audiences to the filmed scenes, ensuring a faithful and impactful recreation of these historic moments. Miagui had 94 shots and was the primary vendor for the Monaco 84 and 88 races.Eyeline Studios was a key consultation partner for our virtual production effort and was responsible for scanning our production vehicles for 3D recreation. They also created several all-CG racing shots in Unreal Engine for us, a first in Scanline and Eyelines long history, says Wentworth.Monaco Grand Prix 84 and 88The two Monaco Grand Prix races in 1984, marked by torrential rain, when Senna drove a Formula 1 car for the first time, and 1988, when he collided with the guard rail at the tunnel entrance, were assigned to Miagui. 
Siqueira comments, The company was responsible for fully recreating the track scenes in full CGI. We knew that the work on Monaco 1984 would build upon the foundation set for Monaco 1988. Everything constructed for M88, including buildings, grandstands, props and crowds, would be repurposed for M84, with an added layer of complexity due to the rain. This included advanced fluid simulations for wet conditions and reanimating the cars to reflect their distinct behavior on a slippery track compared to a dry surface.Produced by Netflix and Brazils Gullane Entretenimento, Senna recreated the most emblematic races of the great Formula 1 driver, set in racetracks around the world. The mini-series utilized 2,089 VFX shots.Continues Siqueira, Miagui used a simplified model of the 1984 Monaco circuit as the base for their work, as the track layout back then differed significantly from the modern version. A dedicated team focused exclusively on constructing the circuit environment, continuously referencing archival footage and photographs from the era. Their meticulous attention to detail enabled them to recreate the historic Monaco circuit with stunning accuracy, ensuring it authentically reflected the time and atmosphere of Sennas legendary performances.Archival FootageQuanta Post was responsible for a significant volume of burn-ins, the restoration of archival footage from the era and the virtual production content. The series historical foundation relied heavily on FIA archival footage, which needed to be seamlessly integrated into the visual language in an organic and authentic manner. Siqueira explains, To achieve this, the treatment of the footage displayed on each monitor or TV accounted for the broadcast year, whether the transmission was over-the-air or via cable, and the type of monitor used at the time. This approach enabled the precise recreation of the conditions in which each character experienced the races, whether at home, in bars or within pit boxes and broadcast booths. Over 400 burn-in shots were meticulously crafted, ensuring historical accuracy and visual cohesion throughout the series narrative.Miagui was responsible for recreating the track scenes in full CGI. The two Monaco Grand Prix races in 1984 and again in 1988 were marked by torrential rain, soaking the track. Advanced fluid simulations for wet conditions and reanimating the cars were used to reflect their distinct behavior on a slippery track.Miagui used a simplified model of the 1984 Monaco circuit as the base for their work, as the track layout back then differed significantly from the modern version. A dedicated team focused exclusively on constructing the circuit environment, continuously referencing archival footage and photographs from the era. Their meticulous attention to detail enabled them to recreate the historic Monaco circuit with stunning accuracy, ensuring it authentically reflected the time and atmosphere of Sennas legendary performances.Marcelo Siqueira, Overall VFX SupervisorSuzuka and Interlagos RacesScanline was tasked with recreating the iconic races at Suzuka (1988, 1989 and 1990) and the historic 1991 Interlagos race, where Senna claimed his first victory in Brazil. These sequences required a blend of advanced techniques to authentically capture the emotion and realism of these legendary moments. 
Siqueira comments, NetFX and other vendors focused on clean-up tasks and simpler compositions, ensuring visual consistency and supporting the finalization of scenes that complemented the main narrative.Wentworth recalls, I was first introduced to the production team in February 2023 as a representative of Scanline VFX. Our earliest conversations centered around the challenges the show faced in world-building specifically, the several racetracks that would need to be created digitally, either in full or as set extensions and the unique challenge of visualizing what Vicente described as Sennas superpower. Given Scanlines long history in producing complex and creative visual effects, we seemed like a great fit for the series as a whole. In May that year, I began consulting with Sica [Siqueira] on many aspects of the VFX production. Later, as Scanlines relationship with the production evolved, my [VFX] Producer partner, Vero Lauzon, and I assumed responsibility for the distribution and execution of the final VFX in the show. We also supported the LED shoot planned for early December with support from our counterparts at Eyeline Studios.Several racetracks that were landmarks in Sennas career were re-created digitally, either in full or as set extensions. A dedicated team focused on constructing the circuit environment, referencing archival footage and photographs from the era to help recreate the historic Monaco Grand Prix circuit in detail. (Images courtesy of Scanline VFX and Netflix)The treatment of the footage displayed on each monitor or TV accounted for the broadcast year, whether the transmission was over-the-air or via cable, and the type of monitor used at the time. This approach enabled the precise recreation of the conditions in which each character experienced the races, whether at home, in bars or within pit boxes and broadcast booths. Over 400 burn-in shots were meticulously crafted, ensuring historical accuracy and visual cohesion throughout the series narrative.Marcelo Siqueira, Overall VFX SupervisorSubtle EffectsSenna was not a series that sought flashy or noticeable visual effects. Siqueira comments, The entire cinematic language was carefully planned to avoid the risk of distracting the audience and making them wonder how a particular scene was filmed or created. We conducted numerous tests with lenses, speeds and depths of field to establish a unique visual standard for the series. The goal was to ensure that regardless of the scenes origin whether shot with our real cars or created by vendors like Miagui or Scanline all shared the same camera movements and framing characteristics. This uniformity was essential to maintaining immersion and aesthetic consistency.Grand Prix raceway shots with CGI extensions. (Images courtesy of Picma Post and Netflix)Wentworth adds, Beyond the racing, which can certainly be considered the VFX highlight of the series, there are a lot of visual effects in Senna that people will not know are VFX. One of my favorite shots is something as innocuous as a newspaper insert where we had to add a photograph to it later, but it is so perfectly executed that you cannot tell it was done in post. And we have hundreds of other examples just like that, of things that were added after the fact because they werent quite ready for the shoot but were essential to the narrative.Race TechniquesSeveral techniques were used for the race shots. 
Siqueira remarks, "In January 2023, during the third month of preparation, I presented a proposal for producing the race sequences using four distinct techniques. Given that we had 22 faithful replicas of cars and access to real tracks, the primary technique was set extension, which became the most widely used. The cinematography required shallow depth of field on the characters, making close-ups essential. Inside helmets and balaclavas, the actors had only their eyes to convey emotion, which made the use of virtual production indispensable. For wide shots, the solution was to work with full CGI. The fourth technique addressed the cars themselves: although we had 22 replicas, each race featured only two complete cars, usually Senna's and his main rival's. For all other vehicles, digital creations would be necessary." "We animated very specific vehicle action around our hero car so it really felt like Senna was in the middle of a race. To really enhance that feeling, our cars were mounted to a motion base that added movement to the vehicle based on its track position. In some respects, we created the ultimate racing simulator for those few weeks." Craig Wentworth, Overall VFX Supervisor, Scanline. Tests: Siqueira adds, "With this plan in hand, we moved on to a proof of concept. I traveled to Argentina with Cinematographer Azul Serra and Executive Producer Caique Ferreira for two days of filming in Balcarce [Argentina]. There, we collaborated with the team from Crespi [sports car manufacturer], responsible for constructing and piloting the cars, to test the possibilities and validate the proposed strategies." Wentworth adds, "In the end, for racing scenes, we knew we would need to employ every trick in the book. The physical cars were filmed by a performance unit, led by [second-unit] directors Rodrigo Monte and Cory Geryak. They captured countless hours of these cars racing around three different tracks in Argentina and Uruguay. Depending on what footage was used in what section of a race, our job in VFX was to ensure continuity to the action by adding appropriate race-track set extensions in the background." Numerous set extensions were used throughout the Senna mini-series. (Images courtesy of Netflix) Naked Cars and 3D Environments: Some of the physical cars filmed had no bodies. Wentworth explains, "These naked cars, as we called them, were designed to stand in for any vehicle for which Crespi had not built a body. For example, in the Suzuka 88 race, Senna makes a spectacular maneuver at the circuit's hairpin turn, passing two cars in quick succession. Only Senna's McLaren was 100% practical. The other four cars in that sequence of shots were just chassis that Scanline 3D-tracked and added car bodies to in CG, along with the requisite set extensions in background." Several full CG shots covered action that had been missed or felt missing in editorial, or that could not be filmed by the performance unit. For these, it was necessary to create full 3D environments; for example, at Suzuka and Monaco, incorporating crowds, marshals, signage, every historically accurate detail you could think of, and, in some cases, a full field of CG cars. Onboard Shots: Wentworth says, "All of this was combined with onboard shots of our actors filmed on an LED stage over a two-week period at the end of main unit photography, footage that really became the backbone of racing scenes in editorial." Onboard driving shots were filmed at Quanta in São Paulo.
Wentworth continues, With Scanlines guidance, the Quanta team put a lot of effort into leveling up our previs environments, which had been created in Unreal Engine. That meant adding lots of model detail to match what the art department had created on location for all of our races, as well as an overall lookdev pass at track level. Wentworth adds, We animated very specific vehicle action around our hero car so it really felt like Senna was in the middle of a race. To really enhance that feeling, our cars were mounted to a motion base that added movement to the vehicle based on its track position. In some respects, we created the ultimate racing simulator for those few weeks. It was all CG around Senna when we were close up with him, particularly in the Formula 1 races, continues Wentworth. But that CG was very cleverly filmed and, with the exception of adding camera shake in online, not a single VP shot was touched by VFX, so we got real value for money from that material and that filming approach.Onboard shots of the drivers were filmed on an LED stage for two weeks at the end of main-unit photography. (Image courtesy of Picma Post and Netflix)AccuracyOne of the most inspired choices made by Vicente Amorim, our director and Showrunner, was to intercut our version of events with actual footage provided to us from the FIA, Wentworth says. That meant, at any given moment, our VFX would be intercut with footage of what really happened, typically in a television broadcast, 30 years ago. So, in terms of maintaining historical accuracy, there was really nowhere for us to hide in VFX. To that end, the production itself and all our vendors spent countless hours scouring the FIA material for reference to how sets should be dressed, what people were wearing, what advertising signage was on display at what corner of a track, and so on. Our virtual environments reflected that effort.Monaco represented a unique challenge. Wentworth states, Due in part to time and practical constraints, we made the decision very early in the process to commit to creating the Monaco races in CG. The team at Miagui put exhaustive effort into recreating Monaco based on photographic and filmed reference. Fortunately, we knew, based on the extensive previs Sica and his team had created for the series, exactly which sections of the track were going to be filmed the most, and so modeling and lookdev efforts were concentrated there and kicked off early to give us plenty of time to bring that world to a very photoreal level. Miagui also created digital versions of our production cars so that our CG twins would look and feel just like the practical cars, which was important for visual consistency.22 faithful replicas of cars were driven by Senna and his rivals, but each race only featured two cars at a time. (Image courtesy of Crespi and Netflix)AssetsIn addition to the extensive CGI construction of elements surrounding the tracks, hundreds of assets were created, including cars, cranes, guardrails and characters, according to Siqueira. For the crowd elements, we developed an extensive sprite library during filming. Over 500 extras were captured in 12 different positions using six cameras, performing various actions. This process generated more than 36,000 unique options, enabling vendors to request and utilize these resources as needed. To manage this material, we built a comprehensive database that was continuously updated by all departments throughout the project. 
This system became an essential tool in post-production, ensuring that everyone had access to the necessary information and resources to meet the precise demands of each scene efficiently and effectively. "[A]t any given moment, our VFX would be intercut with [actual footage provided by the FIA] of what really happened, typically in a television broadcast, 30 years ago. So, in terms of maintaining historical accuracy, there was really nowhere for us to hide in VFX. To that end, the production itself and all our vendors spent countless hours scouring the FIA material for reference to how sets should be dressed, what people were wearing, what advertising signage was on display at what corner of a track, and so on. Our virtual environments reflected that effort." Craig Wentworth, Overall VFX Supervisor, Scanline. Living Up to Legacy: "It wasn't easy, but it was the most rewarding project of my career," Siqueira says. "We spent eight months in preparation because we were recreating scenes that millions of people were familiar with, including a legion of fans who know every curve and detail by heart. Absolute fidelity was crucial, from the shape of each car, with its branding and sponsors, to the billboards around the tracks and iconic elements like the Casio Chicane at Suzuka, the Marlboro building at Imola and the water tower at Interlagos. These recreations had to seamlessly integrate with FIA's original footage, making historical accuracy even more critical." The drama of the race was reflected in close-ups of Senna's eyes. (Image courtesy of Netflix) Wentworth says, "Speaking very personally, this project was a very new experience for me. It was the first time I have been creatively responsible for such a large volume of work. My VFX Producer, Vero Lauzon, and I not only had to manage the distribution and execution of the work but also forge new partnerships with international vendors we had never worked with before, guiding them through the process of producing high-value content of enormous scale." Wentworth adds, "Senna is the most ambitious project to have come out of Brazil, and just by virtue of its subject matter, it came with its own built-in level of gravitas. Protecting and honoring Senna's legacy was very important to everyone involved in the VFX process, which in and of itself became its own challenge: Do we, does the work, live up to standards set by the man himself? I think we, our partners, our vendors and the entire team from Brazil ultimately surpassed expectations, and I am very proud of everyone's work."
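The sprite library Siqueira describes (over 500 extras captured in 12 positions from six cameras, yielding more than 36,000 unique elements) is at heart a lookup problem. As a rough sketch only, assuming a simple naming scheme rather than the production's actual database, the catalogue could be keyed on (extra, position, camera) like this:

```python
from itertools import product

EXTRAS, POSITIONS, CAMERAS = 500, 12, 6   # figures quoted in the article

def sprite_key(extra: int, position: int, camera: int) -> str:
    # Hypothetical naming scheme, e.g. "extra0417_pos03_camB"
    return f"extra{extra:04d}_pos{position:02d}_cam{chr(ord('A') + camera)}"

# Build the full catalogue: 500 * 12 * 6 = 36,000 unique sprite entries.
catalogue = {
    sprite_key(e, p, c): {"extra": e, "position": p, "camera": c}
    for e, p, c in product(range(EXTRAS), range(POSITIONS), range(CAMERAS))
}
assert len(catalogue) == 36_000

# A vendor request might then filter by action/position and viewing angle.
grandstand_picks = [k for k, v in catalogue.items()
                    if v["position"] == 3 and v["camera"] in (0, 1)]
```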
-
INSIGHTS FROM WOMEN IN VFX LEADERSHIPwww.vfxvoice.comby NAOMI GOLDMANWhat are the pathways for women to ascend to positions of leadership in the visual effects industry? How can we build the next generation of leaders? And what can our industry do as a collective to uplift and support women from diverse backgrounds?Mentoring and guidance are all a part of having a successful career, but at the core work your ass off. Chrysta Marie Burton, Senior Vice President Physical Production & Visual Effects, Paramount PicturesIn celebration of International Womens Day, we are proud to showcase our VES panel with four extraordinary women leading the charge in visual effects, who shared their insights and ideas on the state of women in VFX leadership. Lending their voices to this dynamic conversation: moderator Lisa Cooke, VES Board Chair Emerita; Janet Lewin, Senior Vice President, Lucasfilm VFX & General Manager, ILM; Chrysta Marie Burton, Senior Vice President Physical Production & Visual Effects, Paramount Pictures; and Kathy Chasen-Hay, Senior Vice President of Visual Effects, Skydance Productions.Lisa Cooke, VES Board Chair Emerita.Lisa Cooke: In reflecting on your vibrant careers, share with us your pathway to leadership in the industry and your origin story in the workforce.Chrysta Marie Burton: After graduating from USC Film School, I started in TV animation at Sony and Nickelodeon. But my life-changing event was when my resume landed on Janet Lewins desk at ILMand so it is amazing to be on this panel with her today. My first big opportunity was opening ILM in Singapore, where we built a team and grew it from the ground up. Back then, I think we would push women off the cliff before their wings started flapping and hoped they would fly. I was fortunate to have exceptional women role models and because of them, I learned how to help nurture the next generation.Janet Lewin: I just got my R2-D2 pin for being at ILM for 30 years, and Im so proud of this achievement. The visual effects field is always evolving with new venues for growth. I started in the purchasing department as a temp, and since I always aspired to be in the film industry, I was happy to get my foot in the door. I wove my way through the organization, always saying yes to opportunities. I sought out great mentors, and I could see myself in these women, who helped me to move forward.We should lean in to the unique strengths that women can bring to the job, as consummate multi-taskers and collaborators. Janet Lewin, Senior Vice President, Lucasfilm VFX & General Manager, ILMKathy Chasen-Hay: I started my career at KCET-TV and realized quickly that I wanted to be hands-on. As a photographer, I think I was always doing visual effects in some way. I was a traffic coordinator at a digital production house, then a VFX editor and compositor for half of my career. I transitioned into VFX producing as a mom with three young kids all as a self-taught VFX producer. All of the big moments in my career were thanks to women who were my champions. There are so few women VFX supervisors, so mentoring younger women is an integral part of my journey.Lisa Cooke: What other advice would you give to young women to advance their careers and to set their expectations about working in this field?Chrysta Marie Burton, Senior Vice President Physical Production & Visual Effects, Paramount Pictures.Chrysta Marie Burton: The reality is that these jobs are not easy. 
It can be glamorous when a project reaches completion, but it is hard to get there and it takes sacrifice. It is a competitive field and the best push through. Mentoring and guidance are all a part of having a successful career, but at the core work your ass off.Janet Lewin: Our structure on the facility side is very much based on how men work, so there are barriers for women, perceived or real. It means that women need to advocate and take a chance on themselves and rise above imposter syndrome and know what they really want. Ideally, companies should offer stretch assignments, part-time or job-sharing roles, because family responsibilities fall largely to women. We should lean in to the unique strengths that women can bring to the job, as consummate multi-taskers and collaborators. At ILM, I created a platform for women to convene and inspire each another, and figure out how to solve problems, together.Kathy Chasen-Hay: Very often, women go through their lives and careers trying to make people comfortable and happy, and apologize for their choices. I want to tell other women to unapologetically take their seat at the table, voice your opinions and own your power. Too many of us were told to not speak up, and it is especially challenging when you are the lone woman in the room. Assert your presence and your authority. I say you can have it all and you deserve it all.Janet Lewin: It is also about developing allies with men. It has been encouraging for me that a lot of men showed up at our ILM women forums to learn and to help. We need that meaningful collaboration to break patterns and forge new solutions.Janet Lewin, Senior Vice President, Lucasfilm VFX & General Manager, ILM.I want to tell other women to unapologetically take their seat at the table, voice your opinions and own your power. Kathy Chasen-Hay, Senior Vice President of Visual Effects, Skydance Productions.Chrysta Marie Burton: I advocate for women to speak first at the table and command that attention and authority early. We need to get comfortable with being uncomfortable. Not everyone has this innate skill set, to operate assertively in a system not built for women. So I try and teach those confidence-building soft skills. And I strategize when casting and crewing a show to factor in the lives of the professionals in our talent pool especially those with families to set them up for success.Lisa Cooke: Where do we stand today with women in the industry? What kinds of programs do you feel could help further advance women in the workforce and what are your companies doing? Janet Lewin: Everyone has to be on board with investing in the future. We need to formalize our vision and make it actionable. That means formalizing mentorships, apprenticeships, listening sessions, stretch assignments. At ILM, we are trying to break the catch-22 about not always hiring a supervisor who has been a supervisor before and just drawing from the same talent pool. We aim to be transparent about what it takes to be considered for a supervisor role and then as ask our producers and leaders to take chances and give opportunities to rising talent.Chrysta Marie Burton: Our supervisor pool is aging and we keep making the same calls. We have tried co-supervisor situations and look to create some flexible opportunities, where women with families can bring them on location. I know how to be a producer, because someone walked me through it the high-level responsibilities and the day-to-day details. 
We need a broad embrace of hands-on guidance for our next generation to grow our talent pool systematically. Kathy Chasen-Hay: I have taken more than a few project managers and brought them up to producer, because I was willing to take a chance on them. Mentoring is so many things, including demonstrating how we look at things 360 degrees. Skydance is committed to diversity and giving women directors opportunities to ascend. Diversity of life experience and creative vision is good for the workforce and the health of our companies. Kathy Chasen-Hay, Senior Vice President of Visual Effects, Skydance Productions. Lisa Cooke: The strong messages from our women in VFX leadership are to: invest in and hire for potential; demonstrate and formalize mentoring programs; take chances as leaders and as aspiring practitioners; take your seat at the table and assert your voice; and forge alliances to build collaborative solutions to create a more equitable and diverse workforce. Watch the Women in VFX Leadership conversation here: https://youtu.be/U-dmc-iok_c?si=XAwss-4U7aKfQAst
-
RODEO FX REVS UP TO SPEED FOR SONIC THE HEDGEHOG 3 www.vfxvoice.com By TREVOR HOGG. Images courtesy of Rodeo FX and Paramount Pictures. Picking up enough speed to do a trilogy of live-action movies plus a streaming series is the Sonic the Hedgehog franchise, which has been spearheaded by filmmaker Jeff Fowler and his trusted Visual Effects Supervisor, Ged Wright. Brought into the world of Sega is Rodeo FX, which had a blast animating the title character for Sonic the Hedgehog 3 as well as his cohorts Tails, Knuckles and new antagonist Shadow for 265 shots. "Shadow was a fun character to do. We didn't have the Keanu Reeves voice at the beginning. At some point, it came in, but we knew it would be him, so we were looking into Keanu's behavior. Shadow is stoic, and they referenced the Terminator. Another fun character was Knuckles because he wasn't that comedic in the previous movie." Graeme Marshall, Visual Effects Producer, Rodeo FX. With all of the flying debris, it was important not to take away from the performance of the characters. "When we started playing with Sonic, he was overly cartoonish," explains Sébastien Francoeur, Visual Effects Supervisor at Rodeo FX. "The face was squashed and stretched because the character allows us to do that, but at some point, they wanted something that had a bone structure so you couldn't deform it too much. The mouth is always slightly toward camera, but it can't go both ways. There's a rule with the eyes. The one that's looking forward is always a tad smaller than the other one. It was interesting and well managed because we were dealing with the Animation Supervisor on their side, Clem Yip, along with Ged and Jeff." The black hole was supposed to be nightmarish, so tentacles were incorporated into the design. All of the character assets were shared by the various vendors. "This is the first time we ingested a rig, because usually when we ingest an asset it's the model, the groom and texture, and we're building up the rig," Francoeur states. "I was glad that we could jump in that project with the right thing. It was harder at the beginning because we had an issue here and there." A new workflow was implemented for lighting. "Before, we were using Katana, but we decided to do the lighting and effects in the same software, which is Houdini. Environments were also done in Houdini. USD is a file format that has eased the exchange between studios, so we were using that as well." Each vendor has a different way of working. "The challenge at the beginning of the show was putting all those proprietary tools aside and making these assets be the same from facility to facility, like the certain facial expressions and movements that are specific to particular characters," notes Graeme Marshall, Visual Effects Producer at Rodeo FX. "Being an established IP, we didn't do anything too far off the map." Achieving the desired look for the vegetation and mountains was a creative journey. The effects and lighting were done entirely in Houdini, which is a new workflow for Rodeo FX. [T]he black hole was something that needed to be developed. We started with some concepts with the black hole, and Jeff [Fowler, director] wanted it to be more nightmarish. One guy on our team had an idea, showed it to me, and we tweaked it.
All of the tentacles made the black hole more nightmarish, and we used a lot of distortion behind it to feel like theres some kind of warp.Sbastien Francoeur, Visual Effects Supervisor, Rodeo FXThe client had their own team of character builders and riggers, and we had access to them, Marshall continues. If we had a problem, we could poke them and ask technical questions. They also set up a library of key facial expressions that was shared with the vendors, so when Sonic smirks its going to be that exact smirk that you would expect from Sonic. Working with Clem was nice because he comes from the facility side. Clem understands the process and was able to work pretty much directly with our artists to stage the blocking. Ged and Clem were super collaborative on this project. Some weird physics were involved with the antagonist. Marshall notes, Shadow was a fun character to do. We didnt have the Keanu Reeves voice at the beginning. At some point, it came in, but we knew it would be him, so we were looking into Keanus behavior. Shadow is stoic, and they referenced the Terminator. Another fun character was Knuckles because he wasnt that comedic in the previous movie.For comedic reasons, the mountaintop destruction shot lasted three seconds.The shots [of the black hole sequence] are a testament to the collaboration between our departments internally. Youve got environment, effects, CFX, animation, and our lighting team had to do all the lighting interaction with the lightning.Graeme Marshall, Visual Effects Producer, Rodeo FXRodeo FX has developed a reputation for producing monsters, but the hedgehogs were a somewhat different endeavor. I feel like a monster might be more technical because its about the weight, Francoeur remarks. In this case, we started with a voice-over so you get the idea when you listen to it, then you build on top of that. You try to put the accent where the joke can work. There is another distinction. When you have someone reacting to the monster, you create the performance around their reaction, Marshall states. Whereas, these characters have to emote and perform to tell the story. Special attention was paid to the eyes. We were art-directing those eyes to make sure theyre not looking dead or flat or too cartoony, Francoeur notes. Part of their realism was putting a nice, detailed reflection in them.The look of the Chaos energy reflects the emotional state of Shadow.There was plenty of reverse engineering. A lot of the work we did with Ged and Jeff was looking back at how they shot it, Marshall states. Youd put a bounce card to give a little bit more of a soft wash on the surface of someones face and in their eyes. There were often times where we ended up doing that in CG to try to give that realism and emotion as well as additional hints of light, which you wouldnt normally get, that they do on set. A signature character effect is the Chaos energy which is electrical in nature. We received a base recipe and needed to learn the aesthetic, such as its parameter and curve, Francoeur explains. Theres intention behind the Chaos energy in that its always linked to the emotional state of the character. We were playing with proper physicality so that the lightning had the proper shape and was illuminating the character correctly.The Test Track Chamber appears in two different conditions. We tell the story of Shadow at the beginning of the movie and see him in the Test Track Chamber for 10 seconds, Francoeur states. 
The client wanted to see the operational and disused Test Track Chamber, so we had to run them in parallel. You need one to match the other, but at some point, we said, Okay, they're similar enough, so we can split it up. The destroyed one has been 50 years without maintenance, and vegetation would have taken over. When the black hole opened, we needed to have huge interaction with everything. Honestly, that was a good challenge because it was not a linear workload. We didn't have enough time to say, Let's finish our environment and then we're going to do the destruction. The vegetation had its own entire setup. You need a huge machine, and we don't have unlimited render power. One of the challenges was playing with the animation of those weeds moving because at some point the weeds looked like they were headbanging! Two versions of Sonic were created, with the mirror reflection being squashed and stretched. With all of the flying debris, it was important not to take away from the performance of the characters. "There was a lot of balancing that we had to do, and we had a talented effects team," Marshall remarks. "We've got moving bushes, smoke, bigger chunks of debris, the black hole, and one-offs around the environment, like air conditioning ducts. Then there's Tails with his little tuft of hair!" Different development approaches were adopted for the portal and black hole. "The portal is a ring that we ingested to make sure they all look the same, but the black hole was something that needed to be developed. We started with some concepts with the black hole, and Jeff wanted it to be more nightmarish," Francoeur reveals. "One guy on our team had an idea, showed it to me, and we tweaked it. All of the tentacles made the black hole more nightmarish, and we used a lot of distortion behind it to feel like there's some kind of warp." The sequence required the entire expertise of Rodeo FX. "These shots are a testament to the collaboration between our departments internally," Marshall notes. "You've got environment, effects, CFX, animation, and our lighting team had to do all the lighting interaction with the lightning." The portal was an effect provided by the production that was ingested by Rodeo FX. "When we started playing with Sonic, he was overly cartoonish. The face was squashed and stretched because the character allows us to do that, but at some point, they wanted something that had a bone structure so you couldn't deform it too much. The mouth is always slightly toward camera, but it cannot go both ways. There's a rule with the eyes. The one that's looking forward is always a tad smaller than the other one." Sébastien Francoeur, Visual Effects Supervisor, Rodeo FX. Jim Carrey plays the roles of father and son, occasionally having to perform against himself. "It was a basic split screen with no real overlap," Francoeur remarks. "The one we had was fairly easy. We had to make sure that the ground was aligned because for each shot, they had a double and switched the character so the lighting fits everything." Production provided a solid foundation. "Jeff shot us really nice plates and kept it as simple as possible," Marshall states. "I don't think they did motion control, so to speak, but they certainly kept things locked off and tried to keep lighting the same for the two passes. Jim would have to go and get makeup done to look like Robotnik Senior. They would have to lock down the set and wait for that process to happen." Rodeo FX had to make sure that the ground was aligned for the split-screen shot where Jim Carrey plays Robotnik Sr.
and Jr. Shadow is responsible for a shattering glass effect. "They had a tank but didn't like their fracture, so we needed to remove that one and rebuild the background," Francoeur states. "If you look into a reflection, you're going to see yourself squashed. It's not a full ray trace. It's a complete cheat. In that scene, we have two Shadows; one is by the camera and the other one looking at us. They are doing the same thing. A proper reflection is stretched and thin. It was not challenging, but we needed to make sure that it would work at the end of the day. We made the character squash and stretch a bit to help it out. Then we rendered the facets of the destroyed surface so we were able to offset that, so he's not at the same height. It's not a perfect reflection." A mountaintop collapses. "We had three seconds," Francoeur laughs. "It couldn't be longer because the joke wouldn't work. It was all about timing. We added some plate photography, but they were not happy with the mountains. It was shot in Iceland. The plate was looking great; however, there were no mountains to service it, and the vegetation over there was not the same as it would have been in Colorado. We struggled a bit, but at the end of the day it looks great." Rodeo FX created two versions of the Test Track Chamber in parallel where one was operational and the other dilapidated. Highly-detailed reflections were placed in the eyes of Tails to give the character more life. The rigging setup for the characters was provided by the production team to ensure they remained consistent no matter what vendor was creating them. Watch the VFX breakdown reel of Sonic the Hedgehog 3 from Rodeo FX, who worked on the destruction of the Test Track Chamber, which pulled everything in the environment toward the black hole created by Eggman (Jim Carrey) and Shadow (Keanu Reeves), from small pebbles and leaves to an entire mountain. Click here: https://www.youtube.com/watch?v=YxcP38TApOA
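Francoeur's point about ingesting the shared rig and exchanging assets over USD is what keeps a character identical from vendor to vendor. As a minimal sketch of that exchange pattern, using the real pxr Python bindings but hypothetical file paths and prim names, a shot stage can simply reference the production-supplied asset rather than rebuilding it:

```python
from pxr import Usd, UsdGeom

# Minimal sketch: a shot scene references a shared character asset over USD,
# so every vendor resolves the same model/rig/groom definition.
stage = Usd.Stage.CreateNew("shot_0420.usda")
UsdGeom.Xform.Define(stage, "/World")

sonic = stage.DefinePrim("/World/Sonic")
# Hypothetical path to the production-supplied asset package.
sonic.GetReferences().AddReference("assets/characters/sonic/sonic_asset.usd")

# Shot-level opinions (animation, layout) then layer on top without
# modifying the shared asset itself.
stage.GetRootLayer().Save()
```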
-
ADAM VALDEZ & MPC EMBARK ON A REMARKABLE JOURNEY FOR MUFASA: THE LION KINGwww.vfxvoice.comBy TREVOR HOGGImages courtesy of MPC and Walt Disney Studios.Considering Adam Valdez has been an instrumental collaborator on The Lion King, The Jungle Book, Prehistoric Planet and Mufasa: The Lion King, you could say that he has become the David Attenborough of CG wildlife. Working on Prehistoric Planet for the BBC was literally a David Attenborough narrated piece! laughs Adam Valdez, who directed some episodes in the series. Its true that were doing a lot around animals and the natural world.77 digital sets were created that included the familiar, such as Pride Rock, with previously unexplored environments, such as Mountains of the Moon and the Tree of Life.It was a real important mission for our director, Barry Jenkins, to understand from the beginning what you can do with a lions face. How do I direct this? What limits are there? Is there a different language that we have to speak? We did a test sequence right at the top of the show to work on those questions.Adam Valdez, Production Visual Effects SupervisorBarry Jenkins had never helmed an animated feature before Mufasa: The Lion King. Valdez remarks, The Underground Railroad had a fair bit of visual effects, so Barry came into this with an understanding of what visual effects are, but making an animated movie that is a musical is a whole other beast. You have a combo pack of new things. Full CG is quite different than augmenting footage. For sure, he had a steep learning curve. The process itself is quite constraining, and there are a lot of demands on the director. It was challenging for him to get up to speed, but by mid-project he could see things and understand where we were in the process and could gauge where it would go. At the same time, Barry probably saw from a creative point of view that it is best for a director not to get too into the weeds; however, try to stay on top of it and maintain perspective, which is hard for any of us. Barry was a great creative collaborator and is passionate about the movies hes making. Thats always the best.The production team for Mufasa: The Lion King was aware of the legacy of The Lion King and the responsibility of doing justice to the story and characters. We love those animated films, however, to start with them would be a mistake, notes Valdez, who was the Production Visual Effects Supervisor on the prequel that explores the friendship between Mufasa and Taka, which deteriorates and turns the latter into the arch-enemy known as Scar. You have to look at the script on its own and ask, What does this story want to be? Within the script, there is so much connection already, so youre not at risk of departing the franchise. But the sense of responsibility that comes with it is to be thoroughly versed in the mythology of the world of these stories so you know when you might break canon. You are also being given license to author additional mythology because the piece is in some sense reestablishing some of the unspecified things that you might have little hints of in other movies. You have to know how it fits, but you cant be, Were not going to do this because theres no proof of it anywhere else. 
You have been given some license to be creative.A library of 5,790 assets was created that featured everything from trees and plants to grass species.A balance had to be struck for Mufasa: The Lion King where the behaviors did not break away from the characters animalistic nature but, at the same time, they could be emotionally recognized by the audience. It was a real important mission for our director, Barry Jenkins, to understand from the beginning what you can do with a lions face, Valdez explains. How do I direct this? What limits are there? Is there a different language that we have to speak? We did a test sequence right at the top of the show to work on those questions. One of the things that we had to do was to increase the fidelity of our faces at the geometric level. This was partly because we felt that having subtlety in the wrinkles and shapes was going to speak that normal human language of the face better. Great attention was paid to fur shading. The material definition of how fur shades and shows light and shadows was essential for a furrowed brow coming through. The animator could see a furrowed brow, but once you put all of the little fur on it, its going to dampen it. We had little things all over that helped us to subtly convey emotion.118 unique photorealistic animals were created to populate the world.Approximately 1,700 artists, supervisors and production crew situated in London, Montreal and Bengaluru produced 1,500 full CG shots for MPC. Audrey Ferrara, the Visual Effects Supervisor at MPC, ran the crew and did dailies across three time zones, and Im saying, We need this and that, Valdez remarks. But part of the function of being the production visual effects supervisor is the consistency. You have a delegation tree of these people, who are the craft experts, and then they have leads leading smaller teams. It takes time on these big movies for the lessons to permeate down through those layers of teams, and usually you get gifts bouncing back up because individual artists produce cool, inspiring ideas that might be unexpected. Then, eventually, the director is finding the combination of things that work. It was a lot of trying to replicate your successes.Like with all filmmaking, you are art-directing everything. There are aesthetic choices. With animals, its a lot of negative space. Animals are busy movers until they pause because theres a moment of focus or concern. You can use it for dramatic effect. Its an anthropomorphic merging of what we associate with being thoughtful.Adam Valdez, Production Visual Effects SupervisorHumans tend to anthropomorphize animals. There are funny YouTube videos that people make that show how their dogs react to things, Valdez states. The best ones are when the dogs have done something naughty and the person is like, Did you do this? And the dog looks away or hides its head. You get this thing where people map themselves onto dogs and animals. Part of that is because we do recognize that they have a lived experience, and as animals ourselves we overlap with them. Ive also had dogs in my life and have liked animals since I was young, so I dont know if I anthropomorphize them or the more you study animals, the more you realize that there are a lot of commonalities.In order to achieve a realistic look for the fur, each lion had over 30,000,000 hairs.For the shot where Rafiki makes a snow angel, 620 million snow particles had to be simulated.Among the virtual production techniques was a new tool called QuadCap. 
"The QuadCap capture was for the virtual shoot phase and allowed Barry to direct performers and get stuff in the can that they could cut with and work out the staging," explains Valdez. "Sometimes, there were little idiosyncrasies in there that we keep until the end, but after that it's a keyframe-animated movie. You can usually find correlating movements from the motion capture and real-life references that give you hints to how the body mechanics and gravity are working that keep you grounded in reality. Like with all filmmaking, you are art-directing everything. There are aesthetic choices. With animals, it's a lot of negative space. Animals are busy movers until they pause because there's a moment of focus or concern. You can use it for dramatic effect. It's an anthropomorphic merging of what we associate with being thoughtful."
The FX team at MPC simulated an average of 100 million water particles per shot for the flash-flood sequence.
"[T]he piece is in some sense reestablishing some of the unspecified things that you might have little hints of in other [Lion King] movies. You have to know how it fits, but you can't be, 'We're not going to do this because there's no proof of it anywhere else.' You have been given some license to be creative."
Adam Valdez, Production Visual Effects Supervisor
Voice actors work in various ways. "Some voice actors are in the page and imagining they are the character," observes Valdez. "Other voice actors are gesticulating, waving their arms, and moving around the room because they need to be more physical in the way that they do it. You don't always have the exact physical performance from the voice actor that necessarily fits that scene because sometimes they might have recorded it months before and the edit has taken on a slightly different shape, and we need a new emphasis in the acting, but the voice still works. You try to turn it slightly for what that beat needs." Singing and spoken dialogue are not treated differently. "It's crazy how much we marry the sound and the face. We need them to be aligned. When they diverge, it bumps you out of the story. If they are belting a line and you have tiny mouth movements, it's going to be worse for you because you'll be like, 'Something is wrong with this.' There are some moments where they are belting the line, and if you were to look at an animated movie of a person or live-action of a real singer, their mouths are going to be big and wide open. But for the rest of it, you follow the same basic guidelines, which is you don't articulate every single, tiny syllable. Mostly what you have to pay attention to is how much breath power is coming through."
Real-time motion capture transformed human performances into digital lions, which allowed for scenes to be choreographed and blocked.
About 118 unique characters had to be designed, created and executed. "Doing a movie like this, you have a giant breakdown of everything you need to do," Valdez states. "There are definitely hero, secondary and background characters. The trick is, sometimes the camera ends up on a character that might not originally have been expected to be foreground, so you have a minimum and can always plus up something if it gets featured. For certain, the hero talking cast that needs to carry the movie has not only more time and money, but it has a different set of process steps. You stress test it, put it through its paces and put it into shots.
We don't even consider the puppets and the characters finished until we get the first batch of final shots with them that everybody has approved. Then you start running. You have a spreadsheet of hero to background, their uses, how often they show up and all of the things they have to do. Does this lion go into the water or roll in the snow? You take it case by case."
More than 1,700 artists, supervisors and production crew at MPC contributed to making 1,500 shots.
"It takes time on these big movies for the lessons to permeate down through [the] layers of teams, and usually you get gifts bouncing back up because individual artists produce cool, inspiring ideas that might be unexpected. Then, eventually, the director is finding the combination of things that work. It was a lot of trying to replicate your successes."
Adam Valdez, Production Visual Effects Supervisor
Creating a snow angel is the mandrill known as Rafiki, who not only does a snow angel but also climbs up onto Mufasa's back in the same shot. "We have snow all over their fur and a lot of interaction between furry characters," Valdez remarks. "MPC had to build a new fur system called LOMA. MPC has been doing furry characters for a long time, but there was a desire to update the system, and this was a good movie to do it. Part of the point was to bring together the tools that do fur with ones that do complex simulations like snow or water. By bringing them together, you can make them feel as if they were interacting in a genuine way, and it offered the ability to do new, complicated stuff. That shot has something like 600 million particle simulation points in it because Rafiki is actually in the snow as opposed to snow being on him."
Driving world-building was the fact that the characters embark on a journey.
The art of world-building is creating sets that feel natural even though they've been designed around what you need for particular sequences.
Driving world-building was the fact that the characters embark on a journey. "There was a notion at one point in pre-production about what that map would look like in Africa if we were to draw it," Valdez states. "If you line up the sequences, there is an arc to it. This culminates in one of the biggest dramatic passages, which is that they cross over a series of mountain peaks that are all snowed in. You start to have sequence alignments with different ecosystems inside the continent of Africa, which are sampled from different countries. Whether or not this was a plausible journey is immaterial. You are creating a figurative journey. Then you send local people out who know these areas in Africa, and they shoot pictures and bring back a whole scout's worth of inspiration and also source material."
The behaviors could not break away from the characters' animalistic nature but, at the same time, they had to be emotionally recognizable for the audience.
"Then you're in pure design territory," Valdez continues. "What do I need this environment to look like? But you're using all of that source material as a grounding truth. You create sets that feel natural even though they've been designed around what you need out of them for that sequence. Then you have library elements of specific foliage, rock structures, minerals and landscape pieces that are in the Unreal Engine phase. We had a beautiful set of sets in Unreal Engine; you shoot within them and end up with your selected takes. Now you know where the camera is actually looking.
Now the photorealistic process is to go back to all of those original sources and model up trees, bushes and grasses. There is a multilayered approach to environments where you're combining those library components with hand-sculpted terrain, textures and multiple layers. They're even doing things like simulating where water would flow and how items would build up in the cracks based on waterflow. You get these realistic erosion patterns, which is all part of imbuing it with a naturalism that is also art-directed. You're getting hints everywhere you look that are inspired by real places and the little rules of how that ecosystem evolves and looks weathered, old and has a history built into it. The acting and the world-building in a movie like this are the two massive pillars that everything is hanging upon."
Animals are busy movers until they pause because there's a moment of focus or concern. This is a natural behavior that can be used for dramatic effect.
Driving the visual storytelling was the script, and asking, "What does this story want to be?"
Watch how MPC's Character Lab brought Taka to life from the ground up for Mufasa: The Lion King, and how MPC's in-house grooming system, LOMA, helped achieve photorealism in the appearance of Taka and 118 unique animals. Click here: https://www.youtube.com/watch?v=TDHfcEZKd5w
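Valdez's description of environment dressing, where simulated waterflow decides where debris and plant cover accumulate, can be illustrated with a toy sketch. The Python/NumPy code below is purely illustrative and assumes nothing about MPC's actual tools: it walks random droplets downhill over a small heightfield to build a rough flow-accumulation map, then scatters hypothetical "debris" and "grass" library instances with densities biased by that map.

# Illustrative sketch only (not MPC's pipeline): let a rough waterflow map
# bias where library assets are scattered over a hand-sculpted heightfield.
import numpy as np

rng = np.random.default_rng(7)

def flow_accumulation(height, iters=200):
    """Crude flow accumulation: drop random droplets, walk each one to the
    lowest neighbouring cell and count how often every cell gets visited."""
    h, w = height.shape
    acc = np.zeros_like(height)
    for _ in range(iters):
        y, x = rng.integers(1, h - 1), rng.integers(1, w - 1)
        for _ in range(64):  # walk one droplet downhill
            acc[y, x] += 1.0
            window = height[y - 1:y + 2, x - 1:x + 2]
            dy, dx = np.unravel_index(np.argmin(window), window.shape)
            ny, nx = y + dy - 1, x + dx - 1
            if (ny, nx) == (y, x) or not (0 < ny < h - 1 and 0 < nx < w - 1):
                break  # reached a pit or the border
            y, x = ny, nx
    return acc / acc.max()

def scatter_assets(density, count=500):
    """Sample instance positions with probability proportional to a density map."""
    flat = density.ravel() / density.sum()
    picks = rng.choice(flat.size, size=count, p=flat)
    ys, xs = np.unravel_index(picks, density.shape)
    return list(zip(xs.tolist(), ys.tolist()))

# A toy heightfield standing in for sculpted terrain.
yy, xx = np.mgrid[0:128, 0:128]
height = np.sin(xx / 17.0) + np.cos(yy / 23.0) + 0.1 * rng.standard_normal((128, 128))

wetness = flow_accumulation(height)
debris_positions = scatter_assets(wetness)        # debris collects where water runs
grass_positions = scatter_assets(1.0 - wetness)   # grass favours the drier slopes
print(len(debris_positions), "debris instances,", len(grass_positions), "grass instances")

In a production pipeline the same idea would feed far more elaborate scattering and simulation systems; the point here is only how a simple flow map can drive art-directable placement.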
GOING FULL THROTTLE WITH COLORS FOR DAN DA DAN
www.vfxvoice.com
By TREVOR HOGG
Images YT/S,D courtesy of GKIDS.
Capturing the spirit of the manga through an off-the-wall hybrid of aliens, yōkai [supernatural beings and phenomena in Japanese folklore] and high school romance is the Crunchyroll, Netflix and GKIDS anime series Dan Da Dan. The original ink drawings of Yukinobu Tatsu have been reinterpreted by director Fūga Yamashiro and Science SARU with a color palette that ranges from monochromatic to muted pastels depending on the theme of the scene in question. The story revolves around a dare between two classmates that results in one becoming possessed by an evil spirit while the other is abducted by intergalactic beings that accidentally unlock her psychic abilities. The unlikely duo face a number of adventures together that cause them to utilize their newly acquired powers, and in the process of doing so they fall in love.
"There are only so many colors in the world. It is intense when it comes down to Dan Da Dan because when you can put one color on the screen, your options suddenly become limited. The approach right now is to go full throttle every time, and we'll see!"
Sophie Li, Color Script Artist, Science SARU
Color Designer Satoshi Hashimoto and director Fūga Yamashiro shared an understanding as to what the base colors should be for each of the characters appearing in the 12 episodes.
Handling the color design were Satoshi Hashimoto and Makiho Kondo, with the former having worked on projects such as Spy x Family, Death Note and Paprika. "First of all, I want to talk briefly about the pipeline order in Japanese animation," states Satoshi Hashimoto, Color Designer at Science SARU. "Basically, what happens is that the colorists, background artists and animators are separated. Then, as things go, first of all the director decides the direction he wants to go in, and he orders the backgrounds based on that. He gets those backgrounds back, then he brings them to us and the color team, and talks about what he wants to do. Then, we make the colors using those backgrounds, based on his and our own ideas."
Working on a subliminal level is the color script. "It's more like music," notes Sophie Li, Color Script Artist at Science SARU. "It's underlying but conveys emotion, an energy and sometimes story through color. When I start thinking about the colors for a specific show, what is important is to try to understand the tone of the project and the director's vision; whatever references they are using and how the color in those references works, and try to apply the logic to this new show that we're making. That's how the color script works. It also involves a lot of figuring out light directions. Sometimes, light and dark values in a scene also come in. When I do the color script, I try to lead with the emotion of the scene. I want the audience to know where we are in the story, and they will remember what the color combination is on the screen. After watching the show, they will know this is the beginning, middle and end of the fight. It progresses clearly, which is something I hope to achieve in the color script."
A graphic panel recreated from the original manga that has been reinterpreted with color.
"It's more like music. [The color script] is underlying but conveys emotion, an energy and sometimes story through color.
When I start thinking about the colors for a specific show, what is important is to try to understand the tone of the project and the director's vision; whatever references they are using and how the color in those references works, and try to apply the logic to this new show that we're making. That's how the color script works."
Sophie Li, Color Script Artist, Science SARU
Determining the color palette for each scene is whether the adversary is an alien, yōkai or a combination of the two. "I have been with this production since pre-production, so when we were figuring out the direction of the series, I was doing things like image boards," Hashimoto explains. "Even at the early stages, the director had a strong idea as to what he wanted to do with the color direction for the yōkai and aliens. The idea has always been that the yōkai, for example, Turbo Granny or Acrobatic Silky, are warm colors, like a reddish or pinkish for the characters in question. Whereas the aliens are the opposite of that. They have colder colors to them; for example, the Serpo aliens that you see in the first episode are bluish. When Okarun is cursed by Turbo Granny, he gets her red color, and you can see his hair and the tips of his clothes turn partially red. Those were all requests from the director." A retro aesthetic was adopted for the everyday scenes because of the frequent references to Japanese pop culture in the manga. "A lot of Japanese animation nowadays is focused on being flashy, vivid and high saturation all of the time. But this time we wanted to go with a slightly less saturated, more relaxed look for the screen and save that vivid, high-saturation, more modern anime look for the action scenes to make them pop."
Cool colors were used to distinguish the alien characters, with blue designated for the Serpo.
"Dan Da Dan is crazy!" laughs Li. "It conveys so many genres. It was so free even though there were references given by the director, like for his fight scene, which was inspired by a particular fight in Ultraman. It's important to put myself back in the box and say, 'This is heavily influenced by older filming techniques and that kind of feeling.' I think about it as if I'm using a real lightbulb somewhere in the scene and non-digital older colors, too." A lot of time was spent developing the color language for Episode 101. "When the color is dyeing the whole scene, I try to use lighting to balance it out so you know Momo is under this spotlight and what's happening. When it's a running-around sequence, the color balance is always hard to keep, like Episode 104 with the big fight with the crab," Li says.
When Momo unlocks her psychic powers, she is given the theme color of emerald green.
"Some of the scenes look like actual ink drawings taken from the manga, such as the Flatwoods Monster fight, and this applies to some of the other colors that we use as well, especially for black and white, where it gets a lot harder to depict depth," notes Makiho Kondo, Color Designer at Science SARU. "The characters are put into a similar tone, and you have to rely on brightness. It was difficult to construct the screen in a way that, even with those colors, it would still have the depth the director was looking for. We ended up having a lot of back and forth with him where he would say, 'We could make this darker. We can make this whiter or brighter.' We were doing a lot of individual smaller changes to make sure that we felt the level of depth between the characters or between the characters and the background.
Then, the ideas about the insert colors, for example, the red in the scene when Okarun uses his power, and in his hair and face lines; director Yamashiro was particular about having that stand out in any scene he used it in. The black and white was particularly good for helping us have the red stand out."
The emerald green of Momo and the red of Turbo Granny-possessed Okarun collide during the alien abduction sequence in Episode 101.
Tones of gray are explored for the transformed skin of Okarun, while different shades of brown are considered for the eyes of Momo.
Each of the transformed states of Okarun had to be color designed, with red highlights a common throughline.
When Momo and Okarun defeat Turbo Granny, all the spirits of the deceased girls that have fused with her are unleashed like a watercolor painting.
"The characters have their colors, and we had to take that into account. For example, the auras; if there are going to be effects placed on this in compositing, how is that going to change the color? What are shadows going to look like? A lot of it was working with both the director and the compositing team to decide on colors that will still look good even placed under compositing effects or pushed in a different direction. There was a lot of trial and error back and forth to make sure that the final result was what everyone was looking for."
Satoshi Hashimoto, Color Designer, Science SARU
The color treatment of effects was determined by the scene itself. "For the regular type of effects, like explosions in normal scenes, we would go with a baseline normal color," Hashimoto remarks. "But we paid a lot of attention to how the character effects should be influenced by the image colors that a character has. For example, Okarun is using some sort of power, and we flash some red in there. Another big one is Momo's psychic power; we were using her specific colors for that as well." Careful attention had to be paid to how digital augmentation would impact the desired color. "The characters have their colors, and we had to take that into account," Hashimoto states. "For example, the auras; if there are going to be effects placed on this in compositing, how is that going to change the color? What are shadows going to look like? A lot of it was working with both the director and the compositing team to decide on colors that will still look good even placed under compositing effects or pushed in a different direction. There was a lot of trial and error back and forth to make sure that the final result was what everyone was looking for."
A color-inverted environment was created to emphasize the growing psychic powers of Momo.
To understand the motivations of characters, backstories are a significant part of the storytelling, and nothing is more tragic than what led to the existence of Acrobatic Silky. "Of course, we had a color script for that which helped to decide the overall direction," Hashimoto explains. "The way I was thinking about it was that we have the action scenes, and the strong pink was based on the grudges Acrobatic Silky was holding; however, when we were dealing with the mother, I wanted to represent the free parts of her life, the times she was spending with her daughter and her truer personality by using the opposite color to pink, which in this case would be a blue.
I wanted to create a gap between her true feelings and sadness; for example, her life with her daughter versus the pink you see during the action scenes when she becomes a monster."
When Turbo Granny is winning during the fight with the Serpo aliens, her red color becomes more vibrant and dominates the frame.
Piecing together the frames when the psychic powers of Momo are awakened, as well as a black-and-white moment from the Flatwoods Monster fight.
The opening title sequence provided an opportunity to experiment with the colors free from the storytelling.
The main purpose of the opening title sequence is to be fun to watch and make the viewer interested in finding out what happens next.
The spirit of Turbo Granny gets transferred into a doll that poses with Aira and Jiji, who both get possessed by yōkai and inherit their abilities.
Looming over the narrative are the missing testes of Okarun, which have been turned into golden balls by Turbo Granny and are desired by a number of opposing forces for different reasons.
Color Script Artist Sophie Li was given the freedom to come up with the character closeup shots that begin the opening title sequence.
The color script for when Okarun gets possessed by Turbo Granny in the tunnel and Momo gets abducted by the Serpo aliens, as well as the fight sequence that occurs when the two stories intersect.
Black and white prevail when Momo uses her psychic powers to find and grab hold of Turbo Granny's aura. "I had a strong idea already," Li recalls. "It was more like making concept art. This particular shot was like x-ray vision and simplified lines that inverted the color. I was testing out how the grading of the hand works to make it simple. The main point of that sequence was to make it readable, and nothing matters besides her hands, which are her theme color of turquoise, and the color of the flame of each character. Everything else is black, grey and white lines; that remains consistent every time she uses her powers." A massive crab goes through different shades of red as it attempts to catch Okarun and Momo. "I tried to think about the different elements and the part of the fight where you have to let your eyes rest a little bit. When the crab is boiled, it is more orange, and when Turbo Granny goes into the final form with the crab, it is more of a darker red.
The progression shifts subtly in the show, but when I do the color script, it is more condensed."
Each fight has its own theme color, with the Flatwoods Monster getting a black-and-white treatment and the red associated with Okarun serving as an accent.
During the crab chase, the color script reflects who has the upper hand at a particular point of the narrative.
The color script for when Momo is using her psychic powers to find the whereabouts of Turbo Granny by locating her aura, which resembles a red flame.
Reflecting all the Japanese pop culture references in the manga, a nostalgic retro feeling was adopted for the daily-life scenes.
Turbo Granny does not always retain the same red color; at times orange takes over depending on her mental state.
Momo and Okarun during a moment in the black-and-white fight with the Flatwoods Monster in Episode 102.
Emerald green is the theme color for Momo and is represented by her dangling earrings.
Light and dark values are essential when directing the eye of the viewer, especially when a single color overpowers the frame.
The typical vivid, high-saturation aesthetic prevalent in anime was avoided for the everyday moments.
Pink was the theme color of Acrobatic Silky, with the black hair serving as a reminder that she is an evil spirit.
The backstory of Acrobatic Silky is tragic, and blue was used to contrast with her theme color of pink for the moments where she was a loving mother to her daughter.
A dramatic moment occurs when Acrobatic Silky dances on a rooftop where the light is celestial.
Adding to the drama of Acrobatic Silky fading away is the muted, rusted, decayed background that resembles an abstract pastel sketch.
Dan Da Dan has been both challenging and rewarding. "There are only so many colors in the world," Li acknowledges. "It is intense when it comes down to Dan Da Dan because when you can put one color on the screen, your options suddenly become limited. The approach right now is to go full throttle every time, and we'll see!"
DIGITAL DOMAIN ALWAYS KNEW IT WAS AGATHA ALL ALONG
www.vfxvoice.com
By TREVOR HOGG
Images courtesy of Digital Domain and Disney+.
In the WandaVision spin-off, Agatha Harkness, the main antagonist of the Disney+ miniseries Agatha All Along, must travel the Witches' Road to regain her powers. Created by Jac Schaeffer, the dark supernatural fantasy has been digitally enhanced by Production Visual Effects Supervisor Kelly Port and main vendor Digital Domain, which was responsible for the magic effects, the Broom Chase, and Agatha's death and ghost. The visual effects work had to feel as real as possible, which meant keeping things grounded and limiting the amount of fantastical CG, like camera moves that could not be achieved in real life.
Teen gets his mouth digitally sealed shut.
"The Broom Chase was filmed by drones going down each straightaway. The problem with the Broom Chase was that we were supposed to be cruising through this forest, and the drone only has so much runway before it has to make a turn. There was an array setup on a drone of three 8K cameras giving us coverage dead ahead and off to the left and right. We could then stitch that back together to give ourselves a large canvas to work with as to what we are going to see when we're coming down the road."
Michael Melchiorre, Visual Effects Supervisor, Digital Domain
"That was a consistent directive throughout the entire show," states Michael Melchiorre, Visual Effects Supervisor at Digital Domain. "What production tried to do on set was to shoot as much of it practically as possible. When they're walking down the road through multiple episodes, that was an enormous set build in a warehouse, and a lot of those shots were subtle extensions in the background." The Witches' Road was treated as a member of the cast, with attention to detail ranging from the density of the leaves to the shape and height of the trees and the curvature of the road. "There is one way through, and you can only go forward. We would use our digital forest to fill in anywhere it felt like, 'Maybe you could have turned off that way.'" The environment was built to withstand any possible camera angle. "There were a couple of shots where we opted for our full CG road, and they were indistinguishable from the plate versions. You could go right down to the ground level if you wanted to."
For safety reasons, there were moments where fire simulations had to be relied upon.
Despite the massive size of the Witches' Road set, it was still not long enough for the Broom Chase. "You had a road that would go straight, turn left and come around in a circle," Melchiorre explains. "The Broom Chase was filmed by drones going down each straightaway. At the same time, the set pieces could be rearranged. There were trees on wheels that would be rolled in, and they would run the drone down again. You get these subtle variations. So many of the trees are gnarled and distinct that if you rotated them a little bit, it almost gave you completely different-looking trees because there weren't straight trunks. The problem with the Broom Chase was that we were supposed to be cruising through this forest, and the drone only has so much runway before it has to make a turn. The road was always straight ahead, meaning they weren't making hard lefts or rights on their brooms. There was an array setup on a drone of three 8K cameras giving us coverage dead ahead and off to the left and right. We could then stitch that back together to give ourselves a large canvas to work with as to what we are going to see when we're coming down the road.
For the speed of the Broom Chase, you lose real estate fast. Once you rephotographed those large 8K tiles with a camera that matches the corresponding lens, you gain a little bit more real estate, but that's where we also built our fully digital recreation of the road that was used to extend and enhance."
Tesla coils and lightning were used as inspiration for the magic.
The purple magic of Agatha was fun to work with as it provided the greatest creative latitude.
Despite the massive size of the Witches' Road set, it was still not long enough for the Broom Chase.
Witches on flying brooms are a staple of the fantasy genre. "On set, there was a small saddle that was covered by the costumes for the most part," Melchiorre explains. "We had some cases where it had to be painted out. Extensive previs was done to try to figure out what the sequence needed to look like, so there was an intent to shoot the plates to approximate that as much as possible. It worked out in a lot of cases, but for other ones, afterwards we went, 'This camera angle would be better.' Once we had our plates and backgrounds sorted, knowing what this shot is going to be down the road, that's when we went to our animation team. We had the animators use our recreation of the forest to match what the chosen array footage was doing. Each member of the Coven was shot on a bluescreen hanging by a harness, acting and weaving as they were flying their broom. Compositing quickly extracted all of those characters to give the animators something to work with. The animators then took those and moved them down through our CG forest and matched the orientation of the plates we were using. That helped us keep the scale, position and depth of all the Coven members consistent between themselves and the forest trees. Otherwise, we could run into issues where their depth is based on what we're seeing in screen space."
A signature visual element is the blood moon.
Tesla coils, lightning and WandaVision were points of reference for the magic effects. "The magic was a big compositing development item," Melchiorre states. "There was a scene in WandaVision where Agatha uses her powers to suck the life out of other witches, so we know what these things look like. The challenge was to replicate that and have six different versions of it, plus there is a gaggle of generic witches we see Agatha suck the power out of over the years. They were all various colors and strengths and had to be slightly different. On WandaVision, when we did Agatha's powers, the effects team ran multiple simulations. For this show, it was calling back to the Ghostbusters beams or the Emperor's lightning from Star Wars: Episode VI - Return of the Jedi. We wanted to lean on those techniques, so all the magic that we see in Agatha All Along was created in Nuke with a combination of particle generation, Tesla coil and lightning elements, using the Higx Point Render plug-in in Nuke to produce noise patterns that we could then rotate around to some extent and get perspective on. There were instances where we wanted a bolt of energy to hit one finger [then transfer] to another. Compositors went in and literally drew lightning energy going from one finger to another."
For the ghost effect, the face was the most opaque part of the body to retain the performance of the actress.
During the climax of the series, Teen gives his powers to Agatha. "Agatha's powers grow out from her like tentacles, and as it grabs Teen's powers, it would alter his blue to her purple," Melchiorre remarks.
"When we did that, it would reverse the direction of the beam and suck back towards her. There were some interactive shots, but not for the most part. There was lots of elbow grease and compositing to relight parts of the plate and light up the hands and face, as best we could, to help tie to the proximity of the magic going by or hitting them. You'll see a number of shots where Agatha is being fired upon with lots of beams hitting her shoulders or torso. We always made sure that the intensity of the beam and the contact point were in sync with each other. If a bright piece of beam came in, the piece that hit her would always be bright and dissipate out as a darker piece of beam that came behind it. Many of those were almost frame-by-frame dialing to make sure that contact was tight. Much of it was achieved with interactive light on the character being hit. We did a little bit of distortion around the contact points to give a subtle movement to it. But as far as cloth simulations or replacing cloth, that wasn't one of the driving forces behind this. We wanted to keep it as tied to the plates as possible."
The spectrum of magic colors included purple for Agatha, blue for Teen, green for Rio Vidal, orange for Alice Wu-Gulliver, yellow for Lilia Calderu and pink for Jennifer Kale. "Purple was fun to do because it charges up the most and gets the wildest," Melchiorre notes. "Most of Agatha's stuff was done at night, so the purple popped against the night sky. When she was taking in all the power from Teen, her stuff goes well beyond anybody else's. Agatha is the character everyone loves to hate. Yellow was the toughest to get to show and feel hot, lit and bright in Lilia Calderu's environment, which was more daylit; we only had a few shots of her magic. Each of the colors has a bright white core. The white core helps give the magic a heat and brightness that make it feel more like light instead of an oversaturated solid color."
A shift in the color grading creates a more ominous atmosphere.
"The biggest thing that we had to focus on were the artists, especially the compositors. Coming from a compositing background, knowing the production's goal, which was to try to use several techniques and methodologies from 30 to 40 years ago, [the challenge] was how can we stack our team with people who can do that? Nothing against younger compositors now, but they have never had to use practical elements and make something out of nothing."
Michael Melchiorre, Visual Effects Supervisor, Digital Domain
Desiccation occurs to the recipient of the power-sucking. "This was tying back to WandaVision where Agatha does it to the Coven and Wanda," Melchiorre remarks. "When she is sucking their powers, the victim's face and skin dry out, suck in, wrinkle and desiccate. On this show, we had a couple of hero ones, like when Agatha dies and Teen gets his powers drained from him. But we also had a Witches Through Time sequence toward the end of the series where we see how Agatha has been spending her time over the last hundreds of years conning other witches into unknowingly giving away their powers. That meant another six or eight witches who needed to be desiccated. Back on WandaVision, it was one or two hero characters with a full 3D head rebuild that could be desiccated and animated. We did that same path for Agatha and Teen.
There were full hero replicas of their heads that could be desiccated and animated and be shrunken and wrinkled as they lost their powers."
The desiccation effect is applied to Agatha.
"[A]ll the magic that we see in Agatha All Along was created in Nuke with a combination of particle generation, Tesla coil and lightning elements, using the Higx Point Render plug-in in Nuke to produce noise patterns that we could then rotate around to some extent and get perspective on. There were instances where we wanted a bolt of energy to hit one finger [then transfer] to another. Compositors went in and literally drew lightning energy going from one finger to another."
Michael Melchiorre, Visual Effects Supervisor, Digital Domain
Not all of the actresses portraying the generic witches had scans done of them. "For those, we opted to do a pseudo 2D/3D approach where we had a Gen Woman, our generic in-house woman rig, and we had one generic desiccation setup on that body and head," Melchiorre explains. "With that, we gave it the same layers and AOVs that we would have had on one of the hero heads, but done to a generic face. Unwrapped textures were given to the compositors, who then wrapped the textures back onto generic geometry that matched each one of the sub-witches as closely as we could. Each one had a slightly different nose, or the eyes were more spread out. They were tailored to each individual actress in compositing, then that generic set of desiccation tools, or layers that we used on our hero ones, would be transferred to each of the other witches. Compositing would take it from there and dial those in. It was more of a 2D approach, so we had to fight some things where it felt more like a texture on the face at times. We did a lot of that with light and shades to make it feel like there was more depth to the wrinkles than there actually was."
The emergence of purple flowers and mushrooms indicates a rebirth for Agatha.
Rio Vidal is revealed to be Death.
Ghostbusters and Poltergeist influenced the ghost designs for Agatha and her mother, Evanora Harkness. "We went through numerous iterations of how much is too much because everybody has in their mind, 'This is a ghost,'" Melchiorre notes. "There are a couple of things you must have. Ghosts usually have some white flowy mist; it's translucent or sometimes transparent, but you need to be able to read what you're looking at. If you go too transparent, the background starts to take over. You can't be completely transparent because the form doesn't affect what's behind it at all. The key is being able to read the performance. Agatha's ghost form was an all-2D approach. She was shot on set in a particular environment, which was great because we got her lighting to help us tie into the photography. We had to roto her out completely, rebuild the room behind her, and we laid her back into the plate through a number of luma keys and soft mattes. We subtly bring it back piece by piece to build the opacity of her ghost from her face outward. The face was always the most opaque part, but you could see through it. We kept the eyes, nose and mouth relatively intact. We never wanted to lose her eye direction. As we go out toward her arms and extremities, her gown gets softer, and that's where we tried to play up the transparency and translucency. The other thing people have in mind is that ghosts move slowly, so a lot of times the on-set fans were moving the cloth and hair too much. Compositors then re-timed the flowing fabric and reincorporated that back into her ghostly form.
We used transparency to help bridge that gap between the fast-moving fabric and the slower extremities that we wanted to get. It was quite a puzzle to put together."
Careful attention was paid to the contact points when the power-sucking effect was applied to characters.
The Witches' Road environment was an incredibly detailed digital asset, down to the density of the leaves, the shape and height of the trees, and the curvature of the road.
Agatha sacrifices herself to save Teen. "Agatha uses her powers to suck in and take the power from Rio, who we find out is Death," Melchiorre explains. "We wanted this sequence of death and rebirth to happen over two or three shots where Agatha hits the ground, and as she lies on the ground her body starts to rapidly decay and break apart. But at the same time, the grass, flowers and mushrooms of the backyard start to grow and cover her. The mushrooms also die, and from there we have a rebirth of her purple flowers to signify her rebirth as a ghost. What we were aiming for was the image of a nurse log in the forest, a decaying log with mushrooms and moss growing on it, but at the same time it's beautiful to look at. There was talk early on about how we could do that in 2D. Luckily, that was abandoned quickly. The burial mound is a full CG simulation down to single blades of grass, little pebbles and dirt falling off. Our artists had control over the speed and size of every mushroom that comes up and dies."
In the end, staffing was the major challenge. "The biggest thing that we had to focus on were the artists, especially the compositors," Melchiorre states. "Coming from a compositing background, knowing the production's goal, which was to try to use several techniques and methodologies from 30 to 40 years ago, it was how can we stack our team with people who can do that? Nothing against younger compositors now, but they have never had to use practical elements and make something out of nothing, to some extent. There is a lot of per-shot work that needs to be done to make these elements work, like the magic. It was all set up in Nuke, but each individual artist had to marry the elements in a way that is a bit different from getting another simulation from effects."
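The beams Melchiorre describes were drawn and dialed per shot by compositors in Nuke; as a rough illustration of the general idea rather than Digital Domain's setup or the Higx Point Render plug-in, the Python/NumPy sketch below generates a noise-jittered, lightning-style polyline between two contact points via midpoint displacement, and derives a contact-flash value from the bolt, echoing the way beam brightness and contact hits were kept in sync frame by frame.

# Illustrative only: a generic midpoint-displacement "lightning" polyline between
# two points, the kind of noise-driven energy bolt the article describes.
# Not Nuke, not Higx Point Render and not Digital Domain code.
import numpy as np

rng = np.random.default_rng(42)

def lightning_polyline(start, end, depth=6, jitter=0.25):
    """Recursively displace segment midpoints perpendicular to the segment."""
    pts = [np.asarray(start, float), np.asarray(end, float)]
    for d in range(depth):
        new_pts = [pts[0]]
        scale = jitter * (0.5 ** d) * np.linalg.norm(pts[-1] - pts[0])
        for a, b in zip(pts[:-1], pts[1:]):
            mid = (a + b) / 2.0
            seg = b - a
            normal = np.array([-seg[1], seg[0]])
            norm_len = np.linalg.norm(normal)
            if norm_len > 0:
                mid = mid + normal / norm_len * rng.normal(0.0, scale)
            new_pts += [mid, b]
        pts = new_pts
    return np.vstack(pts)

def contact_flash(bolt, falloff=0.15):
    """Brightness at the contact end, driven by how much of the bolt sits near it,
    so a hit can be dialed in sync with the beam intensity frame by frame."""
    end = bolt[-1]
    dist = np.linalg.norm(bolt - end, axis=1)
    return float(np.exp(-dist / (falloff * dist.max() + 1e-6)).mean())

bolt = lightning_polyline(start=(0.0, 0.0), end=(10.0, 2.0))
print(f"{len(bolt)} points, contact flash intensity {contact_flash(bolt):.3f}")

In practice this kind of curve would only be one element layered with glows, cores and interactive light on the plate, but it shows how a simple displaced polyline can stand in for a hand-drawn bolt of energy.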
CELEBRATING THE INVENTIVENESS OF WALLACE & GROMIT: VENGEANCE MOST FOWL
www.vfxvoice.com
By TREVOR HOGG
Images courtesy of Aardman Animations and Netflix.
What began as a National Film and Television School graduation project called A Grand Day Out, in which an inventor named Wallace and his faithful canine companion Gromit travel to the Moon to sample some cheese, has become a cultural institution. "It's incredible," reflects director Nick Park. "I've got to constantly pinch myself. For years it was slow growth. It didn't happen overnight. Recently, we went to the States, and to see security guards demanding to open the box to see the puppets, then profess they only wanted to meet Wallace and Gromit. Just think how many people are watching now with Netflix reaching all over the world. It was a student film when it was started. I was always ambitious in my own way. If it was on the BBC at Christmas, that would be lovely."
Director Nick Park wanted to create a fast stop-motion sequence, which led to the classic toy-train chase in the Oscar-winning short The Wrong Trousers.
"[W]e've pushed Wallace's and Gromit's relationship more than ever before. It used to be a joke that Wallace never learns anything. Actually, in this film he does look at himself a little bit. We have pushed him emotionally in a very British way. What that does is open up a bigger feeling within the film, and hopefully by the end, it fills your heart with warmth."
Merlin Crossingham, Director
One of the favorite villains of the franchise returns in Wallace & Gromit: Vengeance Most Fowl after being thwarted by the title characters in The Wrong Trousers. Feathers McGraw is a tech-savvy, jewel-thief penguin determined to reclaim the blue diamond and frame the duo responsible for his arrest. As before, an invention created by Wallace gets hacked by McGraw. This time around, the mechanical trousers are replaced by a computer-programmed, personal-assistant gnome christened Norbot. Story ideas are not limited by what is feasible to do practically. "When we were working with Mark Burton, the scriptwriter, we said, 'Let's just talk about the ideas. If we want to go big, let's go big. If we don't, then let's not,'" director Merlin Crossingham states. "It is important for a director to feel that you have the freedom to go wherever you want to go to tell the story."
Concept art for the Cellar Cauldron was inspired by the Orc-making factory featured in The Lord of the Rings trilogy.
Capturing as much in-camera as possible was the goal. "We would begin with the stop-motion process at the heart of any sequence," Crossingham explains. "If we couldn't use stop motion, what's the solution? Or, if we couldn't get the shot in one pass, is it a composite of multiple layers or different scales, or the use of digital set extensions, DMPs or full-on visual effects? We didn't say we would draw the line. We only wanted to use whatever was the best to tell our story. We ended up using some of the latest cutting-edge fluid simulations and marrying them with classic stop motion. It's a lovely harmony between the two." Even with the opportunity to do anything digitally, that was not a desirable option. "At the heart of it, Wallace and Gromit is clay," Park states. "It's always got to be clay; the actual character work. It's part of its beating heart and is the ethos of everything being handmade. There are some things you can't do to expand the world. Even when we do that, it's got to fit the stop-motion feel. It's got to be in style.
If it feels like you're putting live-action water into a stop-motion environment, it doesn't work. The visual effects guys were great and versatile with all of that."
"The lighting is a character because at every point, it not only tells the story but embodies the characters. It seems like an obvious thing, but it doesn't always happen in the films that we've made. They hit a sweet spot for us on this one."
Nick Park, Director
Park has described Wallace & Gromit: Vengeance Most Fowl as Cape Fear with penguins. "It's got a comic conceit behind the idea," Park reflects. "It's not like I would think of a lovely story and then add the gags. It's like a wish list of things we would love to stay, and we will do what we can to twist and turn to get those things in as long as they don't feel too contrived. Everybody has a limit!" A homage to The Matrix occurs when the green streaming computer code from the computer screen is reflected on Norbot, the personal assistant invented by Wallace that gets hacked by the vengeful Feathers McGraw to aid in his escape from his zoo incarceration. "That was one Dave Alex Riddett, our Director of Photography, was determined to try to do practically, and he found this projector that not only could focus close and small but could stay on for weeks on end and not fluctuate in picture position or color," Crossingham remarks. "It was fantastic."
A character painting of Norbot by director Nick Park.
An ongoing Wallace and Gromit theme is technology being invented with good intentions but being subverted for nefarious purposes. "In a way, it was part of the premise of this film; our angle of attack on the story was that things have reached a point in Gromit's life where it's getting too much," Park remarks. "Wallace believes that tech can solve all problems, even emotional ones and relationships. It's seriously not working. It's not so much evil, because obviously Wallace is well-meaning, but it's that kind of deluded state. The invention of Norbot goes from insult to injury for Gromit. There's always an outside evil force in the Wallace and Gromit films where it's not just going wrong, because it's got to have more motivation and incentive than that." Wallace has made a technological breakthrough with Norbot. "This is the first time that Wallace is actually efficient because normally his inventions are not," Crossingham observes. "As Wallace says, it's his best invention ever. Be careful what you wish for."
Two pivotal antagonists are Feathers McGraw and Norbot, with the former being an established character and the latter a new addition to the franchise. "With Feathers McGraw, it was about continuing where he had gotten to at the end of The Wrong Trousers so it felt like we all recognized him as the same evil penguin," Crossingham explains. "It was about his stillness, and the way he moves is a glide rather than a wooden, wobbly penguin comedy walk. Feathers is very serious. Feathers only moves a little bit, and when he does, because he's so graphically simple, you really see it. It was about finding where those sweet spots are and what is the absolute minimum to get maximum effect. That is challenging for an animator because an animator loves to make things move. It was similar for Norbot as well. For the first prototype of Norbot that the modeling department made, we had the eyebrows move, and as we tested the animation, he looked too human.
We started paring it back and solidifying his eyebrows, not moving his eyes, and making his jaw like a ventriloquist dummy's; the more we did that, the more he felt robotic. It also meant we could play that balance between an ever-so-slightly sinister feeling, yet smiley, charming and happy at the same time."
Conceptualizing the train and aqueduct chase sequence that takes place in a valley.
"We only wanted to use whatever was the best to tell our story. We ended up using some of the latest cutting-edge fluid simulations and marrying them with classic stop motion. It's a lovely harmony between the two."
Merlin Crossingham, Director
Lighting was viewed as a character in its own right. "Feathers moves so subtly and so little relative to the other characters that we have to use all of the filmmaking techniques we have available to us," Crossingham observes. "You can keep Feathers still but push the camera in on him slowly, make sure that the lighting has the atmosphere, and the sound. For the lighting, we specifically looked at film noir and Alfred Hitchcock's Rebecca, particularly the character of Mrs. Danvers and the way that she would emerge or retreat into the shadows. The lighting is a character because at every point, it not only tells the story but embodies the characters. It seems like an obvious thing, but it doesn't always happen in the films that we've made. They hit a sweet spot for us on this one."
Concept art for the scene where the submarine emerges in the zoo enclosure, with Feathers McGraw channeling his inner Ernst Blofeld.
Blinks and hand motions are utilized as exclamation marks. "It slightly differs among the various characters," Park states. "With Feathers McGraw, it was minimal. Just a blink now and then says a lot. It's all about using a little to great effect with the music and camera moves; the audience believes that Feathers is thinking and the cogs are turning in his head. The Norbots are an extension of Feathers McGraw. Sometimes, it's too much to blink even once. Sometimes, it's far more sinister to look. Sometimes, we put blinks in and took them out again because it's too much. It's all acting. The animators are actors. Actually, we act a lot of it ourselves on video as a way of conveying to the animators what we are after for a particular shot, what the drama is about, what the comedy is, where the laughs are, and the comedic timing. It helps. They don't follow it exactly, but it's a way of explaining where we want the shot to land for timing and gestures. It's all about the hand movements and head tilts."
Resembling the Orc-making factory constructed by Saruman in The Lord of the Rings trilogy is the set known as the Cellar Cauldron, where mass-produced Norbots are manufacturing something mysterious for Feathers McGraw. "Our electricians got inventive and used traditional techniques of getting gobos moving against each other with random patterns of light going through them, so it looked like there was rippling fire and light on the walls," Crossingham explains. "We used the projector that had projected the numbers onto Norbot with some animation on it to look like explosions of fire. Then it was augmented with CG sparks and other digital effects on top of it afterwards. Principally, it's good lighting in Wallace and Gromit's basement. We wanted it to feel rather Orc-like."
Part of the aqueduct was physically built, with the rest extended digitally.
One of the established sets is the house of Wallace and Gromit.
"Technically, the house changes enormously according to the necessity of the stories in each film," Park states. "It's an amorphous house, really! Sometimes it had a big front garden and other times a short one. We tried to be consistent with the sense of comfort and nostalgia about the house and where everything comes from. We had to slightly update things because of the subject of AI and computers, because we never say where Wallace and Gromit's world exists between the 1950s and the present day. We have never gotten as far as smartphones. They're usually objects that still have a lot of old-fashioned character. It still had to be an antique, even if it's a computer."
Wallace & Gromit: Vengeance Most Fowl provided an opportunity to implement some production advances, as the last significant Wallace & Gromit outing was the short A Matter of Loaf and Death, released in 2008. "From the inside out, it was an opportunity, regardless of whether you could see it on the outside, to make things better for the animators in terms of the armatures, for the rigging department in how they could connect the armatures, and for us to take advantage of things like silicone being better than it has been and to use it appropriately on the puppets where we might have used foam latex in the past, or even clay in some places," Crossingham remarks. "Wallace's and Gromit's hands and faces are always modeling clay because it's not only essential that that's how they look, but it's the best way of getting a good performance. There is a lot of new material technology in the characters, but hopefully it doesn't show too much. It was essential that they still kept the handmade clay thumb-iness."
Greenscreen was utilized to get the necessary scope for scenes.
Creating the various facial expressions for Wallace.
"At the heart of it, Wallace & Gromit is clay. It's always got to be clay; the actual character work. It's part of its beating heart and is the ethos of everything being handmade. There are some things you can't do to expand the world. Even when we do that, it's got to fit the stop-motion feel. It's got to be in style."
Nick Park, Director
Gromit remains a difficult character to animate given that he expresses himself through pantomime, not dialogue. "All of the animators require a refresher course where they get three weeks of constantly animating Gromit and testing different expressions," Park explains. "How high or low his brow should move, and how big his reactions should be, because it's all got to be believable, even though the characters are four or five inches high." Wallace & Gromit: Vengeance Most Fowl expands upon the relationship between the title characters. "The thing that stands out to me the most is we've pushed Wallace's and Gromit's relationship more than ever before," Crossingham remarks. "It used to be a joke that Wallace never learns anything. Actually, in this film he does look at himself a little bit. We have pushed him emotionally in a very British way.
What that does is open up a bigger feeling within the film, and hopefully by the end, it fills your heart with warmth."
Since the last Wallace and Gromit outing in A Matter of Loaf and Death, further advances have been made in the armatures for the puppets.
An actual projector found by Dave Alex Riddett was utilized to project the computer code onto Norbot.
The simple graphic design for Feathers McGraw was maintained from The Wrong Trousers.
Editing the scene when Wallace goes through the enclosed slide located on the side of his house.
Director Merlin Crossingham has been given the responsibility of being the guardian of a beloved stop-motion franchise that began as a college short by Nick Park, with both of them serving as directors on Wallace & Gromit: Vengeance Most Fowl.
The lighting scheme for Feathers McGraw was inspired by how the head housekeeper, Mrs. Danvers, was treated in Alfred Hitchcock's Rebecca.
Initially, Norbot had organic movements which made him too human, so a minimal approach was adopted to make his presence more robotic.
Before the toy-train-set mayhem finale in Ant-Man, there was Gromit trying to catch Feathers McGraw in The Wrong Trousers. "It was a small crew," Park reveals. "We didn't know what we were doing. I wanted it to be the fastest thing I've ever seen in stop motion and be a totally madcap, Tom & Jerry, cowboy-style, train-top chase, but all within the living room." The cinematic achievement would not be done practically today. "Weirdly, some of the technological advancements we have in the studio would mean that sequence would not happen today," Crossingham notes. "I wasn't on the film, but I know how Nick and Dave Alex Riddett achieved it. It was inventing a technique with the basic motion control equipment they had and actually moving it by hand. I don't think today's digital cameras would handle the blurring, go motion, and the rough-and-ready nature that gave it that visceral energy. It's a product of its time. Nowadays, it would be executed digitally." "They would try to smooth it all out and calculate everything," Park remarks. "It's that handmade nature that made it erratic and imperfect."
Lighting tricks made it look like there was fire present on the Cellar Cauldron set.
Whereas The Wrong Trousers featured a fast chase, Nick Park decided to head in the opposite direction with the canal scene, where the action moves at four miles per hour.
Gromit remains a hard character to animate, with animators going through a three-week refresher course focused entirely on him.
The canal chase trades speed for slowness. "Gromit hotwires the barge, and we do a Mission: Impossible, Fast & Furious buildup to a four-mile-per-hour chase!" Park says. "Pulling off that gag I find satisfying." The climax is a highlight. "It merges with Gromit going in and out of light," Crossingham states. "It goes from this small world into a scale that is audacious for a stop-motion film, and I love the fact that we managed to pull it off. One of the gags that makes me giggle, which after you've made a film for four or five years not many do, is where Wallace says, 'I wouldn't bother with that teapot. It doesn't work!'" Digital effects helped to expand the universe. "That's great," Park notes. "But as artists you always have to work with parameters because that's when you get inventive with how you shoot it, how you make the gag or drama work more effectively. Having the whole universe at your fingertips isn't necessarily a good thing for the artist.
There are always time and budgetary restrictions as to what they can do in CG. The last scene Merlin talked about, when we suddenly go spectacular over the valley and see the aqueduct, we were limited by how high. But digital effects allowed us to get the camera right up and to extend the legs of the bridge as high as we wanted. The valley itself was real. It was a whole mixture of different techniques."
BUILDING A BETTER MAN WITH WĒTĀ FX
www.vfxvoice.com
By TREVOR HOGG
Images courtesy of Wētā FX and Paramount Pictures.
At first glance, having Robbie Williams depicted as a primate in Better Man comes across as a novelty rather than an inspired creative choice, but there was a method behind the madness of Michael Gracey, who was previously responsible for The Greatest Showman. The visual effects artist turned filmmaker wanted to show the British pop star as he sees himself rather than his public persona. Wētā FX was hired to produce a believable CG protagonist that causes the animal persona to quickly dissipate and allows audience members to get lost in his personal struggles with addiction, mental illness and fame.
"He had a scream. It was like, 'Let's really peel those lips back and get those canines out. The nose wrinkles. Let's make it feel primal.' But it was never like a full departure. We never went quadrupedal or into monkeyish behavior. It helps because you get swept up with the character and story. We don't want people to go, 'I'm looking at a monkey.'"
Dave Clayton, Animation Supervisor, Wētā FX
The costumes and fur for Robbie Williams were treated as if they were characters in their own right.
"When people hear the premise, they get a picture in their head about what this movie is going to be, and then they see the film and the film never matches that picture," notes Luke Millar, VFX Supervisor at Wētā FX. "They think it will be comedic and caricatured, but there are a lot of components that layer in the character: the cinematography, the gritty British backdrop and the fact that Robbie is the only digital thing for the majority of the movie. Better Man has a lot of heart and emotion; that's what sweeps you up. The ape metaphor falls away in the first few shots or scenes." It was a fine line to get the proper onscreen performance. "There is an uncanny valley that we were careful not to fall into. Robbie Williams is represented as an ape in the film, but, essentially, he is human in the way he interacts, wears clothes and the style of his hair. Compared to the previous films that we've done, this is probably one of the biggest differences," Millar notes.
Three different versions of Robbie Williams had to be produced. "He's a young lad, a teenager and a young man," remarks Dave Clayton, Animation Supervisor at Wētā FX. "Within there, we had a little bit of give to fill in the blanks. We previsualized a lot of this movie to help not only get into all of the details of how we were going to shoot it but also to help describe the entire arc of the movie and how this character was going to evolve. It's a long journey to get the most out of a character like this; he's had a complex life." Ape characteristics were generally avoided. "It's not to say that we didn't have monkey mouth shapes or do a little bit more or less here and there to help things," Clayton states. "He had a scream. It was like, 'Let's really peel those lips back and get those canines out. The nose wrinkles. Let's make it feel primal.' But it was never like a full departure. We never went quadrupedal or into monkeyish behavior. It helps because you get swept up with the character and story.
We dont want people to go, Im looking at a monkey.A new technology was developed by Wt FX that enabled their VFX pipeline to receive data files from concert stage lightboards that allowed them to accurately recreate the lighting digitally.As we do a take, our on-set editor, Patrick Correll, would literally take that take and cut it into the timeline, switch out the previs, and then make sure that we got the camera timing, beats and action lined up. Then we went again. For each setup we would do 30 to 40 takes to try to get the perfect take. The nice thing about this was after shooting we already knew that the things were going to work. Luke Millar, VFX Supervisor, Wt FXAll of the musical numbers were previs. As we do a take, our on-set editor, Patrick Correll, would literally take that take and cut it into the timeline, switch out the previs, and then make sure that we got the camera timing, beats and action lined up, Millar explains. Then we went again. For each setup we would do 30 to 40 takes to try to get the perfect take. The nice thing about this was after shooting we already knew that the things were going to work. The Rock DJ scene was 5,334 frames long and featured five costume changes for Robbie and 500 dancers. Michael didnt want any obvious wipe points such as someone walking in front of the camera right in front of it, Millar remarks. We always tried to do it in a way that we could have some kind of continuity of movement going over the stitch like a digital bus would drive through. Everything was handheld with a little bit of crane work. No motion control work. One of the biggest challenges was getting that single cohesive camera, and it was further complicated by the fact that the two interiors were shot in Melbourne about a year before we shot on Regent Street. We always had to dovetail into those interiors and then back out onto the street again.The wide shots of Albert Hall were filmed with a real audience in London while the orchestra pit was filmed at Melbourne Dockland Studios with 200 extras. The footage from the two locations was combined to create an audience of 5,500.An internal battle literally and figuratively takes place. In Let Me Entertain You, Robbie is having an internal struggle where hes literally battling with himself, Clayton states. They were small versions of Robbie, but armies of them. Its using MASSIVE, but also our motion edit team to put together an army and some simulation tools for the tight-quarters characters getting jostled around by each other. Another fully CG scene occurs underwater for Come Undone. Robbie ends up in a surreal moment under the water, and there are these suicidal teen girls who are upset about him leaving [boyband] Take That. We did some motion capture of using a rope rig to get the performers suspended up in the air pretending to swim, Clayton remarks. We got some good movement there. Then we augmented that. We looked at some references of underwater sports like underwater hockey or rugby to see people struggling against each other.Everything was handheld with a little bit of crane work. No motion control work. One of the biggest challenges was getting that single cohesive camera, and it was further complicated by the fact that the two interiors were shot in Melbourne about a year before we shot on Regent Street. We always had to dovetail into those interiors and then back out onto the street again. 
Luke Millar, VFX Supervisor, Wt FXCentral to making the primate version of Robbie Williams a believable character was the motion capture performance of Jonno Davies.Atmospherics are in almost every scene. It makes things more complicated in terms of integrating Robbie because theres always these elements that are over the top of him, Millar observes. The dry ice was another level on top of that. The special effects team did a fantastic job of pumping tons of this stuff that they had, but it disappears so quickly. It was the same with the flame mortars in front of the stage. They set those up but we couldnt fire them all up because there was a crane that swept over the top of them. We were replacing half of them and patching in dry ice. However, when you have a real component onstage and were matching up to it, then youve got a great visual goal.In Let Me Entertain You, Robbie is having an internal struggle where hes literally battling with himself. They were small versions of Robbie, but armies of them. Its using MASSIVE, but also our motion edit team, to put together an army and some simulation tools for the tight-quarters characters getting jostled around by each other. Dave Clayton, Animation Supervisor, Wt FXVFX artist turned filmmaker Michael Gracey directs Raechelle Banno while shooting Better Man.At home, Robbie finds solace in his grandmother Bettys (Alison Steadman) support. Robbie Williams is represented as an ape in the film, but he is human in the way he interacts.The world-building offered something different compared to previous projects. Im proud of the scene when Robbie is on the billboard right at the start with his mate, Millar remarks. Normally, we have to build beautiful vistas and epic landscapes, and we ended up building a ring road like a dual carriageway with hideous sodium vapor lighting. Heaps of litter were digitally added. Not only litter but what would you find on the side of a motorway in the U.K. People would chuck some mattresses, shopping trolleys and beer bottles. Its that kind of rich patina of crap that is quintessentially British! That sort of world-building is an area that we never get to experience in the visual effects world. Its always creating these heightened realities, not a gritty down-to-earth reality.Raechelle Banno as Nicole Appleton and Jonno Davies as Robbie Williams performing Shes the One. Films like All That Jazz were an inspiration to director Michael Gracey because they didnt shy away from showing the darker moments and the truth that audiences respond to.At the Knebworth Festival in the U.K., a deteriorating Robbie Williams faces the nadir of his journey while performing Let Me Entertain You for 125,000 fans. The motion edit team put together an army of small Robbies and some simulation tools for the tight-quarters characters getting jostled around in the crowdWt FX was responsible for 1,968 VFX shots. The thing that I find with Better Man, now that weve finished it, is you can watch the whole movie and go, Of course, this was always going to work, Millar states. But I try to picture what was in my head right at the start of this whole process before we had done anything and you didnt know. I had complete faith and confidence in Michael Graceys vision in that we could realize it and make this world work. My first reaction when I heard of the film was, I dont get it, Clayton admits. But then I read the script and said, Yes. I would love to work on this because it sounds unique and different. I loved it. 
It has been such a special project to be a part of. Ive always thought this was going to be a once-in-a-lifetime, but maybe there are some filmmakers out there who want to innovate like this. I cant wait to see what everybody thinks of Better Man.Watch a behind-the-scenes VFX featurette with director Michael Gracey, Robbie Williams and Wt FX on how Better Man came to life. Click here. https://www.instagram.com/reel/DDxOxGnPTvm/0 Comments ·0 Shares ·228 Views
-
VIEWING TOLKIEN THROUGH ANIME FOR THE LORD OF THE RINGS: THE WAR OF THE ROHIRRIMwww.vfxvoice.comBy TREVOR HOGGImages courtesy of Warner Bros. PicturesMany attempts were made to bring The Lord of the Rings to the big screen with the most famous being John Boorman co-writing a script in 1970 that laid the foundation for his Arthurian classic Excalibur. Escape from development hell finally happened in 1978 when Ralph Bakshi did an adaption that covered half of the trilogy, which combined roto and hand-drawn animation. The franchise is returning to its animation roots with New Line Cinema and Warner Bros. Animation bringing The Lord of the Rings: The War of the Rohirrim to theaters under the direction of acclaimed anime filmmaker Kenji Kamiyama, who is making his first visit to Middle-earth after spending time in New Port City and Los Angeles in 2032 for Ghost in the Shell: Stand Alone Complex and Blade Runner: Black Lotus. The project takes place 183 years before the original trilogy helmed by Peter Jackson and revolves around the childhood friends Hra and Wulf becoming sworn enemies after their fathers engage in a hand-to-hand combat with each other resulting in one of them being killed. Serving as a producer alongside Joseph Chou and Jason DeMarco is Philippa Boyens, who developed the story with Jeffrey Addiss and Will Matthews and has been a frequent writing partner of Peter Jackson and Fran Walsh ever since The Lord of the Rings: The Fellowship of the Ring.When the studio came to us and asked, What about anime? this story seemed to fit what we were looking to do, which is to tell a standalone story with fresh characters who we had not come across before and didnt involve any dark lords or rings of power. When they told us that Kenji Kamiyama was interested in directing it, that got me even more excited.Philippa Boyens, ProducerHra (Gaia Wise), Helm Hammerhand (Brian Cox), Haleth (Benjamin Wainwright and Hama (Yazdan Qafouri) hold court in a scene that makes use of a live-action sensibility but in the style of anime.When it comes to horse riding, motion-capture footage was given to animators as reference so they could heighten the level of realism in their hand-drawn animation.From the beginning, the desire was to tell an animated tale that was a new take on Middle-earth while still having an air of familiarity. We did actually look at other forms of animation and other stories, states Producer Philippa Boyens while attending New York Comic Con to promote the feature with Kenji Kamiyama and Joseph Chou (who served as a translator for Kamiyama). When the studio came to us and asked, What about anime? this story seemed to fit what we were looking to do, which is to tell a standalone story with fresh characters who we had not come across before and didnt involve any dark lords or rings of power. When they told us that Kenji Kamiyama was interested in directing it, that got me even more excited. Shifting from television productions to a feature film did not radically change the animation process. A lot of anime directors set out to be a filmmaker, but in Japan the most freedom you get and the easiest path to directing is to go through the medium of anime, Kamiyama explains. Ive done a few films, but my prominent works have been series where the times demanded it; we either want more DVDs or episodes on streaming. But the filmmaking process and techniques are what I take to every single one of them when crafting and telling the story. 
When taking on this project [which started as 90 minutes and became 2 hours and 10 minutes], it was a different muscle that I had to use. I was so used to the 30-minute format that I had to change my way of thinking to be able transition into a two-hour format where you have a complete story told within that limited amount of time. Making a movie is like a 400-meter run while a series resembles a marathon. You have to pace yourself and hit your goal throughout that whole period. A movie is not a 100-meter dash, but you do need to move fast and get there by pacing yourself; it is quite an intense experience.Its mature storytelling and deserves a mature visualization. Thats one thing I wanted to do by taking on this film. It presented me with the opportunity to create that kind of visual and film that perhaps breaks the mold for anime in how its presented to audiences outside of Japan.Kenji Kamiyama, DirectorThe intention was always to do an animated film.There is no longer a divide between 2D and 3D animation as the best of both techniques are being creatively fused together in a large part by the growing popularity of adult animation. Animation targeted or appealing to adults is not something new for Japanese anime, Kamiyama observes. If you have seen my work, it has a lot of mature themes and storytelling. But there is definitely a shift in the West where theatrical animation is not just for kids but can also be enjoyed by adults as well. However, when you say animation, there is still this perception that it is for kids. But, for me, I believe that animation has the potential to elevate certain types of storytelling as a visual medium that can appeal to a general audience. One of the motivations for taking this film was this was a chance to do that and have that platform where the audience will be able to come in and at least check out how this kind of storytelling can be done by animation. Its not only about how attractive the drawing is or how good the art is, but its about how people are moving within the frames of the animation; that connects with the technique of how you would want to do this.Hra is modeled in a large part on owyn, with Mirand Otto, who played her in the original trilogy, serving as the narrator.An unconscious agreement exists between animation filmmakers and audience members. Explains Kamiyama, The reason why I took on motion capture and CG animation [is because] even though animation is not as detailed as live-action, audiences come in understanding that it does exist and fill in certain gaps. By adding some element of reality, it heightens the experience so much that it actually makes them appreciate what is happening on the screen even more. That is something which can be done differently from live-action where there is already so much visual detail. Its about how people move, how they get on a horse or how people put things in pockets. The depiction of those little things needed to be done by human hands, but in order to get it done by human hands you need the assistance of CG animation. For me to visualize, set the timing, decide on the camera lens and how the angle should be, all of that was done with motion capture data in an Unreal Engine setting. Then I would go and do the acting for certain actors in specific situations in CG animation. That would be provided completed as a film to the animators to use as a guide to create the hand-drawn animation. Tolkien does not write childrens fairy tales. Its mature storytelling and deserves a mature visualization. 
Thats one thing I wanted to do by taking on this film. It presented me with the opportunity to create that kind of visual and film that perhaps breaks the mold for anime in how its presented to audiences outside of Japan, Kamiyama remarks.You have this familiar landscape of that valley, which you have seen from the live-action films, but now set against the backdrop of the Long Winter] it has this beautiful, frozen quality. Even when you get a bit of sunlight coming through, you still see the sheets of ice on the side of the cliff faces, and that frozen quality that Kamiyama brings to it visually makes it feel different. We know that silhouette and have seen that long ramp before, but this is different. The subtle changes are beautifully done.Philippa Boyens, ProducerJ.J.R. Tolkien does not write childrens fairy tales, so a mature approach was taken towards the storytelling and the animation.Animation is not simply about drawing pretty pictures but also how characters move within the frame.The Lord of the Rings: The War of the Rohirrim takes place 183 years before the original trilogy helmed by Peter Jackson.Childhood friends Hra (Gaia Wise) and Wulf (Luke Pasqualino) become sworn enemies after their fathers engage in a hand-to-hand combat with each other resulting in one of them being killed.Assisting with concept design was the Wt Workshop.Serving as the narrative spine of the story is the father/daughter relationship between Helm Hammerhand and Hra.Even with the big battle sequences, it is the subtle details that immerse audiences in the world-building and characters.I believe that animation has the potential to elevate certain types of storytelling as a visual medium that can appeal to a general audience. One of the motivations for taking this film was this was a chance to do that and have that platform where the audience will be able to come in and at least check out how this kind of storytelling can be done by animation. Its not only about how attractive the drawing is or how good the art is, but its about how people are moving within the frames of the animation; that connects with the technique of how you would want to do this.Kenji Kamiyama, DirectorRather than have a well-known character as the protagonist, the decision was to explore the fate of the House of Helm Hammerhand and the ancient stronghold of Hornburg, which figures prominently in The Lord of the Rings: The Two Towers as Helms Deep, through the eyes of Hammerhands daughter Hra. Hra is in the story, so shes not created from whole cloth, Boyens explains. I examined a lot of the characters in that story who didnt necessarily out-live it. When you look at the story and what sparks the conflict, it centered around her. Who might actually see through this conflict to its final ending? You begin to realize that its this character, potentially. Hra is not without precedent within Professor J.R.R. Tolkiens world. Obviously, we are distinctively drawing upon owyn, but if you go back even further to the first Haleth, who was a woman, you could see that there is this tradition of quite strong women existing in the Rohirrim culture. Also, having an unnamed character does give you more freedom to be able to say, What happens to this character? Why is she so pivotal to this story? What is her relationship not only to her father but this antagonist? Once you start delving into that, the story came into focus fairly quickly. 
The thing that Kamiyama found and works beautifully on an emotional level is theres a level of father/daughter story in this that is interesting and beautifully played by Brian Cox and Gaia Wise.The Oliphaunt causes battlefield mayhem, just like in the original trilogy.Helms Keep is revisited, but in a different way. This is a much longer siege that were dealing with in this film, set against the backdrop of the Long Winter, which is an event that affected all of Middle-earth, and a lot of people suffered through it, Boyens states. For the Rohirrim, its one catastrophe on top of another. You have this familiar landscape of that valley, which you have seen from the live-action films, but now it has this beautiful, frozen quality. Even when you get a bit of sunlight coming through, you still see the sheets of ice on the side of the cliff faces, and that frozen quality that Kamiyama brings to it visually makes it feel different. We know that silhouette and have seen that long ramp before, but this is different. The subtle changes are beautifully done. Obviously, the gates are destroyed somewhere in this film because the gates are different than the ones you see in the live-action movies. Kamiyama pushed it even further with other settings, like Isengard, which we are so familiar with in connection with Saruman who inhabited it. Now were seeing it well before that, when the Orthanc is locked up, so what has risen around it are deserted guard towers, which have been taken over by another force of people. You get to see Isengard that was once familiar, but he uses it in a much different way. Then you actually get to see even more details in stuff that we didnt get to see in the live-action films. We get to spend more time in the stables, weirdly. We get to see Edoras alight, which is quite shocking when you see that happen. We get to see the ruins of that city. Again, familiar, but then with this twist that Kamiyama brings to it, which is so beautifully done and visually stunning. The mood that he is able to capture in the film, youve got to see it in a big cinema. Im telling you!0 Comments ·0 Shares ·236 Views
-
NEW TECHNOLOGIES MAKING THEIR MARK ON VFX AND ANIMATIONwww.vfxvoice.comBy OLIVER WEBBThere have been leaps in technological growth over the last few years in the VFX and animation industries. With new developments across the board in virtual production, real-time technologies and LED volumes, 2025 is looking bright, though the industry is still recovering from the effects of COVID-19 as well as the strikes. Further to the technological advancements, there are skills in demand and education available to those looking to delve into the industry. The skillset required for implementing new tools and techniques with a sharp eye on the future while navigating the present is pivotal to keeping up and staying ahead. Following is a variety of viewpoints from industry professionals crossing the street at the busy intersection of VFX and animation.Wt FX doesnt use generative AI. Instead, for several years, theyve been developing and utilizing machine learning, embedded into their artists tools, such as Wts facial performance and image-based refinement systems. Wt FX contributed effects to A Minecraft Movie. (Images courtesy of Warner Bros.)Richard Frances-Moore, Senior Head of Motion, Wt FX At Wt FX weve been innovating and using virtual production since the cave troll sequence in The Lord of the Rings: The Fellowship of the Ring. Virtual production as a visualization technique supporting live-action, animation and performance capture grew as a key part of our workflow on many projects, especially on the Avatar series. For a while, it seemed like virtual production was a no-brainer as it saved costs in post-production and provided a better on-set experience for directors and actors. Since then, practitioners and productions have learned more about the strengths and weaknesses of virtual production high demand for upfront investment, complex scenarios and technical issues can lead to unplanned costs, causing some productions to become a little more cautious. However, with more industry understanding and standards, the scope for successful uses is now stabilizing, and ICVFX has become another piece of the filmmakers toolkit that will no doubt continue to develop and improve. We are continuing to push on our overall virtual production system, working on the integration of our pre- and post-production workflows, as well as integrating Unreal Engine as an option so we can connect and collaborate with productions using that platform.[W]ith more industry understanding and standards [for virtual production], the scope for successful uses is now stabilizing, and ICVFX has become another piece of the filmmakers toolkit that will no doubt continue to develop and improve.Richard Frances-Moore, Senior Head of Motion, Wt FXSSVFX was nominated for an Emmy for its work on the FX series Shgun. SSVFX completed hundreds of shots to bring CGI Osaka in the 1600s to life. (Images courtesy of SSVFX and FX Network)AI and machine learning are having a large impact on the industry at the moment. While we dont use generative AI, for several years, we have been developing and utilizing machine learning, embedded into our artists tools. Examples include our facial performance and image-based refinement systems, and demand for this technology is increasing. On set, AI allows us to provide real-time camera depth, improving the speed and quality of creating digital photogrammetry while reducing the footprint on set. Increased availability and the development of shared standards are also a big movement across the industry. 
With productions wanting the flexibility to share assets and work between facilities, this has led to the need for better platforms and systems like Open USD, MaterialX and even Unreal providing a form of shared format for some virtual production and previs workflows. We are embracing this wherever we can while maintaining flexibility and continuing to innovate where we still see there are opportunities for improving our work.Or-Bits is a 30-second TV commercial for a fictitious breakfast cereal. The AIE student led-production team was hired by a mock-client to deliver the spot according to a detailed brief. Some AI-generated still images were used in the mock-brief and appear in the commercial. All other work is 100% student-originated. (Images courtesy of AIE)Viktor Mller, CEO, Universal Production Partners (UPP)Is there any growth in terms of employment in the VFX industry? We are coming out of a disastrous time for our industry, and not just in VFX, but across the board. So many people lost their jobs last year. So many companies went under. Its a legitimate crisis. Despite that, were seeing more VFX in film and TV than ever. In a sense, youd think that would be good, but the films and shows you see now are overloaded with CGI elements and the number of CGI artists working on big projects is so gigantic that they cant even fit them all in the end credits despite the long end titles we have now.The AIE VFX Class of 2023 won the 2024 Seattle Film Festival prize for Best Animated Short for their senior project, Spare Parts. (Images courtesy of AIE)I would say that virtual production or AI programming is probably the fastest-growing field. But in the end, they will take jobs from other disciplines. I think everyone knows that were in what people are calling The Great Contraction. We had this boom-time of Peak TV and just tons of films being made for both theatrical and streaming, and now the bubble has burst and there arent enough projects, even compared to previous years. The craziness after COVID brought new talent into the industry, which is great news but, at the same time, it has overloaded the market. Another downside is that this saturation hasnt necessarily improved the quality in the VFX industry but has rather increased average capacity. As we move forward into the new normal, companies will need to focus on efficiency and develop really good pipelines to stay successful.At UPP, were always striving toward a more equitable and diverse workforce, and Im very proud to report that we have gender parity at our studio. Having a diverse group of artists and technicians has always just seemed like smart business to me, but, in our industry, women in particular are seriously under-represented, especially in technical and leadership roles. We have numerous women running various departments including the head of our film and TV department and while were happy to discuss every one of them, we also hope to get to a place where we dont need to talk about it at all because, frankly, it should just be nothing special. It should be the norm. But I think were gradually moving in the right direction as an industry.Jonathan McFall, Head of CG and VFX Trainer, National Film and Television School (NFTS)VFX has been an evolving industry since the beginning with pioneering and breakthrough technologies now making their way into summer blockbusters every year; however, a lot of the core departments and skills remain the same. 
A modeler makes the model, and an FX artist makes the FX, but things have become more sophisticated within each department with the introduction of USD into company pipelines, allowing for variants in the models, textures, lights, FX and more. Software like Unreal and Houdini are also playing a larger role in productions, and virtual production stages arent just used for the biggest films anymore. Therefore, key skills like understanding USD workflows, shotgrid, lighting, Houdini, virtual production, Unreal and, of course, comp will all help you to gain entry into the industry. These are highly in-demand skills.National Film and Television School (NFTS) VFX students in the U.K. working with a virtual production screen. (Image courtesy of NFTS).The industry may at one point have been more dominated by male figures, but these days it is becoming increasingly more diverse. The National Film and Television School strives to encourage and support people from all backgrounds to feel confident to pursue a career in visual effects, with outreach and over 1 million of funding available for U.K. applicants. There is such a wide range of skills that are needed across many roles ranging from very creative roles in the art department and texturing to very technical roles in FX or pipeline. There are jobs for anyone who wants to work in VFX!Sir David Attenborough tests the technology at Mars Volume in the U.K. (Image courtesy of Mars Volume and Bild Studios)The National Film and Television Schools (NFTS) Visual Effects (VFX) MA is renowned for the 360-degree practical training our students receive, learning creative problem-solving across the VFX pipeline. Our VFX Labs are cutting-edge with dedicated workstations that replicate those used in the VFX industry. Each machine has current industry-standard software, including Maya, Unreal Engine, Nuke, Nuke Studio, Photoshop, Substance, Houdini, Resolve, Reality Capture and PTGui. The students have access to the studio 24 hours a day and are part of a small cohort who receive personal and group tuition, gain experience on set and an understanding of the entire production pipeline. The expansive site, tucked away in a small Buckinghamshire village, offers 18 dedicated shooting spaces for students including a 7,000 sq. ft. main stage, a 4k television studio and virtual production volume.Rowan Pitts, Co-Founder, MARS Volume and Bild StudiosMARS is one of the U.K.s longest-running virtual production brands, delivered by the experienced workflow specialists at Bild Studios. Since its launch in 2021, MARS volume has become one of the bigger LED volumes in the U.K. (25.5 x 5m), with its facilities used by productions such as Netflixs 3 Body Problem, Disneys Culprits and Asif Kapadias 2073, and the MARS team has been on set delivering virtual production for HBOs House of The Dragon and Apple TVs Masters of the Air.Participating and collaborating on open-source software has positioned DreamWorks to more easily maintain and share assets across TV series and features such as The Wild Robot. (Image courtesy of DreamWorks Animation and Universal Pictures)Theres been a trend from bigger is better to recognizing that strategically-sized LED volumes are more cost-effective and work better overall for production. Thats how our business has channeled over the last few years. We are around a 50/50 split between scripted drama and branded content. 
We are based at our permanent virtual production facility in West London, but the team is mobile and can take virtual production services on the road with them as pop-up solutions. MARS volume is fully serviced, and we often get involved in the world-building of the digital environments for the shoots that come through. Our pop-up service involves building bespoke LED volumes for film shoots at other locations and providing a team to run and operate them as well.Virtual production certainly has an important place in the future of film production. As awareness grows around how a volume can be used, that unlocks producers ability to understand the true value gained with virtual production shooting, and the return that they will receive from shooting on a volume. While it may appear more expensive initially than a traditional studio, if you factor in the savings that could be made, both in terms of time and efficiencies, and in terms of how much content can be shot within a single day, then virtual production can come into its own. We now see experienced producers shooting on MARS volume, who can generate a huge scene throughput by carefully pre-planning and scheduling, proving the efficiency gains to be made are real.Bill Ballew, Chief Technology Officer, DreamWorks AnimationOne of the trends we have seen in the industry over the last few years is an increase in the participation and collaboration on open-source software. With the widespread involvement in the Academy Software Foundation (ASWF) by both studios and third-party tool creators, the viability and usefulness of these projects have increased manyfold. Today, these serve as a basis for nearly every studio, utilizing standard data representations and sharing functionality that has become ubiquitous. As an early adopter of the Linux operating system in the 1990s, to open-sourcing our proprietary renderer, OpenMoonRay, in 2023, we have seen tremendous value in leveraging and contributing to these best-of-breed community solutions. This gives us two major areas of benefit: 1) It allows us to focus our engineering on efforts that differentiate our films, provide unique creative looks and empower our artists, and 2) The common industry building blocks and standards are positioning us to more easily maintain and share assets across our TV series and features.Ed Bruce, Senior Vice President, SSVFXIt has always been the case that technology and creativity go hand in hand. Each year, new technology can help advance creative limitations or inspire the creativity to push further. Over the last few years, technical advances have been coming thick and fast. AI and machine learning are on a rapid growth trajectory bringing many new tools into vendors workflows and pipelines. It can be a challenge to navigate through to ensure a net benefit versus just adapting to change.Launched in 2021, MARS Volume is one of the more agile LED volumes in the U.K. The MARS team is mobile and can take virtual production services on the road. (Images courtesy of MARS and Bild Studios)Ultimately, our goal is about delivering beautiful storytelling imagery on time for the right price. If machine learning can play its part in that, then it is worth adapting to. At the moment, however, many of the tools can be a minefield to navigate regarding security, which is of key importance to us and our clients. 
Datasets must exist within the boundaries of a secure network, and any communication outside of that network comes with an incredible risk.At SSVFX, one area we specialize in is facial augmentation and de-aging. This is an area where weve seen AI advancements promising easier, faster and better outcomes. However, weve found that this is not the case with our proprietary tools and workflows that we continue to adapt and build upon. Weve found we still hit the quality, time and cost better than what the machine learning approach is currently delivering, plus there is no dataset or security concern and, more importantly, no infringement with actors contractual concerns regarding AI. The ability to make nuanced changes to service the clients direction can be very challenging with the current AI tools. This isnt an issue within the traditional approach.Every year, new tools and technology will be created to support the VFX team. Some will be adapted and become a staple within our workflows, others will fall away. Its important for us at SSVFX to continue to stay informed and adaptive to these advancements.Joseph Bell, Principal, HDRI ConsultingI think people were surprised by how slowly VFX work has started to pick up after last years SAG-AFTRA and WGA strikes. Focus on the strikes masked an enormous drop in project commissioning by the major streamers compared with the year before, driven in part by pressure on the streamers to reach profitability. To give just one example, Disney+ spent around 40% less on commissioning content in 2023 than in 2022. And 40% fewer productions started principal photography in the U.S. in Q2 2024 than in Q2 2022. This dramatic decrease in the amount of film and TV commissioning, with the U.S. being especially hard hit, had a huge impact on employment in the visual effects industry worldwide. A lot of skilled VFX professionals have been out of work for many months in some cases, more than a year. With Disneys streaming services reaching profitability in mid-2024 and Paramount close behind, production should finally pick up. As an industry, this is a great time to reassess how we operate and what we can do to make companies and careers more resilient for the future.Countries with strong domestic film industries and those that are less reliant on work from U.S. productions to stay busy, such as Japan, Vietnam and to some degree France, have been less impacted than countries where a large majority of the spend on high-end VFX comes from the U.S. studios.Joseph Bell, Principal, HDRI ConsultingUPP has become one of the most active and versatile visual effects and post-production houses in the Central Europe, and has contributed VFX to Gran Turismo (above) as well as Barbie, Conclave, Five Days at Memorial and Blade Runner 2049, on which UPP CEO Viktor Mller served as Visual Effects Supervisor. (Image courtesy of Sony Pictures)Its difficult to highlight growth when so many VFX professionals around the world have spent large portions of 2023 and 2024 without work. Countries with strong domestic film industries and those that are less reliant on work from U.S. productions to stay busy, such as Japan, Vietnam and to some degree France, have been less impacted than countries where a large majority of the spend on high-end VFX comes from the U.S. studios. We are seeing growth in the number of roles for which knowledge of RT3D tools like Unreal Engine is becoming important. 
Knowledge of these same tools equips VFX companies and artists to work on VR/AR/XR, gaming and immersive projects. Among the various new technologies currently making their mark on the industry, real-time 3D is going to have the most far-reaching and enduring impact.Andy Romine, Senior VFX Instructor, AIEThere continue to be perennial opportunities for junior artists in lighting and compositing roles, though skills like rigging, VFX and Creature FX are always in high demand. Theres also been a recent surge in opportunities for traditional VFX artists who have experience working in real-time, e.g. Unreal Engine, for use in previs and virtual production pipelines.We model a real-world studio environment at AIE, so in addition to learning the technical skills required for a career in the VFX industry, students are taught to work in teams to real and simulated client briefs, manage asset creation and deadlines, and develop an online portfolio they can use to apply for a job. Students have access to state-of-the-art workstations and are instructed in industry-standard software including Maya, Renderman, Houdini, Katana, Nuke and Substance Suite. Every week, our Lunchbox program brings in professionals to talk about their experience in the industry and provide our students with opportunities to network. We have regular contact with members of the Visual Effects Society that AIE sponsors who give Lunchbox talks, provide mentorship and detailed portfolio reviews.The ability to make nuanced changes to service the clients direction can be very challenging with the current AI tools. This isnt an issue within the traditional approach.Ed Bruce, Senior Vice President, SSVFXBecause of Seattles importance as a game development hub and our parallel game art track VFX students are exposed to real-time game engines like Unreal 5. Our students have gone on to work at a variety of VFX houses from boutiques to major studios. Weve also had graduates migrate to the game industry with their crossover skills. Recent notable projects from grads include The Mandalorian, Deadpool & Wolverine, Doom Patrol and Destiny 2. The VFX Class of 2023 won the 2024 Seattle Film Festival prize for Best Animated Short with their senior project, Spare Parts.0 Comments ·0 Shares ·231 Views
-
LAURA PEDRO EXCELS AT THE MAGIC OF MAKING EFFECTS INVISIBLEwww.vfxvoice.comBy TREVOR HOGGImages courtesy of Laura Pedro.Laura Pedro, Visual Effects SupervisorEven though Society of the Snow thrust Laura Pedro into the international spotlight and garnered a nomination at the VES Awards as well as trophies from the European Film Awards and Goya Awards, she has been amassing an impressive portfolio that includes A Monster Calls, Way Down (aka The Vault) and Superlpez. Born and raised in Montgat, Spain, Pedro currently resides in Barcelona where she is a visual effects supervisor for El Ranchito.I did not come from an artistic family, but I always liked photography. When I was little, my father gave me his camera, and I began trying to tell stories with it; that is when I figured out this is maybe something for me. When I was 16, our English teacher proposed to us to make a project for the government that would give us money to learn English outside of Spain. My classmates and I decided to make a movie about the robbery of The Scream by Edvard Munch. We won the money to travel to Toronto and stayed there for a month to learn English and finish the movie. The intervening years have strengthened her self-confidence. The difference from then to now is I finally found my own voice.Photography teaches the fundamentals of making an image. I know a lot of things about visual effects, but in the end, its all about light, Pedro notes. If you dont know anything about light, its impossible to tell things with images. Its incredible how photography connects with everything. Originally, the plan was to study photography, not visual effects, at ESCAC (Escola Superior de Cinema i Audiovisuals de Catalunya), which is the alma mater of filmmaker J.A. Bayona. I had an accident during the first year of school; I lost all of the exams and couldnt specializein photography. Because of that, I decided to go for my second selection, which was visual effects. The schooling was beneficial as it provided an overall understanding of the filmmaking process. When I was studying, every week we did a short film, and I was either producing or doing camera or directing. That caused me to work with different teams and learn various things for five years. When I finished school, it was easy for me to start as a visual effects supervisor and compositor, and know what the director wants and help them with visual effects.Robert Zemeckis has left a lasting cinematic impression. There are a lot of movies that I cant get out of my mind, like Death Becomes Her and Who Framed Roger Rabbit. In my career, I normally do invisible effects, but when I have the opportunity to do comedies with visual effects, its fun for me to be part of that. Of course, you know that its fake, but they tried to do it in the most realistic way. Innovation can come from unlikely sources. InDeath Becomes Her, they developed how to do skin in CGI, then the same technology was used for Jurassic Park. For me, its interesting and cool that you are developing a technology not doing Titanic, but a comedy. Its awesome to have the chance to create new technology. 
This has never happened in Spain because the industry is small, but we have the opportunity in the films we make to take the technology from the big productions and try to use it in our smaller ones, and teach the producers in Spain that they can trust these new technologies and ways of working with visual effects.Pedro on a set for The Vault (aka Way Down), which is part of the supporting chamber located under the Bank of Spain.(Photo: Jorge Fuembuena)Its important to have role models. Now that we have the schools, maybe in 10 years I will work with more supervisors who are women. Its not about gender. Its more about the age you start doing visual effects or become avisual effects supervisor because Im 34 and other supervisors I know are 40 or 50. We are not of the same age and have different ways of thinking.Laura Pedro, Visual Effects SupervisorOver the past decade, a better understanding of visual effects has taken root in the Spanish film industry where digital artists are brought into pre-production to properly plan for complex shots rather than being seen simply as post-production fixers. Visual effects are in all of the productions, so its easy to work continually, Pedro states. There are more visual effects companies that do television commercials trying to upgrade to narrative fiction. Television and streaming have become a driving force in Spain. Here, the film productions are small, so television and streaming productions allow us to continue working through the year. Maybe you do a film and at the same time two TV shows.Filmic visual effects have made their way to the small screen. The difference when you do a project like Game of Thrones or Ripley is that theres a lot of work in pre-production trying to find the perfect design with the cinematographer and production designer in the most cinematic way, Pedro remarks. Other projects work faster. One has to be willing to adapt. In the end, every project, director and producer is different. Its like a new adventure. When I begin working with a new client, I need to have a lot of patience, try to understand and be faster because I only have three months. Normally, I work with visual reference found on the Internet or pictures or videos taken with my camera. I have this capacity to find what is exactly in the mind of the filmmaker with the reference that I have imagined and later start working with our team doing the concept.Pedro in the Andes in Chile while shooting the scenes where Nando and Canessa attempt to find help in Society of the Snow. (Photo: Quim Vives)For Society of the Snow, the art department built two fuselages and the previs department at El Ranchito helped director J.A. Bayona design each shot of the crash. (Photo: Quim Vives)Pedro participated as Visual Effects Supervisor in the Spanish interpretation of the superhero genre, which resulted in Superlpez (2018).In 2013, Pedro made her debut as a visual effects supervisor for Barcelona nit destiu (Barcelona Summer Night) by director Dani de la Orden, and she would reunite with him a decade later for Casa en flames (A House on Fire), which is currently the most viewed movie in Catalonia. A major career turning point occurred when the emerging talent was recruited by a prominent Spanish visual effects company. I was doing a short film for a Spanish singer named David Bisbal, and El Ranchito called me to begin working with them on A Monster Calls, Pedro recalls. 
Flix Bergs [Founder and President, El Ranchito] is my mentor, and I learned from him its better to start with the real world and plates, and after that begin working with CGI because that mixture works better for the human eye. Also, he gave me the power to say, No when Im on set.Pedro and Special Effects Supervisor Pau Costa with their Goya Award for Best Special Effects for Society of the Snow in 2024. (Photo: Papo Waisman)Visual Effects Supervisor Flix Bergs, Special Effects Supervisor Pau Costa and Pedro celebrate Society of the Snow winning the Goya Award for Best Special Effects in 2024. (Photo: Ana Beln Fernandez)Visual Effects Supervisor Flix Bergs and Pedro on the set of A Monster Calls (2016), which was their first project together. (Photo: Quim Vives)Pedro after winning her first Goya Award in 2019 for Superlpez. which was shared with Special Effects Supervisor Llus Rivera.Personal highlights include a childrens fantasy tale, a comic book adaptation and the recreation of a historical event. Its not common in Spain to do a movie about a superhero like Superlpez, Pedro observes. We had to build a robot that was 10 to 12 meters tall. Before Superlpez I worked on A Monster Calls where we needed to build a monster that was also 12 or 13 meters tall, so I knew how to film a movie about this difference of dimensions and create something really big. Society of the Snow is a movie that has entirely invisible visual effects. We achieved that by traveling to the Valley of Tears, doing all of the environments with real photography, managing all of this material and developing the tools to work with five vendors at the same time while maintaining consistency. It was a lot of work.Nowadays, the protg has become a mentor. Its important to have role models, Pedro states. Now that we have the schools, maybe in 10 years I will work with more supervisors who are women. Its not about gender. Its more about the age you start doing visual effects or become a visual effects supervisor because Im 34 and other supervisors I know are 40 or 50. We are not of the same age and have different ways of thinking. Patience is a key trait. The most important thing is to be yourself and talk about things. I continue to learn by reading books and watching films. I try to remain connected with the technology and new tools, but its completely impossible to know everything. Real-time and machine learning have introduced new ways of working. There is a good and bad way of using technology. We need to be calmer because we rely on each other in the end to do the things that we love, which in turn creates an emotional response from the audience.0 Comments ·0 Shares ·237 Views
-
WICKED IS A COLLAGE OF DIFFERENT LENSES AND TALENTSwww.vfxvoice.comBy TREVOR HOGGImages courtesy of Universal Studios.Elphaba (Cynthia Erivo) and Glinda (Ariana Grande) get a private tour of the Royal Palace of Oz.In recent years, exploring the backstories of iconic villains has become more in vogue with the release of Maleficent, Joker and now Wicked, a Universal Pictures production that brings the Broadway musical adaptation of Wicked: The Life and Times of the Wicked Witch of the West by Gregory Maguire to the big screen. No stranger to musicals is filmmaker Jon M. Chu, who has been making them ever since he was a USC film school student, but this time around, the scale is a throwback to Hollywood classics such as The Wizard of Oz, with the added benefit of the visual effects industry, which didnt exist back then.There is this grandiose nature to Wicked, but from the beginning, we always wanted it to feel touchable and immersive, director Jon M. Chu explains. We wanted to break the matte painting of Oz that we have in our mind. What happens if you could live in it? What happens if you can touch the dirt and textures? Visual effects are extremely powerful to be able to do that. Of course, we worked hand in hand with building as well by planting nine million tulips and having a real train and Wizards head, but were all in it together.Jonathan Bailey as Prince Fiyero performs in front of the rotating university library set, which was constructed by the special effects team led by Paul Corbould.Massive sets were built. I firmly believe youve got to build as much set as you possibly physically can, or do as much for real as you possibly physically can because the real photography on that set informs visual effects on how everything should look, states Production Designer Nathan Crowley. That is fundamental. You cant just put a bluescreen up because youre going to get enough of that anyway. Youve got to try to balance it. The act of physical construction is extremely informative. Crowley says, The thing is, if you do a concept and dont build it, then you miss out on the art direction of making it. Doing concept art in 3D was imperative. We will build, paint and finish a 3D model and will deliver it rendered to Pablo Helman [Visual Effects Supervisor]. Pablo has to rebuild it because visual effects have to go into a lot more specific areas, but at least he knows what it should look like. We also go scout places, and even if we dont film that place, well say to Pablo and Framestore, which does a lot of the environments, Thats what we need it to look like. We need to go to the south coast down to Poole and Bournemouth and get that set of cliffs, and that becomes Shiz. Emerald City is a hard one because youre going much higher [digitally]. I would try to build enough below 50 feet so he would have textures.Cinematographer Alice Brooks stands behind director Jon M. Chu as he discusses a shot she captured with custom-made lenses by Panavision.Cynthia Erivo decided to go with practical the stairs, and it all becomes this one long Steadicam shot that makeup for the green skin of Elphaba, which was then finessed digitally in post-production.Special Effects Supervisor Paul Corbould built the The Emerald Express, which was designed personal motorized carriage for the Wizard of Oz.Unreal Engine influenced the shot design. Emerald City was the last set that was built and was behind schedule, states Cinematographer Alice Brooks. 
We had this idea for when Elphaba and Glinda get off of the train, and we start to push down the stairs, and it all becomes this one long Steadicam shot that ends on a crane that lifts up. We had been working on this for months but couldnt get into the set to start rehearsals because all of the construction cranes and painters were in there. What we did do was take the game controller, go into Unreal Engine and start designing the shot. When walking the set in Unreal Engine, we realized that this big stepping-onto-crane move didnt show off the city in any spectacular way; that being low was the way you saw the city in this amazing way. Then, we threw out our amazing Steadicam idea, which our A camera operator was bummed out about, and we created something new in Unreal Engine that was perfect.Glinda, portrayed by Ariana Grande, makes use of her bubble wand.This aerial shot of Munchkinland showcases the nine million tulips that were planted.Numerous production meetings were held to discuss how to deal with the green skin of the future Wicked Witch of the West, Elphaba, portrayed by Cynthia Erivo. We wanted to have all of the options on the table then work with Cynthia herself to know what she needed as an actor, Chu explains. We did a lot of tests with a double to show Cynthia real makeup, semi-makeup where you only do the main areas, and completely non-green makeup because we knew that makeup every day for that long of a shoot could be grueling and would also take away time from actually shooting. Cynthia was like, I need the makeup. Of course, there is some cleanup that we needed to do because sometimes her hands were thinner on certain days than others. The green skin had to look believable and work in any lighting condition. David Stoneman, who is a chemist who makes products for our industry, took my green design, which was from products called Creamy Air and Illustrator, and the discontinued product that I had found, and put three drops of yellow neon into the base, explains Hair Designer/Makeup Designer/Prosthetics Designer Frances Hannon. It reflected off the dark skin tone and made it look like it was her skin, not like it was green-painted on the surface, and more than that, it worked in every light.A lens flare, rainbow and the Yellow Brick Road are incorporated into an establishing shot of the Emerald City.The head of the Wizard of Oz was a massive animatronic puppet hung from the ceiling of the studio.Prosthetic makeup was required to show the characters of Boq (Ethan Slater) and Fiyero (Jonathan Bailey) being transformed into Tin Man and Scarecrow. One of my most important things was working with Mark Coulier [Prosthetic Makeup Designer] again, Hannon remarks. For Tin Man, we wanted to achieve something sympathetic because it should have never happened to Boq. In our story, Elphabas spell goes wrong in Nessarose [Marissa Bode]s office, and everything metal in that room attaches to Boq; his breast plate would be the tray on the table, and his hands become the thimbles, salt and peppers. Then, the visual effects took over because all the joints were blue. With Scarecrow, Jon and Mark particularly wanted to keep Jonathan Baileys face shape. We also kept his nice teeth and natural eye color for Scarecrow. I used contact lenses on Jonathan for Fiyero, so we had a nice change there. 
Then, for his head element, I put masses of gold blonde through his look as Fiyero, which carried onto Scarecrow in a straw-colored wig; that kept Fiyero attractive because Elphaba and he fall in love.Most of the 2,500 visual effects shots were divided between ILM and Framestore, with other contributors being OPSIS, Lola VFX, Outpost VFX and BOT VFX. The CG creatures were difficult because they also talk, but they are mainly animals, Helman remarks. They dont walk on two legs. If its a goat that talks and is a teacher, its basically a goat if you look at it, then he talks. It was a fine line stylizing the talking so that it doesnt feel like a completely stylized character, but also finding the expression, the eyebrows, eyes and mouth, the phonemes, and how articulate those creatures are. We had an animal unit of about 10 people or so that would play animals on set, and we would shoot a take or a few takes with them. We had a transformation scene where the monkey transforms and gets wings, so we had the whole animal unit performing and being directed by Jon. Sometimes, the second unit would stay there to shoot plates. Besides the music, dancers, choreography and huge sets, then there were the animals.The mandate was to capture as much in-camera, which gave Nathan Crowley the freedom to construct massive sets.Magic was always treated in a grounded manner. Its not a cutesy, glowing, sparkling thing, Helman notes. There is nothing wrong with those kinds of things; its just that this version of Oz is not magical. You have to remember, when you go back to the original story, the Wizard of Oz is not really a wizard. Creative solutions had to be applied to achieve the desired effect. How do you make a book glow without making it look completely fantastical and cartoony? Helman explains, Maybe what you do is provide a language inside of the book with words that may become golden that wasnt golden in the beginning. So, you see a transition between a word that is on a black ink parchment to something golden that produces a glow and is completely grounded. Broomsticks are a form of aerial transportation. We worked with the stunt department to get the center of gravity correct and to be able to move the actors around. Cynthia Erivo wanted to do her own stunts, so she did. All of that wirework was closely planned. There are two things: Theres the center of gravity and what the body is doing in the air, and the lighting. If we get those two things right then were fine, Helman says.Water was seen as the key method of transportation to Shiz.Elphaba (Cynthia Erivo) begins to master the art of flying a broom.An establishing shot of Shiz University.Elphaba (Cynthia Erivo) and Glinda (Ariana Grande) decide to take a more relaxed approach to flying by taking off in a hot air balloon.A major accomplishment was the practical realization of the Emerald City Express, which is the personal train of the Wizard of Oz. It was Nathan Crowleys vision, states Special Effects Supervisor Paul Corbould. We built the running gear and the track and motorized it. Its hydraulically driven. Construction clad it with all of the fiberglass panels. The motion was repeatable. The train could be programmed to drive to a particular spot, run down and stop at a position, and when told to start again, make its next move and run to the end of the track. You can take it back to the beginning and keep on doing that, remarks Special Effects Design Supervisor Jason Leinster. Equally impressive was the construction of the Wizards head. 
"Jason reverse-engineered the scale model and changed the electric servos to hydraulic rams and a whole control system," Corbould explains. "It progressed from that." The head was suspended from the ceiling of the stage. "It was a civil engineering project to have something like that floating in the middle of space," Leinster notes. "It was 22 axes and puppeteered by one person. Most of it was done live and was often changing."

Anti-gravity architecture serves as the basis for Kiamo Ko, which is a castle located on the peak of Knobblehead Pike.

Other complicated rigs included the rotating library. "Because it was first up in the schedule, we built one wheel and ladder, and the dancers with Chris Scott [Choreographer] rehearsed on that one," Corbould states. "As we built another one, they were rehearsing on that with more dancers, and we built a third one. It took three months." An amazing prop was the fountains. "The petals opened up, and they wanted water to come out," Corbould remarks. "We've got these hydraulic motion bases, and in the middle is a slip ring that allows you to turn a rig round and round without winding the cable up. We had to take a slip ring off, which you normally run hydraulic oil through, and put that on the fountain. It ruined it because we were running water through it; that was quite a challenge." A bricklaying machine gets pulled by a bison. "There was no bison to pull it, so the machine was self-driven," Leinster reveals. "You could sit back and steer it. We had a roadway of foam bricks rolled up inside, and as the machine drove forward, it unrolled the Yellow Brick Road. Eventually, it drove off into the sunset, being pulled by the bison. You probably won't realize that is an effect."

Madame Morrible (Michelle Yeoh) has the ability to control the weather, so there is a cloud motif to her hairstyle.

To convey the impression of a floating castle, the concept of anti-gravity architecture was developed. "Kiamo Ko isn't just a castle," Crowley observes. "It's a defiant emblem of a bygone era, a testament to the forgotten magic that once pulsed through Oz. Its architecture, though ancient, utilizes lost principles of levitation, defying gravity yet remaining grounded in a sense of order and purpose. The key to Kiamo Ko's defiance lies not in defying gravity entirely but in manipulating it subtly. Imagine a series of inverted arches, their points reaching skyward. These arches wouldn't be perfect mirrors of one another; instead, they possess a slight asymmetry, a calculated tilt that interacts with the forgotten magic of the land, generating a gentle, constant lift. This subtle slant would also provide a visual cue, hinting at the castle's orientation even from a distance. By incorporating these design elements, Kiamo Ko transcends the trope of a generic floating castle. It becomes a character itself, a silent testament to a forgotten age and a beacon of hope for Elphaba and Fiyero's new beginning."

Skies played a major role in setting the proper tone for scenes.

Throughout the whole movie, there is this idea that the sun is always rising for Glinda (Ariana Grande) and setting for Elphaba (Cynthia Erivo).

Jonathan Bailey plays the role of Fiyero, who goes on to be transformed into the iconic Scarecrow.

Lenses were developed specifically for the production (which evolved into the new series of Ultra Panatar II) that were paired with the ARRI ALEXA 65 cameras. "Jon told me that he wanted Wicked to be unlike anything anyone had ever seen before, and the photography needed to represent that," Brooks states.
"I was on the movie so early I was able to design them with Dan Sasaki at Panavision in Woodland Hills. We called them the Unlimiteds, after Elphaba singing 'Unlimited' in Wicked, because at the time they didn't have a name. Those lenses capture all of the pictures that Nathan, Jon and I put together for so many months, and they wrap the light beautifully on our actors. Usually, you're matching close-ups on the same lens, but on Elphaba, we shot her on a 65mm lens and Glinda on a 75mm lens, and we matched the size, but those two lenses did different things to their faces. Oz is a different place, and something is a little bit off everywhere. Our A and B 65mm lenses were not the same. It was a collage of lenses. Each one had such a different characteristic, and that made our movie feel different. Elphaba even has one line in the movie that goes, 'Some of us are just different.' That's what we want our Oz to be."

Musical numbers were as complicated to plan and execute as action sequences.

Various animals are part of the faculty at Shiz University, with Peter Dinklage doing facial capture and the voice of Dr. Dillamond.

Apple Vision Pro is an essential part of the editorial process. "I am overseeing the edit in the Vision Pro," Chu explains. "Instead of being trapped in a monitor on a desk, which isn't the most creative, I can be like I am in the room with Myron Kerstein [Editor], where I'm walking around or sitting on the couch. We can do visual effects approvals there too. I can bring it on and draw with my finger where certain areas need to be improved or whatnot." Hannon looks forward to seeing everything being brought together. "For me, it's seeing those finishing touches. The sets were 60 feet high, then we would have bluescreen. I do believe Paul Tazewell [Costume Designer] and myself, to the best of our abilities, gave Jon the spectacular, extraordinary and timeless look that he was after."

Wicked is spanning two movies, with the first one centered around the song "Defying Gravity" and the second around "For Good." "It's in two parts, but we shot the whole movie in one lifetime!" Helman laughs. "I look at every project as a traumatic project where you develop these scars and learn from those scars, but you wear them proudly." Teamwork reigns supreme for Chu. "Each department can make everything, but the reality is that we need to work together to make the thing that none of us can make alone. I feel lucky to be working with a team at the highest level, with the bar at the highest place for us to cross. It has been an amazing journey."
-
STREAMING AND VFX: CULTIVATING THE ABILITY TO ADAPT TO CONSTANT CHANGE
www.vfxvoice.com
By CHRIS McGOWAN

Shōgun (Image courtesy of FX Network)

Despite the lingering effects of 2023's writers' and actors' strikes, the streamers continue to disrupt the industry. "Streaming has increased the demand for VFX work and accelerated the growth of all parts of the production and post-production industries," says Tom Williams, Managing Director of DNEG Episodic.

Among the leading streamers, Netflix had 277.65 million paid subscribers worldwide as of the second quarter of 2024, according to Statista research, an increase of over eight million subscribers compared with the previous quarter, and Netflix's expenditures on content were expected to stabilize at roughly 17 billion U.S. dollars by 2024. Also, by 2024, the number of Amazon Prime members in the United States was projected to reach more than 180 million users. In Q2 2024, the number of Disney+ subscribers stood at around 153.6 million, according to Statista, while the combined number of subscribers to Warner Bros. Discovery's Max (formerly HBO Max) and Discovery+ services surpassed 103 million. Apple TV+, Hulu, Paramount+ and Peacock are among the others with significant viewership.

Such subscriber numbers have bankrolled a lot of visual effects and animation. "Streaming has been a game-changer for the VFX industry. It has significantly increased demand. With platforms constantly producing new content, visual effects studios have more opportunities than ever before," comments Valérie Clément, VFX Producer at Raynault VFX Visual Effects & Environments. "The rise of streaming has also shifted the focus from traditional films to high-budget series, which has diversified the types of projects we work on at Raynault." Jennie Zeiher, President of Rising Sun Pictures (RSP), remarks, "The advent of streaming had a huge impact that we're still feeling today, not only for global consumers, but studios, production companies, TV channels, post houses, VFX studios; the entire industry was impacted. [It was a major disruption in the industry] that changed how content was consumed."

The Last of Us (Image courtesy of HBO)

BUDGETS & MODELS

Streaming changed the way the industry was divided up and took away market share from broadcast and theatrical, according to Zeiher. She explains, "In 2017, RSP's work was still wholly theatrical. We predicted that over the course of that year, we would be progressively taking on more streaming projects, and that the year following, our work would be distributed 50/50. This indeed played out, and it tells the story of how a disruptive change can affect a business model. Fast forward to today, the industry is more complex than ever, made more so by the fact that streaming opened up distribution to a global, multi-generational audience, which is more diverse than ever."

"Everyone is more budget-conscious at the moment, which is not a bad thing for VFX as it encourages more planning and the use of previs and postvis, which helps everyone deliver the best possible end product," Williams says. "We are a technology-driven industry that is always moving forward, combined with incredible artists, so I think we will always see improvements in quality." Zeiher adds, "I think studios are still trying to settle on their model. There are fewer big hits due to diversity in taste, and there are more risks around greenlighting productions at a higher price point. What made a hit five or 10 years ago isn't the same as it is today.
There is more diverse product in the pipeline to attract more diverse audiences. The streamers are producing high-end series, but they are more concentrated in a handful of studios."

3 Body Problem (Image courtesy of Netflix)

House of the Dragon (Image courtesy of HBO)

Foundation (Image courtesy of AppleTV+)

The Lord of the Rings: The Rings of Power (Image courtesy of Prime Video)

The Boys (Image courtesy of Prime Video. Photo: Jan Thijs)

SHARING WORK

"Productions normally split work between multiple vendors," Zeiher notes. "This work can be sensitive to timing and schedule changes. Therefore, VFX vendors need to have a plan on how they manage and mitigate any changes in schedule or type of work. Besides capability and the quality of the creative, this is the biggest singular challenge for VFX vendors and is the secret to a successful studio!" Zeiher adds, "Studios have always split work between multiple vendors, and only in limited scenarios kept whole shows with single vendors, and this continues to be the trend. The studios are splitting work among their trusted vendors who have the capability in terms of crew and pipeline to hit schedules and manage risks."

"The increase in work has meant that more shows than ever before are being shared between different VFX houses, so that will add to the cooperation. Being a relatively young industry, it doesn't take long to find a mutual connection or 10 when you meet someone else from VFX at an event," Williams says. Comments Wayne Stables, Wētā FX's VFX Supervisor on House of the Dragon Season 2, "I'm not sure that I've seen a big change [in business and production models]. We bring the same level of creativity and quality to everything we do, be it for feature film or streaming, and use the same tools and processes. I approach it the same way as I would working on a film. I think episodic television has always pushed boundaries. I remember when Babylon 5 came out [and] being amazed at what they were doing, and then seeing that ripple through to other work such as Star Trek: Deep Space Nine."

Fallout (Image courtesy of Prime Video)

In Your Dreams. Coming in 2025. (Image courtesy of Netflix)

The Wheel of Time (Image courtesy of Prime Video)

HIGHER EPISODIC QUALITY

Working with the VFX studios, the streamers have set the visual effects bar high by bringing feature-film quality to episodic television. "Game of Thrones comes to mind despite starting before the streaming boom. It revolutionized what viewers could expect from a series in terms of production value and storytelling. Later seasons had blockbuster-level budgets and cinematic visuals that rivaled anything you'd see in theaters," Clément says. "Netflix has also made significant strides with shows like Stranger Things, which combines appealing aesthetics and compelling storytelling, and The Crown, known for its luxurious production design and attention to detail. Also, series like Westworld and Chernobyl both deliver sophisticated narratives with stunning visuals that feel more like feature films than traditional TV. These are just a few examples, of course. The range of projects that have made a significant impact in the streaming world is vast."

Zeiher also points to the streaming titles The Rings of Power, Avatar: The Last Airbender, Shōgun, Monarch: Legacy of Monsters, Loki Season 2, Fallout [and] the Star Wars universe, with recent series such as Andor, Ahsoka and The Acolyte, as having brought feature-film quality to episodic.
Stables comments, "As the techniques used on big visual effects films have become more common, we have seen more high-end work appear everywhere. Looking at work in Game of Thrones and then, more recently, Foundation and through to shows like Shōgun. And, of course, I am proud of our recent work on House of the Dragon Season 2, Ripley and The Last of Us."

EXPECTATIONS

"The expectation of quality never changes; showrunners, writers and directors can spend years getting their visions greenlit, and no one is looking to cut corners. We all want to do our best work, regardless of the end platform," Williams says. Regarding the delivery dates for series episodes, Stables comments, "I haven't ever found the timeframes to be short. The shows tend to be very structured with the fact that you have to deliver for each episode, but that just brings about a practicality as to what is important. As with everything, the key is good planning and working with the studio to work out the best solution to problems." Clément says, "While the compressed timelines can be challenging, the push for high-quality content from streaming platforms means that we are constantly striving to deliver top-notch visuals, even within tighter schedules. This is always exciting for our team."

Sakamoto Days. Coming in 2025. (Image courtesy of Netflix)

A Knight of the Seven Kingdoms: The Hedge Knight. Coming in 2025. (Image courtesy of HBO)

CHANGES IN THE STREAMER/VFX RELATIONSHIP

"I think that showrunners and studios are seeing that it is now possible to create shows that perhaps in the past were not financially feasible. So, we are developing the same relationships [with the streamers] that we have had with the film studios, seeing what we can offer them to help tell their stories," Stables states. "Relationships can be reciprocal, or they can be transactional," Zeiher observes. "In VFX, we very much operate in a reciprocal relationship with the studios and their production teams; it's a partnership at every level. Our success is based on their success and theirs on ours."

Knuckles (Image courtesy of Paramount+ and Nickelodeon Network)

GLOBAL COOPERATION

Streaming is enhancing global cooperation among VFX studios by creating a greater need for diverse talent and resources. Clément says, "As streaming platforms produce more content, studios around the world are teaming up to manage the growing amount and complexity of VFX work. Advances in remote work technology and cloud tools make it easier for teams from different regions to collaborate smoothly and effectively." Zeiher explains, "RSP's work on Knuckles is a great example of global, inter-company collaboration. Instead of using a single vendor, the work was split between several, mostly mid-size, vendors. The assets were built to a specification and shared using Universal Scene Description, allowing asset updates to be rolled out simultaneously across vendors and providing a consistent look across the characters. Paramount's approach to Knuckles was very smart and could be indicative of future workflows."

The Witcher: Sirens of the Deep. Coming in 2025. (Image courtesy of Netflix)

"VFX is a tumultuous industry and, off the back of the WGA and SAG-AFTRA strikes, we've entered a time of consolidation," says Zeiher. "Studios, often backed by private equity, are acquiring small to mid-size studios. This is helping them to distribute work globally across many jurisdictions.
Dream Machine is an example of this new collaborative model with its recent acquisition of Important Looking Pirates and Cumulus VFX, joining Zero, Mavericks and Fin Design. Likewise, RSP has its sister studios FuseFX, FOLKS and El Ranchito under its parent company Pitch Black; it's a new form of global collaboration, mid-size studios with different offerings across brands and locations who can collaborate under one banner."

"I think that the streaming distribution model was the first disruption, and that distribution continues to evolve," Zeiher comments. "The production model may now be disrupted through the use of GAI. Combining the distribution evolution, audience consumer changes and using GAI in production, we're in for lots more changes in the year(s) to come." Clément states, "As streaming platforms experiment with new content formats and distribution methods, VFX studios will adapt to different types of media and storytelling approaches."
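As a concrete illustration of the USD-based asset sharing Zeiher describes on Knuckles, here is a minimal sketch using the open-source pxr Python bindings. The file names and prim paths are hypothetical; the point is only the pattern: one asset is published to an agreed specification, each vendor's shot stage references it, and local tweaks live in overrides, so a re-published asset flows to every facility without disturbing their work.

```python
# Hypothetical sketch of multi-vendor asset sharing via USD referencing.
# "shot_010.usda", "knuckles_asset.usda" and the prim paths are invented
# for illustration; only the publish/reference/override pattern matters.
from pxr import Usd, UsdGeom

# A shot stage owned by one vendor.
stage = Usd.Stage.CreateNew("shot_010.usda")
UsdGeom.Xform.Define(stage, "/World")

# Reference the character asset published to the agreed specification.
# When the asset file is re-published, every stage referencing it updates.
character = stage.DefinePrim("/World/Knuckles")
character.GetReferences().AddReference("knuckles_asset.usda")

# Vendor-local tweaks are authored as overrides layered on top, leaving the
# shared asset untouched for the other facilities.
stage.OverridePrim("/World/Knuckles/Looks")

stage.GetRootLayer().Save()
```

A real pipeline would add versioning, payloads and variant sets on top of this, but the referencing mechanism is what lets several facilities consume one asset consistently.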
-
AI/VFX ROUNDTABLE: REVOLUTIONIZING IMAGERY - THE FUTURE OF AI AND NEWER TECH IN VFX
www.vfxvoice.com
By JIM McCULLAUGH

Here features a de-aged Tom Hanks and Robin Wright. Their transformations were accomplished using a new generative AI-driven tool called Metaphysic Live. (Image courtesy of Metaphysic and TriStar Pictures/Sony)

The VFX industry is still in the formative stage of a revolutionary transformation, driven by rapid advancements in artificial intelligence (AI) and its tech cousins VR, Virtual Production, AR, Immersive and others. As we begin 2025, AI promises to redefine both the creative and technical workflows within this dynamic field. To explore the potential impacts and necessary preparations, a roundtable of leading experts from diverse corners of the global VFX industry brings insights from their experiences and visions for the future, addressing the critical questions.

Q. VFX VOICE: How do you foresee AI transforming the creative and technical workflows in the visual effects industry by 2025, and what steps should professionals in the industry take today to prepare for these changes? Are we entering AI and Film 3.0, the phase where filmmakers are figuring out workflows that put together a string of specialized AI tools to serially generate an actual project? There is still lots of fear (era 1.0) and cautious experimentation (era 2.0), but the most forward-looking are figuring out actual production processes.

With the help of Metaphysic AI, Eminem's music video "Houdini" created a version of Eminem from 20 years ago. Metaphysic offers tools that allow artists to create and manage digital versions of themselves that can be manipulated. (Images courtesy of Metaphysic and Interscope Records)

Blue Beetle marked the first feature film where Digital Domain used its proprietary ML Cloth tool, which captures how Blue Beetle's rubber-like suit stretches and forms folds and wrinkles in response to Blue Beetle's movements. (Image courtesy of Digital Domain and Warner Bros. Pictures)

A. Ed Ulbrich, Chief Content Officer & President of Production, Metaphysic
By 2025, AI will profoundly reshape the visual effects industry, enabling creators to achieve what was once deemed impossible. AI-powered tools are unlocking new levels of creativity, allowing artists to produce highly complex imagery and effects that were previously out of reach. These innovations are not only pushing the boundaries of visual storytelling but also drastically cutting costs by automating labor-intensive tasks and streamlining workflows.

Moreover, AI will accelerate production and post-production schedules, transforming the entire filmmaking process. With AI handling time-consuming tasks, teams can focus more on the creative elements, leading to faster, more dynamic productions. To stay ahead, professionals should embrace AI, continuously learning and adapting to rapid advancements, ensuring they are prepared to harness these tools to their fullest potential. AI-powered filmmaking tools are like jet fuel for creativity.

Fuzzy Door Tech's ViewScreen in action from the Ted TV series. ViewScreen Studio is a visualization tool that enables real-time simulcam of visual effects, while ViewScreen Scout is an app for iPhone. ViewScreen Studio visualizes and animates a complete scene, including digital assets, in real-time and for multiple cameras simultaneously. (Image courtesy of Fuzzy Door Tech)

Harrison Ford transforms into Red Hulk for Captain America: Brave New World. (Image courtesy of Marvel Studios)
A. Lala Gavgavian, Global President & COO, Digital Domain
AI tools are already making strides in automating rotoscoping, keying and motion capture cleanup, which are traditionally labor-intensive and time-consuming tasks. In 2025, these tools will be more sophisticated, making post-production processes quicker and more accurate. The time saved here can be redirected to refining the quality of the visual effects and pushing the boundaries of what's possible in storytelling. AI has the possibility of being added to the artist's palette, allowing expansion to experiment with different styles in a rapid prototyping way. By harnessing the power of AI, VFX professionals can unlock new levels of creativity and efficiency, leading to more immersive and personalized storytelling experiences.

We are indeed moving into what could be considered the AI and Film 3.0 era. This phase is characterized by transitioning from fear (1.0) and cautious experimentation (2.0) to practical application. Filmmakers and VFX professionals are now figuring out workflows integrating specialized AI tools to create full-fledged projects. These tools can handle everything from pre-visualization and script breakdowns to real-time rendering and post-production enhancements. However, this transition is not without its challenges. There will be concerns about job displacement and the ethical implications of AI-generated content. To address these issues, the industry must adopt a balanced approach where AI augments human creativity rather than replacing it. Transparent discussions about the role of AI and its ethical implications should be held, ensuring that the technology is used responsibly.

A. Brandon Fayette, Co-Founder & Chief Product Officer, Fuzzy Door Tech
By 2025, AI is poised to significantly transform both creative and technical workflows in the visual effects industry. AI's impact is already evident in the entertainment sector, and it is set to become the standard for automating repetitive tasks such as shot creation and rendering. This automation is not limited to VFX; we can see AI's efficiency in code generation, optimization, testing and de-noising audio, images and video. Technical workflows will become more flow-driven, utilizing AI to dynamically adapt and drive the desired creative results. This means AI will assist in creating templates for workflows and provide contextual cues that help automate and enhance various stages of the creative process.

AI is advancing rapidly, with new tools and techniques emerging almost daily. To stay ahead of these changes, VFX professionals should remain aware of new trends in AI and generative content. Continuous learning and adaptation will be crucial. However, the industry needs to establish standards and guidelines to ensure AI complements rather than compromises the artistic process. Our focus with the ViewScreen family of ProVis tools is on using AI to support and enhance human creativity, not replace it. By improving processes across production workflows, AI can make jobs easier while respecting and preserving the craft and expertise of entertainment professionals.

With GPU-accelerated NVIDIA-Certified Systems combined with NVIDIA RTX Virtual Workstation (vWS) software, professionals can do their work with advanced graphics capabilities from anywhere, able to tackle workloads ranging from interactive rendering to graphics-rich design and visualization applications or game development. (Image courtesy of NVIDIA)

Examples of joint deformations before and after AI training shapes.
(Image courtesy of SideFX)A. Nick Hayes, ZEISS Director of Cinema Sales, U.S. & CanadaThis past year, we have already seen fingerprints left by AI in both the technical and creative sides of the film industry.Companies like Strada are building AI-enabled production and post-production toolsets to complete tasks widely considered mundane or that nobody wants to do. In turn, this new technology will allow VFX artists and post-production supervisors more freedom to focus on the finer details and create out of this world visuals never seen before. I see this resulting in a higher grade of content, more imagination and even better storytelling.Recently, Cinema Synthetica held an AI-generated film contest. The competition founders argued that the use of generative AI empowers filmmakers to bring their stories to life at a much lower cost and faster than traditional filmmaking methods. Now, creatives can use software tools from companies like Adobe and OpenAI to create content from their minds eye by simply describing their vision in just a few sentences. In a way, the use of AI can be inspiring, especially for filmmakers with lower budgetsand less experience. In fact, in the next 12-24 months, we will see a surge of highly entertaining, imaginative content created by humans, assisted by AI.Character poses created in Houdini and used for AI training of joints. (Image courtesy of SideFX)Final result of posed character after AI training of joints, created and rendered in Houdini by artist Bogdan Lazar. (Image courtesy of SideFX)To stay ahead, professionals should embrace AI, continuously learning and adapting to rapid advancements, ensuring they are prepared toharness these tools to their fullest potential. AI-powered filmmaking tools are like jet fuelfor creativity.Ed Ulbrich, Chief Content Officer & President of Production, MetaphysicThere will be concerns about job displacement and the ethical implications of AI-generated content. To address these issues, the industrymust adopt a balanced approach where AI augments human creativity rather than replacing it. Transparent discussions about the role of AI and its ethical implications should be held, ensuring that the technology is used responsibly.Lala Gavgavian, Global President &COO, Digital DomainA. Neishaw Ali, Founder, President, Executive Producer, Spin VFXAI is set to transform the VFX industry by automating repetitive tasks, enhancing creativity and enabling real-time rendering. By staying up-to-date with AI tools, collaborating across disciplines, experimenting with new technologies and focusing on creative skills, professionals can effectively prepare for and leverage these advancements to enhance their workflows and deliver more innovative and compelling visual effects.We have been working with AI for many years in VFX and only now is it made available at a consumer level and poised to significantly transform both creative and technical workflows in the visual effects industry in several key areas such as: Concept Development Allows for visual ideation among the director, creative teamand VFX to solidify a vision in hours rather than weeks. 
It enables real-time alignment of the creative vision through text-to-image generation, a process not unlike Google image searches but far more targeted and effective.Automation of Repetitive Tasks Automation of repetitive and non-creative tasks such as rotoscoping and tracking will significantly reduce the time and effort required for these laborious processes thus allowing our artists to concentrate more on the creative aspects of the scene, which is both energizing and inspiring for them.Face Replacement AI is revolutionizing face replacement by enhancing accuracy and realism, increasing speed and efficiency, and improving accessibility and cost-effectiveness, allowing for high-quality face replacement for a wide range of applications. Proper authorization and clearance are necessary to ensure we do no harm to any likeness or person.Real-time rendering Though not only AI-driven, real-time rendering is most certainly changing the VFX workflow. As the quality of final renders becomes more photorealistic and AI-enabled technologies like denoising and upresing allow formore complex scenes to be scalable in software like Unreal Engine, the design and iteration process will accelerate. Changes can be instantly viewed and assessed by everyone.Steps for Professionals to Prepare: I believe one of the biggest challenges for some VFX artists and professionals is understanding that embracing AI does not mean sacrificing anything. Instead, it allows you to work smarter and more effectively, dedicating more time to creative tasks rather than monotonous, repetitive ones.A. Antoine Moulineau, CEO & Creative Director, Light Visual EffectsAI feels like the beginning of CGI 30 years ago when a new software or tool was out every week. There are a lot of different techs available, and its very hard to focus on one thing or invest in specific workflows. At LIGHT, we are focusing on better training artists with Nukes Copycat and new tools such as comfyUI. Up-res or frame interpolation are already huge time-savers in producing high-res renders or textures. AI like Midjourney or FLUX has already disrupted massively concept art and art direction; they play now a major part in the workflow. 2025 will be about animated concepts and possibly postvis if the tools mature enough to have the control required. Animating concepts with tools such as Runway 3.A major blocker for final use remains controlling the AI and the lack of consistency of the tools. As said earlier, there is so much happening now, that it is hard to keep up or rely on the tools to be able to integrate in a pipeline.I dont know if it will be for 2025, but I can see AI disrupting the CGI pipelines in the very short term; generative AI could replace traditional rendering in many scenarios and reduce the need for texturing or high-resolution modeling in the near future, specifically for wide environments. Lip-sync is also a process where AI is really shining and will disrupt traditional workflows in 2025.We will start seeing directors preparing an AI version of their films with an edit of animated concepts with music during the pitching/concept phase, especially for advertising. This is such a helpful process to understand and communicate their vision. Its kind of a Moodboard 3.0, and I can certainly imagine this process becoming the norm very quickly. For very short-form social content, it will probably replace entirely traditional workflows. 
That being said, I think long-form remains an art form where actors and performance remain central, and I dont see AI taking over anytime soon. It is hard for me to see the point of that. We need real people to identify with so we can connect to the content. Art is about the vision; it captures society and the world as it is in the time it is made. In other words, AI remains a gigantic database of the past, but we still need the human creation process to create new art. A good example is, AI wouldnt be able to generatea cartoon version of a character if someone hadnt invented cartoon previously. It will accelerate processes for sure but not replace them.A. Christian Nielsen, Creative Director, The MillPredicting the future is challenging, especially given AIs rapid advancement. However, I anticipate an increasing integration of AI tools into the VFX pipeline. Were already seeing this to some degree with AI-powered rotoscoping and paint tools, which address some of the most common repetitive tasks in VFX.Additionally, inpainting and outpainting techniques are emerging as powerful tools for removing elements from shots and creating set extensions. ComfyUI has already become an integral part of many AI pipelines, and I foresee its integration expanding across most VFX studios.I strongly recommend that everyone in the VFX industry familiarize themselves with AI to better understand its capabilities and implications. The integration of AI into VFX is both inevitable and unstoppable.AI is advancing rapidly, with new tools and techniques emerging almost daily. To stay ahead of these changes, VFX professionals should remain aware of new trends in AI and generative content. Continuous learning and adaptation will be crucial. However, the industry needs to establish standards and guidelines to ensure AI complements ratherthan compromises the artistic process.Brandon Fayette, Co-Founder & Chief Product Officer, Fuzzy Door TechIn a way, the use of AI can be inspiring, especially for filmmakers with lower budgets and less experience. In fact, in the next 12-24 months, we will see a surge of highly entertaining, imaginative content created by humans, assisted by AI.Nick Hayes, ZEISS Director of Cinema Sales,U.S. & Canada[O]ne of the biggest challenges for some VFX artists and professionals is understandingthat embracing AI does not mean sacrificing anything. Instead, it allows you to work smarter and more effectively, dedicating more time to creative tasks rather than monotonous, repetitive ones.Neishaw Ali, Founder, President, Executive Producer, Spin VFXI can see AI disrupting the CGI pipelines in the very short term; generative AI could replace traditional rendering in many scenarios andreduce the need for texturing or high-resolution modeling in the near future, specifically for wide environments. Lip-sync is also a process where AI is really shining and will disrupttraditional workflows in 2025.Antoine Moulineau, CEO & Creative Director, Light Visual EffectsTheres still progress to be made before full text-to-video tools like Runwayml Gen-3 or Sora can be used to create complete AI commercials or movies. The main challenge is the lack of precise control with AI. If a director dislikes a specific element in a shot or wants to make changes, theres currently no way to control that. As a result, AI tools are generally not very director-friendly. At present, these tools work best for ideation and conceptdevelopment, like how we use Midjourney or Stable Diffusion for still concepts. 
Initially, AI could be used for creating stock elements, but Im confident that OpenAI and others are working on giving users more control.Over the past 12 months, weve used AI for several commercials and experiences, learning as we go. This technology is so newin the VFX industry that theres little experience to draw from, which can lead to some long workdays.A. Mark Finch, Chief Technology Officer, ViconThe industry is going through considerable change as audience preferences and consumer habits have evolved significantly in recent years. More people are staying in than going out, tentpole IPs are reporting decreased excitement and financial returns, and weve seen a period of continuous layoffs. As a result, theres a lot of caution and anticipation as to whats next.In a transitional period like this, people are looking at the industry around them with a degree of trepidation, but I think theres also a significant amount of opportunity waiting to be exploited. Consumer hunger for new worlds and stories powered by VFX and new technologies is there, along with plenty of companies wanting to meet that demand.For the immediate future, I predict were going to see a spike in experimentation as people search for the most effective ways of utilizing these technologies to serve an audience whose appetite knows no bounds. Vicon is fueling that experimentation with our work in ML/AI, for example, which is the foundation of our markerless technology. Our markerless solution is lowering the barriers to entry to motion capture, paving the way for new non-technical experts to leverage motion capture in their industries.An example weve come to recognize is giving animators direct access to motion capture who historically would have only had access to it through mocap professionals on the performance capture stage, which is expensive and in high demand. This unfettered access reduces the creativity iteration loop, which ultimately leads to a faster final product that is representative of their creative dream.Theres a lot of excitement and noise surrounding the rapid growth of AI and ML-powered tech. Its impossible to look anywhere without seeing tools that encourage new workflows or provide enhancements to existing ones. A consequence of this is that you can fall into the mindset of, This is the way everything is going to be done, so I need to know about it all. When technology is moving so fast, you risk spreading yourself thin across a wealth of tools that are still finding their feet and may themselves be redundant, replaced or improved beyond recognition in the future.The best preparation comes from understanding the problem before the solution, in other words, identifying the obstacle you need to overcome first. You get this by focusing on people speaking to them about their challenges, researching those that exist across their industry in general, and gaining an understanding of why a certain tool, workflow or enhancement might exist.A. Paul Salvini, Global CTO, DNEGAI, especially machine learning, is poised to significantly impact the visual effects industry, transforming both creative and technical workflows. At DNEG, we are investing in the development of new AI-enabled tools and workflows to empower artists and enhance the creative process. 
For us, storytelling remains paramount so our use of AI is directed towards activities that provide better feedback for artists and deeper creative control.In terms of artist-facing tools, some of the areas likely to see early adoption of AI and ML techniques throughout 2025 include: Improving rendering performance (providing faster artist feedback); automating repetitive tasks; procedurally creating content; generating variations; and processing, manipulating and generating 2D images.AI techniques and tools are being increasingly used to generate ideas, explore creative alternatives and build early stand-ins for various locations, characters and props. As with all new tools, professionals can prepare by learning the basics of AI, and seeing how these tools are already being explored, developed and deployed in existing industry-standard packages.Some AI and ML tools work invisibly, while others require direct user involvement. An abundance of publicly available and user-friendly websites has emerged, allowing artists and the general public to experiment with various ML models to better understand their current capabilities and limitations.These new tools, while impressive, further emphasize the importance of human creativity, communication and collaboration. Our collective job of developing and bringing great stories to life remains unchanged. However, as our tools improve, we can dedicate more time to creative endeavors and less on mundane tasks. This is truly a better way to create better content.A. Christopher Nichols, Director, Chaos LabsMachine learning has been transforming the industry for years, so its nothing new to VFX artists. Especially when it comes to digital humans, rotoscoping, fluid sims and analyzing data/camera tracking information. AI will continue to take on a bigger piece of the workflow and replace a lot of traditional VFX techniques in time. The industry will just continue to adapt.Creating high-level content is going to become much more accessible, though. Soon, independent filmmakers will create shots that would have been the sole domain of high-end VFX houses. This will free the latter to experiment with more ambitious work. Currently, Chaos is trying to help artists get to LED screens faster via Project Arena and NVIDIA AI technology; youll likely see AI solutions become commonplace in the years ahead. Youll also probably see fewer artists per project and more projects in general, too, as AI makes things more affordable. So instead of 10 movies a year with 1,000 VFX artists on each movie, itll be more like 1,000 films with 100 names per project.The elephant in the room is generative AI. However, the big movie studios are reluctant to use it due to copyright issues. Right now, the matter of where the data is coming from is being worked out through the court system, and those decisions will influence what happens next. That said, I dont think an artist will bereplaced by a prompt engineer anytime soon. The best work you see coming out of the generative AI world is being done by artists who add it to their toolsets. You still must know what to feed these tools and artists know that better than anyone.I strongly recommend that everyone in the VFX industry familiarize themselves with AI to better understand its capabilities andimplications. 
The integration of AI into VFX is both inevitable and unstoppable.Christian Nielsen, Creative Director, The MillA consequence of [the rapid growth of AI] is that you can fall into the mindset of, This is the way everything is going to be done, so Ineed to know about it all. When technology is moving so fast, you risk spreading yourself thin across a wealth of tools that are stillfinding their feet and may themselves be redundant, replaced or improved beyond recognition in the future.Mark Finch, Chief Technology Officer, ViconOur collective job of developing and bringing great stories to life remains unchanged.However, as our tools improve, we can dedicate more time to creative endeavors and less on mundane tasks. This is truly a better way to create better content.Paul Salvini, Global CTO, DNEG[Y]oull likely see AI solutions become commonplace in the years ahead. Youll also probably see fewer artists per project andmore projects in general, too, as AI makes things more affordable. So instead of10 movies a year with 1,000 VFX artists on each movie, itll be more like 1,000 films with 100 names per project.Christopher Nichols, Director, Chaos LabsA. Greg Anderson, COO, Scanline VFX and Eyeline StudiosIn 2025, AI tools and technology are poised to significantly transform how visual effects are created, from automating the most mundaneof tasks to expanding the possibilities of the most complex visual effects sequences. Several compositing packages already incorporate AI-based features that greatly improve rotoscoping, tracking, cleanup speed and quality. These features will continue to improve in 2025, allowing artists to spend more time on the final quality of shot production. The ongoing and fast-moving development of generative AI tools and features will change the process, efficiency and quality of everything from digital environments to effects and character animation.From a technical and production workflow standpoint, AI will continue to optimize render processes, allowing for more iterations and leading to more convincing imagery that is faster and cost-effective. New tools will assist VFX teams in organizing, managing and accessing vast libraries of digital assets, making it easier for artiststo find and reuse elements across different projects. Data-driven insights will also allow AI tools to predict which assets might be needed based on project requirements.Overall, AI technology is poised to revolutionize the VFX industry next year and beyond, as weve only yet to scratch the surface of what will be possible. In preparation, anyone working in the VFX industry should lean heavily toward curiosity, continuous learning and skill development. Time spent experimenting with AI tools and technologies in current workflows will heighten the understanding of AIs capabilities and limitations. Additionally, while AI can enhance many technical aspects, creativity remains a human domain. Artists should focus on developing artistic vision, storytelling skills and creative problem-solving abilities.A. David Lebensfeld, President and VFX Supervisor, Ingenuity Studios and Ghost VFXIn 2025, we will see a continuation of idea genesis happening by leveraging generative AI tools. We will also find that our clients use generative AI tools to communicate their ideas by leveraging easy-to-use tools they have never had before. 
The sacrifice being controllability, but the benefit is ease of communication.Most of our studio clients have a real sensitivity to how AI is being used on their projects, and they want it to be additive to the projects versus a threat to the ecosystem. In the short term, generative AI will be used more as a tool for communication than it is for execution.Well continue to see AI-based tools in our existing software packages, giving both in-house and vendor tool developers and software developers room to expand their offerings. While AI advancements will continue to improve existing toolsets, they wont replace team members at scale, especially in the high-end part of the market.Looking ahead, I think the best professionals in our industry are already dialed in to developing toolsets and new technologies. Its always been the case that you have to be agile and stay aware of continual software and hardware developments. VFX is theintersection of technology and art; you must know and constantly improve both to stay competitive. Also on a professional level, I dont think well see meaningful changes in 2025 to how VFX final pixels get made at the studio side, for a multitude of reasons, two being a lack of granular control and sour optics.How people are talking about AI can often feel like a marketing trick. Everyone is using the same basic technology layer, and that always gets better as all boats rise. Like anything else, the people who know and leverage advanced technology the best and the most creatively will continue to win.A. Mathieu Raynault, Founder, Raynault VFXWhen I first thought about how AI might affect the visual effects industry, I felt both skeptical and anxious. But since I started in computer graphics in 1996, I havent seen anything with this much potential for exciting transformation.At Raynault VFX, AI is set to significantly boost our efficiency by automating routine tasks and letting our team focus more on the creative parts of our projects. Were a small team of 55 and creativity is at the heart of what we do. Weve started using AI to increase our productivity without sacrificing our artistic integrity. With seven full-time developers, were heavily invested in research and development, including AI, to improve our workflows.Looking ahead, I see AI enhancing our current tools, helping us keep control over the creative process and refine our work with client feedback. This blend of AI and human creativity is crucial because filmmakers will still rely on creative teams to bring their visions to life. Although theres some worry about AIs ability to create entire films or TV shows on its own, I think these tools wont replace human-driven filmmaking anytime soon.AI will certainly transform our workflows and could lead to shifts in employment within our industry. VFX artists will become more productive, able to deliver more work in less time, which might lead to a reduction in job numbers compared to pre-strike highs. For VFX professionals, integrating AI into their workflows is essential, yet its crucial to preserve and enhance our existing skills. In the field of concept art, for example, AI can assist in drafting initial designs, but the intricate process of refining these concepts to align with a directors vision will still require human expertise. Artists who can both direct AI and iterate while creating concept art themselves will be invaluable.In summary, Im quite optimistic. 
As we move toward 2025, adopting AI requires us to change our skills and approaches to stay competitive and innovative. As a business owner in the VFX industry, its incredibly motivating!AI technology is poised to revolutionize the VFX industry next year and beyond, as weve only yet to scratch the surface of what will be possible. In preparation, anyone working inthe VFX industry should lean heavily toward curiosity, continuous learning and skilldevelopment.Greg Anderson, COO, Scanline VFX and Eyeline StudiosHow people are talking about AI can oftenfeel like a marketing trick. Everyone is using the same basic technology layer, and that always gets better as all boats rise. Like anything else, the people who know and leverage advanced technology the best and the most creatively will continue to win.David Lebensfeld, President andVFX Supervisor, Ingenuity Studios and Ghost VFXWhen I first thought about how AI might affect the visual effects industry, I felt both skeptical and anxious. But since I startedin computer graphics in 1996, I havent seen anything with this much potential for exciting transformation.Mathieu Raynault, Founder,Raynault VFXI know some people look at AI and its use as being somehow catastrophic for our business but, at the end of the day, I think itll be just another tool in our arsenal and, used wisely, a great one. The faster artists and companies embrace it and learn to use it in their workflows, the better, and were already seeing that adaptation now.Viktor Mller, CEO, Universal Production Partners (UPP)A. Viktor Mller, CEO, Universal Production Partners (UPP)To some extent, AI has already begun to transform the industry.We see demonstrations of its growing capabilities almost on a weekly basis, and there seems to be a lot of fear around that.Honestly, Im not worried about it at all. I could sense it coming long before it started turning up in the media, which is why UPP has been quietly building out our VP and AI departments for the last six years.I know some people look at AI and its use as being somehow catastrophic for our business but, at the end of the day, I think itllbe just another tool in our arsenal and, used wisely, a great one. The faster artists and companies embrace it and learn to use it in their workflows, the better, and were already seeing that adaptation now.A. Kim Davidson, President & CEO, SideFXOver the past year, we have seen several advancements in AI in the visual effects industry and we expect this to continue in 2025. So far, the advancements have been more evolutionary than revolutionary. AI is not replacing creatives or the production pipeline butis greatly speeding up many of the more mundane tasks while not fully eliminating them yet. Tracking and rotoscoping are key examples of tasks that have been improved and sped up. We predict that 2025 will see more AI-based tools being used throughout the pipeline, with improved AI implementations andsome brand-new tools. These AI-enhanced workflows will include design concept, asset (model and texture) creation, motion stabilization, improved character animation and deformation (e.g. clothing, hair, skin), matching real-world lights, style transferring, temporal denoising and compositing.Of course, there will be improvements (and more releases) of prompt-based generative video applications. But for a variety of reasons we dont see this as the best workflow for creative professionals, certainly not the be-all and end-all for art-directed content creators. 
We believe in providing artists with AI/ML-enhanced toolsets to bring their creative visions to life more quickly and efficiently, allowing for more iterationsthat should lead to higher quality. We are at an exciting stage in the confluence of powerful hardware andAI-enhanced software where creative talent will be more important than ever and able to harness creative platforms to tell stories in truly extraordinary new ways.A. Dade Orgeron, Vice President of Innovation, Shutterstock2025 is here, but with generative AI technology moving so quickly, I think we can expect to see AI continueto transform the visual effects industry, particularly through advancements in generative video and 3D tools. As AI models continue to improve, we can expect notable enhancements in temporal consistency and reduced distortion, along with compositing tools to help seamlessly integrate AI-generated content into live-action footage or easily remove/replace unwanted people or objects. In the next wave of generative video models, complex mechanical devices and other intricate details will be represented with unprecedented precision, and advanced dynamics and fluid simulations will start to become achievable with generative video rather than traditional, time-consuming simulation engines. Will it be perfect? Maybe not in the next six months, but perhaps within the next year.To prepare for these advancements, VFX professionals should invest in upskilling themselves in AI and machine learning technologies. Understanding the capabilities, and particularly the limitations of AI-driven tools, will be essential. They should experiment with generative image and video technologies as well as 3D tools that leverage AI to streamline their workflowsand enhance their creative skills. Thats something at Shutterstock that we are actively enabling through partnerships with NVIDIA and Databricks. For instance, weve developed our own GenAI models to accelerate authentic creative output, all with ethically sourced data. Early adoption and a shift towards embracing new technologies and methodologies will enable artists and technicians to remain competitive and innovative in these rapidly evolving times.A. Gary Mundell, CEO, Tippett StudioThe big question is: What will AI mean to us in 2025? As we move through the Gartner Hype Cycle, AI seems to be transitioning from the Trough of Disillusionment into the Slope of Enlightenment, much like the early days of the .com era. AI is poised to bring a suite of tools that handle obvious tasks roto, match move, res up, FX but thats just the tip of the iceberg. Anything described by a massive database can use AI. If youcan articulate your prompts, and theres a database to train the answers, youre set. Forget influencers soon, prompters will drive production with AI-generated insights.By 2025, AI will fundamentally change VFX production. Imagine a system capable of generating an entire schedule and budget through prompts. AI could create a VFX schedule for a 1,200-shot project, complete with budgets, storyboards, 3D layouts and animatic blocking, all tailored to a directors style and the level of complexity. However, where todays AI falls short is in the temporal dimension it struggles with believable, complex animation. Current engines tend to produce flowy, slow visuals lacking continuity, and while many tools claim to address this, it will take time before AI excels at high-quality animation.At Tippett Studios, we leverage AI for previsualization, conceptualization and project management. 
Using TACTIC Resource, we integrate AI into planning and resource management, handling vast production data to predict outcomes and streamline workflows. As we move into 2025 and beyond, AIs data management capabilities will be key to future productivity and financial success, even as we await more advanced animation tools. As AI continues through the Peak of Inflated Expectations and towards the Plateau of Productivity, its role in VFX production will become increasingly significant.We are at an exciting stage in theconfluence of powerful hardware and AI-enhanced software where creative talent will be more important than ever and able to harness creative platforms to tell stories in truly extraordinary new ways.Kim Davidson, President & CEO, SideFXEarly adoption and a shift towards embracing new technologies andmethodologies will enable artists and technicians to remaincompetitive and innovative in these rapidly evolving times.Dade Orgeron, Vice President of Innovation, Shutterstock[W]here todays AI falls short is in the temporal dimension itstruggles with believable, complex animation. Current engines tend to produce flowy, slow visuals lacking continuity, and while many toolsclaim to address this, it will taketime before AI excels at high-quality animation.Gary Mundell, CEO, Tippett Studio0 Comments ·0 Shares ·252 Views
-
PAUL LAMBERT CROSSES THE ARTISTIC AND TECHNICAL DIVIDE
www.vfxvoice.com
By TREVOR HOGG
Images courtesy of Paul Lambert, except where noted.

Paul Lambert, Visual Effects Supervisor. A proud accomplishment for Lambert was creating the IBK Keyer, which is still used today in Nuke to deal with bluescreen and greenscreen plates.

Nowadays, Paul Lambert is at the forefront of Hollywood productions as a visual effects supervisor, with memorable visual accomplishments being the dystopian Los Angeles cityscapes and the lead hologram character from Blade Runner 2049, the transition to the Moon's surface in IMAX in First Man, and the realism of the worlds of the Dune franchise. Ironically, the ability to combine art and technology, which has been the key to his success, originally made him an anomaly in the British education system. Forced to choose between the two, he initially decided to earn a degree in Aeronautical Engineering at the University of London. Upon graduating, Lambert realized that engineering was not his calling, so he took a job as a courier in London and studied sculpture as a part-time art school student. Frequently, deliveries for Salon Productions led to visits to Shepperton and Pinewood Studios, and eventually saw him hired by the company that provided editing equipment to the film industry.

"At Salon, I learned how to put together and fix Steenbecks, KEMs and Moviolas," Lambert recalls. "I even had to go over to Moscow to fix a Steenbeck being used by [Editor] Terry Rawlings for The Saint." It was during this time that Lambert became aware of the digital transition in the film industry. Avid and Lightworks non-linear editing systems were starting to disrupt the industry. "It was this digital transition that made me more aware of something called visual effects." The discovery was worth exploring further. "SGI had a big old building in Soho Square and were running week-long courses under the name of Silicon Studios, where you could play with Monet, Flint [the baby version of Flame] and Houdini. I left Salon and did this course, which was amazing." A six-month odyssey of looking for employment came to an end when a runner at Cinesite went on a two-week vacation. "They kept me because I was so enthusiastic and hungry for knowledge. It was at a time when you could jump onto the graphics workstations, whether it be the Flames or Infernos or Cineon machines, at night in your own time. I taught myself. I was so hungry and focused. I had finally found what I wanted to do. It was a good balance of creativity and technical know-how. When I started at Cinesite, they had two Flames, and by the time I left I was the head of that department and we had seven."

A portion of a seawall was constructed for Blade Runner 2049, with the action shot in a water tank in Hungary. (Image courtesy Warner Bros. Pictures)

"Why would you put up with these crazy deadlines or having to move around the world if you didn't truly love it? If you truly love something, you're going to come up with creative ways of doing things and participate in some of these beautiful movies."
Paul Lambert, Visual Effects Supervisor

Fascinated by proprietary compositing software developed by Digital Domain, Lambert had a job interview with the visual effects company founded by James Cameron, Stan Winston and Scott Ross. "I added substantial pieces of technology to Nuke because by that time I had figured out the ins and outs of compositing," Lambert reveals. "It was an obsession of mine, how an image comes together.
Digital Domain was on the verge of commercializing Nuke but didn't have a keyer. "I spent six months playing around with this idea of keying, came back to them and showed them this algorithm. It was the IBK keyer, and that's still in Nuke." Simplicity drove the programming process. "What I can't stand as a compositor is when there is a node and it's got 50,000 sliders in there. Nobody knows what those sliders do! It's trial and error. What I tried to develop is something simple but a process where, if you can combine these things in a particular way, you can work with bluescreens and greenscreens, which are uneven, and it gets you to a good place quickly. The irony is, now I tend to try not to rely on bluescreens or greenscreens!"

Director/writer/producer Denis Villeneuve, left, and Lambert on the set of Dune: Part Two. (Image courtesy of Warner Bros. Pictures. Photo: Niko Tavernise)

Lambert celebrates winning an Oscar for Dune: Part One with his wife, Mags Sarnowska.

Over 90 minutes of footage had to be created for the LED screens used for First Man. (Image courtesy of Universal Pictures)

A major benefit of using the LED screens for the space and aerial scenes in First Man was the ability to capture reflections on the visor and in the eyes of Ryan Gosling, which are extremely difficult to achieve in post-production. (Image courtesy of Universal Pictures)

After 12 years at Digital Domain, Lambert joined DNEG's facility in Vancouver in 2015, where he began his transition to production visual effects supervisor starting with The Huntsman: Winter's War. The size of the visual effects budget is only part of the equation for success. "By the time we had finished First Man it was a $7 million visual effects budget, which is relatively tiny, but we came up with some incredibly creative ways to do stuff," Lambert remarks. "We used a number of great techniques for the visuals. Doing a bigature and miniature for space work is ideal because you can control the light so that shadows are really hard. We used real 1960s footage for the launch, but we repurposed that footage with CG to make it more cinematic. Also, we utilized one of the first LED screens, but we had it up for six weeks with operators for a fraction of the cost of what it costs now. Ninety minutes of LED screen content had to be created. This is where my gray hair has come from! We did not take the visor off one single shot. We even got reflections in the eyes!"

Two fundamental elements have to be respected for a visual effects shot to be believable. "I'm going to try not to change the actor's performance or the light because I know that changing the light with our current tools always looks a bit artificial," Lambert explains. "Your eye will pick up on something which takes you out, and in our current environment people will say, 'It's bad CGI.' No, it's the fact that you've taken the natural balance of the original image and gone too far by changing the background to a completely different luminance or trying to add a different light on the character. You see it all the time. I'm sure you will be able to do it with generative AI soon enough, where you're relighting or regenerating the image based on some form of transformer and diffusion model, but using current tools I try to avoid it. I would rather the continuity of a background be off rather than have a composite feel wrong. If I shoot something knowing that a background is going to be a certain background in post, then I try to have that screen be of a tone of luminance that I'm going to put the background in."
Hence the sand-colored backing screens on Dune: Part One and Two.

Never underestimate the significance of having a clear vision. "With Denis Villeneuve there is such a clarity of vision as to what he wants, so it's a pleasure to work with him, and you don't do crazy hours and overtime," Lambert states. "There isn't a mad rush. It's a sensible approach to things. There are hiccups along the way, but it's not like you have to ramp up to 1,000 people towards the end because you're 5,000 shots short. For Dune, the concepts were the basis of what we built and photographed and what I ultimately created in visual effects." Blade Runner 2049 was a special project, with Lambert working on behalf of DNEG. "It was special to come into this world and see pure professionalism at work with Denis and [Director of Photography] Roger Deakins, and witness them shooting with a single camera all the time." He is also proud of his collaboration with Cinematographer Greig Fraser on Dune: Part One and Two. "Greig uses a multitude of lenses and some were old Russian lenses. He's totally into degrading and giving character to the image. Then, of course, I have to try to match these things! We have a good understanding of the way we work. Greig is given untold freedom in how he wants to do things, but when I need something, he listens and will adapt," he says.

Lambert in the Mojave Desert near Edwards Air Force Base for the landing of the X-15 in First Man.

Moviemaking is becoming more accessible to the masses. "You'll see the cream rise to the top like you always do in whatever industry," Lambert notes. "You will have directors who have a vision and bring that forward. I keep reading and seeing this whole idea of democratizing our industry, and it will happen. It depends on whether we put guardrails up or not to help with the transition. You'll have different ways to visualize things. You'll have the ability to put your VR goggles on and enjoy the movie that you just created." Great films are built upon solid collaborations. "I've been lucky with my path so far in that I've never had a bad experience with another HOD [head of department]. In the end, I'm only successful if the photography that we have shot works and people have put their heart into it. If I get the best foundation that I can, then I can add to that and bring it to the final where the director will hopefully love it."

Blade Runner 2049 marked the first time that Lambert collaborated with Denis Villeneuve, as a facility supervisor at DNEG, and it resulted in him receiving his first Oscar. (Image courtesy of Warner Bros. Pictures)

From left: Rebecca Ferguson (Lady Jessica), Director/Writer/Producer Denis Villeneuve, Lambert and Production Designer Patrice Vermette on the set of Dune: Part Two. (Image courtesy of Warner Bros. Pictures. Photo: Niko Tavernise)

First Man resulted in Lambert winning his second Oscar and his first as a production visual effects supervisor.

Lambert joined Wylie Co. in 2021 as Executive Creative Director and is currently working on Project Hail Mary with directors Phil Lord and Chris Miller as well as Cinematographer Greig Fraser. "I'm thinking on my feet on Project Hail Mary more than I've ever done before because of trying to keep the camerawork and everything fluid," Lambert remarks. "That means you're not clinically breaking up the shot into layers, because what tends to happen is you lose some of the organic feel of a shot if you do this and that element. I'm a big believer in having a harder comp, which will always give you a better visual."
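Lambert describes the IBK approach as a handful of simple operations that, combined well, pull a usable matte from an uneven screen. As a rough illustration of the kind of math a screen keyer builds on, the sketch below shows a toy color-difference key; it is not the IBK algorithm that ships in Nuke, and the function name and parameters are hypothetical.

```python
import numpy as np

def color_difference_key(plate, screen_gain=1.0):
    """Pull a rough matte from a greenscreen plate.

    A toy color-difference key, assuming a green backing: alpha drops
    wherever green dominates red and blue. This is only a sketch of the
    simple screen math production keyers build on, not IBK itself.

    plate: float array of shape (H, W, 3), values in [0, 1].
    Returns (premultiplied_fg, alpha).
    """
    r, g, b = plate[..., 0], plate[..., 1], plate[..., 2]

    # How strongly each pixel resembles the green backing.
    screen = np.clip((g - np.maximum(r, b)) * screen_gain, 0.0, 1.0)

    # Matte: 1.0 on the subject, 0.0 on clean screen.
    alpha = 1.0 - screen

    # Naive despill: never let green exceed the brighter of red/blue.
    despilled = plate.copy()
    despilled[..., 1] = np.minimum(g, np.maximum(r, b))

    # Premultiply so the foreground can be comped over a new background.
    return despilled * alpha[..., None], alpha
```

A comp over a new background then follows the standard over operation: premultiplied foreground plus background multiplied by (1 - alpha).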
Even with a trio of Oscars, his enthusiasm remains undiminished. "Why would you put up with these crazy deadlines or having to move around the world if you didn't truly love it? If you truly love something, you're going to come up with creative ways of doing things and participate in some of these beautiful movies."
-
BANDING TOGETHER ONCE AGAIN FOR GLADIATOR II
www.vfxvoice.com
By TREVOR HOGG
All images courtesy of Paramount Pictures.

Lucius Verus (Paul Mescal) seeks vengeance against Roman General Marcus Acacius (Pedro Pascal).

Not often does a film crew get to reunite two decades later to make a sequel that makes swords and sandals cool again, but that is exactly the case with Gladiator II, where Ridley Scott collaborates once again with Production Designer Arthur Max and Special Effects Supervisor Neil Corbould, VES. Russell Crowe as Maximus is not returning to the Colosseum to wreak havoc on the Roman Empire; instead, the task has been given to his equally determined son Lucius Verus (Paul Mescal).

"It was an amazing experience to see the Colosseum back up again," states Corbould. "It was like stepping back in time 20-odd years because it was an exact replica of what we did before. I felt that the first one was damn good. To revisit this period again and take it a step further was quite an incredible and daunting task." The scope has been expanded. "We were using the same old tools, like physical builds and handmade craftsmanship, that we always did," remarks Max. "Only this time around, the digital technologies have come into that world as well, and that enlarged and increased the scope of what we could do in the time and on budget. It has been a gigantic shift from the first one to the sequel."

Visual ties still exist between the original and the sequel. "We wanted people to be able to recognize the [different] world from the first to the second," Max notes. "It was also opportunistic of us to try to use some of the earlier footage to blend in. We did that in flashbacks and in the live-action, where we produced some of Gladiator's original crowd footage. We tried to match the sets in actual detail, particularly in the arenas, both provincial and in the capital, like the Colosseum set, as closely as possible to the first one. There were changes, but they were subtle. That was a nod to economical filmmaking. Why waste the time shooting crowds cheering when you have it in the can already? We did a few of those kinds of things."

From left: Stunt Coordinator Nikki Berwick; VFX Supervisor Mark Bakowski; DP John Mathieson; Prosthetics Designer Conor O'Sullivan; Director Ridley Scott and Head Stunt Rigger Christopher Manger (ground) discuss the gladiator battle featuring the animatronic rhino created by Neil Corbould and his special effects team.

"Ridley said, 'I want to have a rhino there.' I spoke to Mark Bakowski [Visual Effects Supervisor] about it. I said, 'We can create a remote-drive gimbal rig underneath, which is completely wireless, with a little six-axis motion base, and a muscle suit that we put a textured skin on with the head and body.' Then Ridley said, 'I want it to do 40 miles per hour and turn on a dime.' That was like, 'Oh, Christ. Another thing!' But we did it. It was powered by flow-cell batteries and used electric car motors. This thing was lethal."
Neil Corbould, Special Effects Supervisor

Also coming in handy was the Jerusalem set from Kingdom of Heaven, which was repurposed as a Numidian coastal fort attacked by the Roman fleet. "The technology of water software (thank you, James Cameron and Avatar and other productions) had evolved to such a degree of sophistication that it made sense.
Also, a credit to Neil Corbould, who found an incredible all-wheel-drive remote-control platform that was used for transporting enormous pieces of industrial technology great distances, like cooling chambers of nuclear power stations. We had a couple of those to put our ships on. This is where we were innovative," Max states.

The advancements in technology allowed for more of the Rome set to be built physically for Gladiator II than for the original film.

Pedro Pascal, Ridley Scott and Paul Mescal share a light moment in between takes.

The entrance arch of the Colosseum had to be enlarged to allow the ships to pass through.

The Colosseum had to be constructed higher than the original to accommodate the CG water needed for naval battles.

Corbould was inadvertently responsible for a cut sequence appearing in the sequel. He recalls, "I was going through some of my old archive stuff of the original Gladiator and found the storyboards of the rhino. After the meeting finished, I said, 'By the way, Ridley, I found these.' I put them on the desk and he went, 'Wow! This is amazing. We've got to do this.' And that's how the rhino came about. It was like, 'Oh, Christ, I didn't think he would do that!' Then Ridley said, 'I want to have a rhino there.' I spoke to Mark Bakowski [Visual Effects Supervisor] about it. I said, 'We can create a remote-drive gimbal rig underneath, which is completely wireless, with a little six-axis motion base, and a muscle suit that we put a textured skin on with the head and body.' Ridley said, 'I want it to do 40 miles per hour and turn on a dime.' That was like, 'Oh, Christ. Another thing!' But we did it. It was powered by flow-cell batteries and used electric car motors. This thing was lethal. It was good and could move around. We didn't do it like a conventional buggy. We did it like the two drive wheels were on the side, and we had the front and back wheels in the middle, which were stabilizing wheels. We were driving it like a JCB excavator around the arena; that, in conjunction with the movement of the muscle suit and the six axes underneath, gave some good riding shots of the guy standing on top of it."

The gladiator battle with the rhino was revived for the sequel when Neil Corbould showed Ridley Scott the original storyboards.

Not everything went according to plan, in particular the naval battle in the Colosseum. "Life got in the way because of the strikes," remarks Visual Effects Supervisor Mark Bakowski, who was a newcomer to the project. "The Colosseum was originally to be more wet-for-wet and less dry-for-wet. But it works well in the end. There is a speed of working that suits Ridley Scott; he shoots quickly and likes to move quickly. That worked, shooting it dry-for-wet, because Ridley could get his cameras where he wanted, reset quickly and get the shots; whereas there are more restrictions being in the proper wet kind of thing. When it comes to integration, I was wary of having too much of a 50/50 split where you have to constantly match one to the other. We had a certain style of shot that was wet-for-wet, as in someone falling into the water, or Neil did some amazing impacts of the boats where the camera is skimming along the surface. Those made sense to do wet-for-wet because there are lots of interactions close to the camera." The water went through a major design evolution.
Bakowski adds, "We started off looking at the canals of Venice as our reference for the Colosseum, and then we started to drift. Ridley was showing pictures of his swimming pool in Los Angeles and saying, 'Can you move it that way?' It took us a while to find the look of the Colosseum water, but we got there in the end."

"We tried to match the sets in actual detail, particularly in the arenas, both provincial and in the capital, like the Colosseum set, as closely as possible to the first [Gladiator]. There were changes, but they were subtle. That was a nod to economical filmmaking. Why waste the time shooting crowds cheering when you have it in the can already?"
Arthur Max, Production Designer

There were times when DP John Mathieson had to coordinate as many as 11 cameras for action sequences.

The Colosseum naval battle was a combination of dry-for-wet and wet-for-wet photography.

"We built the boats in the U.K., shipped them out and then assembled them there, which was the right thing to do because it was almost impossible to get that material in Morocco or the sheer quantity of steel and timber we needed. We put them in 30 forty-foot trucks going across the Atlas Mountains, and as they were arriving, we were assembling them. It was like clockwork. On the day when we were shooting, we were still painting bits. It was that close."
Neil Corbould, Special Effects Supervisor

The attack on the Numidian coastal fort was shot using the landlocked Jerusalem set from Kingdom of Heaven, with boats moved around on self-propelled modular transporters (SPMTs) and CG water added in post-production.

Technological advances allowed for the expansion of Rome. "We built much more than we did on the first one in terms of the amount of site we covered," Max explains. "We went from one end to the other. CNC sculptures and casting techniques were expedited greatly on the sequel because we had the technology. The timeframe was compressed from getting a finished drawing or working drawing to the workshop floor, and also being able to farm out digital files, not only to one workshop but to multiple workshops simultaneously, increased the speed of production. To a large extent, we met the demands of Ridley's storyboards, but there was still a large amount of [digital] set extensions." The accomplishment was impressive. "It was an amazing set to wander around," Bakowski states. "We had like a kit of parts that we could dress in the background. Technically, a certain hill should be in a particular place. We established it in this shot, or a certain building should be at a specific angle. But if it didn't look good, of course, it moved, because Ridley is a visual director; his work is like a moving painting every time, and we responded to that by trying to make everything beautiful, which was the main thing."

Visual ties still exist between the original and the sequel, such as the Colosseum.

The baboons fighting the gladiators was a complicated sequence that required multiple passes.

Visual effects took over some tasks previously looked after by special effects. "In Gladiator, we did a lot of launching arrows, but in this one, we didn't do any of that," Corbould reveals. "It was all Mark [Bakowski]. That allowed Ridley to shoot at the speed he did, which was good. I concentrated on the fires, dust and explosions in the city. But we only shot that once, with 11 cameras." A major contribution was the practical black smoke.
Corbould describes, "We were burning vegetable oil, and when you atomize it at high pressure, it vaporizes, ignites and gives you this amazing black smoke. Everyone smells like a chip shop! We had six of these massive burners that were dotted around the set, and then we had to chase the wind. We would have some wind socks up or look at the flags. You had to try to anticipate it because of the speed at which Ridley works. We must have had 16 people just doing black smoke. We built a beach as well. Ridley said, 'It's supposed to be on the coast, and I want an 80-meter stretch of beach with waves washing the bodies onto the shore.' We constructed an 80 x 80-meter set of beach. I made this wave wall, which was basically one section of the wall, but the whole 80 meters of it pushed in. It's a bit like a wave tank. We put a big liner in it and sprayed sand over the liner; that gave it a nice, majestic wash-up against the beach."

ILM led the charge in creating the 1,154 visual effects shots, followed by Framestore, Ombrium VFX, Screen Scene, Exceptional Minds and Cheap Shot VFX. "The baboons were a fun ride and tough," Bakowski remarks. "The speed that Ridley likes to shoot is fantastic, but someone interacting and fighting with a troop of baboons does take some planning and thought to go into it. It's complicated business." Bakowski adds, "He was generous in terms of letting us shoot the passes we wanted to shoot. In general, we kept to the logic that there were a couple of hero guys in the middle and a bunch of supporting baboons on the edge. We do one pass where we put all of the baboon stunt performers in there. Everyone would run around acting like baboons. After that, we pulled out the baboons that weren't interacting with people because it was a nightmare with the amount of dust being kicked up. We did a pass with only the contact points and then a clean pass afterwards. It was a challenge to have it all come together."

A character in its own right is the capital city of Rome.

One of the major creative challenges was developing the look of the CG water.

Nothing was achieved easily on Gladiator II. "This is the most challenging project that I've ever done, given the scale and scope of it and the conditions under which we worked," Max states. "We had the sandstorms in Morocco, and the idea of doing a naval battle in the desert had its problems. We had to keep the dust down, and the physical effects team was always out there with water hoses, and they were clever. They had water cannons to replicate physical waves coming over the bow of the ships. It was a lot of technology on an enormous scale." The boat scenes were the most complicated. Explains Corbould, "I was probably one of the first people on the show with Arthur, and our prep period was quite short. We built the boats in the U.K., shipped them out and then assembled them out there, which was the right thing to do because it was almost impossible to get that material in Morocco or the sheer quantity of steel and timber we needed. We put them in 30 forty-foot trucks going across the Atlas Mountains, and as they were arriving, we were assembling them. It was like clockwork. On the day when we were shooting, we were still painting bits. It was that close."

The visual effects work was as vast as the imagination of Scott. "We're doing extensions in Rome, crowds in the Colosseum, creatures and water," Bakowski notes. "For the final battle, we were adding vast CG armies in the backgrounds of virtually every shot.
We did some little pickups as well, so it's integrating these pickups that came back to the U.K. with the stuff that was shot in Malta. It's not groundbreaking stuff, but the volume of it is quite high because it's one of those things that adjusts and adapts as the edit develops. The Colosseum naval battle encapsulated both what I'm looking forward to people seeing and also a big challenge. The baboons were a fun challenge, and the rhino just worked, which was fantastic. By the end, we knew how we were doing in the Colosseum, and our crowds look beautiful. I can't wait for you to see all of it."
-
NEXT-GENERATION CINEMA: THE NEW STANDARD IS PREMIUM AND IT'S WORKING
www.vfxvoice.com
By CHRIS McGOWAN

Gladiator II was given the IMAX Maximum Image treatment in November. (Image courtesy of Paramount Pictures)

Cinema audiences are increasingly showing an appetite for higher-resolution, higher-quality movies, often with large-format presentations and/or 4D effects. Soon, they will also be exploring AR movie augmentations and an increasing number of film-related experiences as well.

"When audiences began returning to theaters after the pandemic, they wanted experiences that they couldn't get in their homes. Now, in a post-pandemic world, moviegoers want something premium and special for their time, and audiences seek out IMAX because it truly is a premium experience," says Bruce Markoe, Senior Vice President and Head of Post & Image Capture for IMAX.

IMAX is a pioneer and leader in premium cinema. Markoe notes, "As of June 30, 2024, there were 1,780 IMAX systems (1,705 commercial multiplexes, 12 commercial destinations, 63 institutional) operating in 89 countries and territories. The numbers speak for themselves. In 2023, IMAX delivered one of the best years in our history, with nearly $1.1 billion in global box office. And while last year's Hollywood strikes dealt the entire entertainment business a temporary setback, it was just that: temporary. We've had an incredible 2024 to-date at IMAX, marked by several recent box-office successes, including Inside Out 2, Twisters and Deadpool & Wolverine, and as we look ahead, this trend shows no signs of slowing down."

Markoe explains, "Every IMAX location in the world is built to our precise standards; they are designed, built and carefully maintained with meticulous care, and every location is customized for optimal viewing experiences. Only IMAX uses acousticians, laser alignment and custom theater geometry, combined with our Academy Award-winning projection technology, precision audio and optimized seating layouts, to ensure every element is immersive by design."

Joker: Folie à Deux launched on IMAX in October. (Image courtesy of Warner Bros. Pictures)

Markoe continues, "Our theaters are calibrated daily to ensure audiences get perfectly tuned sound and image every time, at every location, regardless of where in the world it is. We also have incredible partnerships with filmmakers and studios. We're seeing a dramatic shift to IMAX among filmmakers and studios. We are increasingly creating specifically for the IMAX platform. [And,] we have more Filmed for IMAX titles in production than any time in our history," Markoe says. "We are dramatically expanding our Filmed for IMAX program to feature many of the world's most iconic filmmakers and directors alongside rising talents in the industry. To date, we have 15 Filmed for IMAX titles set for release this year, more than double any previous year, as filmmakers and studios from Hollywood and international territories increasingly create uniquely optimized versions for the IMAX platform."

Markoe notes that, to meet growing demand among filmmakers to shoot in IMAX, the company is developing and finalizing the roll-out of four next-generation IMAX 15/65mm film cameras. IMAX tapped such prolific filmmakers and cinematographers as Christopher Nolan, Jordan Peele and Hoyte van Hoytema, among others, to identify new specs and features for the prototype.
The new cameras recently entered production.

Dolby Cinema is a premium cinema experience created by Dolby Laboratories that combines proprietary visual and audio technologies such as Dolby Vision and Dolby Atmos. (Image courtesy of Dolby)

The world's largest 4DX theater is the Regal Times Square, located at 247 West 42nd St. in New York City. (Image courtesy of Full Blue Productions and 4DX)

IMAX launched the Filmed for IMAX program in 2020 to certify digital cameras that were officially approved to create IMAX-format films. Markoe explains, "The program is a partnership between IMAX and the world's leading filmmakers to meet their demands for the ultimate IMAX experience. Working directly with IMAX, the program allows filmmakers the ability to fully leverage the immersive IMAX theatrical experience, including expanded aspect ratio and higher resolution. Through the program, IMAX certifies best-in-class digital cameras from leading brands, including ARRI, Panavision, RED Digital Cinema and Sony, to provide filmmakers with the best guidance to optimize creatively how they shoot to best work in the IMAX format when paired with IMAX's proprietary post-production process."

"IMAX continues to innovate on the cutting edge of entertainment technology," Markoe states. "Our IMAX with Laser system was recently recognized with a Scientific and Technical Academy Award. Combining our industry-leading technology and new tools with the enthusiastic embrace by filmmakers to specifically and creatively design their movies to be the most immersive, high-quality presentation, we continue to find new and innovative ways to expand the IMAX canvas moving forward. We see an opportunity for our platform to serve as a conduit for sports leagues to expand their global reach and provide a launchpad for projects from some of the world's most iconic music acts. The future of cinema is multi-pronged, combining visionary works from Hollywood and local-language blockbusters, original documentaries and exclusive events."

The Wild Robot landed on IMAX in September. (Image courtesy of Universal Pictures)

The E3LH QuarterView Dolby Vision Cinema Projection System. Dolby Vision is part of the company's Dolby Cinema package, which includes the Dolby Atmos sound system and special theater treatments to reduce ambient light and enhance comfort. (Image courtesy of Dolby)

DOLBY

"The appetite for premium cinema is huge, and it's a clear factor in what's drawing people to see movies in theaters," says Jed Harmsen, Dolby Vice President and General Manager of Cinema & Group Entertainment. "2023 marked Dolby Cinema's strongest year in history at the box office, with U.S. Dolby Cinema ticket sales eclipsing pre-pandemic levels, up 7% from 2019. Furthermore, Dolby boasts the highest average per-screen box office among all premium large-format offerings, which is a testament to the consumers' recognition and value of enjoying their films in Dolby."

According to Comscore data, the domestic large-format gross box office was up 10.1% in 2023 vs. 2019, illustrating how the popularity of premium cinema is growing and overtaking pre-pandemic levels. Also per Comscore, market share of the domestic large-format gross box office (in relation to the entire domestic gross box office) grew from 9.7% in 2019 to 13.3% in 2023. According to Harmsen, premium cinema experiences are becoming a larger share of all cinema experiences.
"It's one of the reasons we continue to work with our cinema partners to make Dolby Vision and Dolby Atmos available to as many moviegoers around the world as possible. We created Dolby Cinema to be the best way for audiences to see a movie, featuring the awe-inspiring picture quality of Dolby Vision together with the immersive sound of Dolby Atmos, all in a fully Dolby-designed theater environment. There are around 275 Dolby Cinemas globally."

Inside Out 2 hit IMAX giant screens in 2024. (Image courtesy of Pixar/Disney)

Twisters was unleashed on IMAX in 2024. (Image courtesy of Universal Pictures)

The IMAX 70mm Film Camera. One of the IMAX film cameras used by Christopher Nolan to shoot Oppenheimer. (Image courtesy of IMAX)

Harmsen adds, "Our global footprint for Dolby Cinema spans 28 exhibitor partners and 14 countries, with the first Dolby Cinema opening in 2014 in the Netherlands. We're excited to have true collaborations with multiple trusted partners and advocates like AMC, who have been a huge proponent in bringing the magic of the Dolby Cinema experience to moviegoers."

Harmsen underscores the value of Dolby sound and vision to the viewing experience. "Dolby Vision allows viewers to see subtle details and ultra-vivid colors with increased contrast ratio and blacker blacks, delivering the best version of the picture that the filmmaker intended. Dolby Atmos offers powerful, immersive audio, allowing audiences to feel a deeper connection to the story with sound that moves all around them. And Dolby's unique theater design allows audiences to experience both technologies in the best possible way by limiting ambient light, ensuring an optimal view from every premium seat and more."

"Dolby Vision and Dolby Atmos have revolutionized premium movie-going and have been embraced widely by creators and exhibitors, allowing us to bring Dolby-powered media and entertainment to more and more audiences," Harmsen says. "To date, more than 600 theatrical features have been released or are confirmed to be released in Dolby Vision and Dolby Atmos, including recent box-office hits like Inside Out 2, Dune: Part Two, Deadpool & Wolverine and more." Harmsen concludes, "We see exhibitors continuing to outfit their auditoriums to support more premium cinema experiences to meet the demand we're seeing from moviegoers. At Dolby, we see premium as the new standard in cinema. It's clear audiences worldwide do as well."

On the outskirts of the Las Vegas strip, Sphere is a literal expansion of cinema. (Image courtesy of Sphere Entertainment)

The 4DX Cinema Sunshine Heiwajima movie theater in BIG FUN Heiwajima, an entertainment complex in Tokyo. (Image courtesy of 4DX)

4DX

4D cinema adds motion seats and multi-sensory effects to blockbuster movies. The experience adds about $8 to each ticket. South Korea's CJ 4DPLEX is the leader in this area. Globally, there are some 750 4DX screens affiliated with the company, which has teamed up with partners like Regal Cinemas. According to the Regal site, 4D movies utilize physical senses to transport viewers into a whole new viewing experience. Regal's 4DX theaters are equipped with motion-enabled chairs, which create strong vibrations and sensations, as well as other environmental controls for simulated weather or other conditions such as lightning, rain, flashing (strobe) lights, fog and strong scents.

SPHERE

On the outskirts of the Las Vegas strip, Sphere is a literal expansion of what cinema is.
When not hosting concerts or events, Sphere shows high-resolution films on a wraparound screen that is 240 feet tall and covers 160,000 square feet, with a 16K by 16K resolution. Digital Domain worked on the visual effects of Darren Aronofsky's movie Postcard from Earth, shown in the gigantic spherical venue. "Working on Postcard from Earth for the Sphere was an extraordinary experience, marking our debut in such an impressive venue. Collaborating with Darren Aronofsky, a filmmaker whose work we've long admired, added an extra layer of excitement as we brought his vision to life in this dramatic setting," comments Matt Dougan, Digital Domain VFX Supervisor.

Deadpool & Wolverine made a historic global IMAX debut last July. (Image courtesy of Walt Disney Studios Motion Pictures)

With the Apple Vision Pro, stunning panorama photos shot on the iPhone expand and wrap around the user, creating the sensation that they are standing where the photo was taken. (Image courtesy of Apple Inc.)

The IMAX Commercial Laser Projector, designed specifically for multiplexes. (Image courtesy of IMAX)

NETFLIX HOUSE

Netflix House is another next-generation cinema addition. The experiential entertainment venue will bring beloved Netflix titles to life, beginning with locations in malls in Dallas, Texas, and King of Prussia, Pennsylvania, in 2025. Building on previous Netflix live experiences for Bridgerton, Money Heist, Stranger Things, Squid Game and Netflix Bites, "Netflix House will go one step further and create an unforgettable venue to explore your favorite Netflix stories and characters beyond the screen year-round," according to Henry Goldblatt, Netflix Executive Editor, on the Netflix website.

"At Netflix House, you can enjoy regularly updated immersive experiences, indulge in retail therapy and get a taste, literally, of your favorite Netflix series and films through unique food and drink offerings," says Marian Lee, Netflix's Chief Marketing Officer, on the Netflix site. "We've launched more than 50 experiences in 25 cities, and Netflix House represents the next generation of our distinctive offerings. The venues will bring our beloved stories to life in new, ever-changing and unexpected ways."

AR

Augmented reality is expected to expand the experience of movie-going, adding interactive and immersive elements to movie posters and trailers, interaction with characters, personalized narrative and cinematic installations. "Apple Vision Pro undoubtedly brings new immersive opportunities to the table with its advanced mixed-reality capabilities, offering unique ways to engage with stories," comments Rob Bredow, ILM Senior Vice President, Creative Innovation and Chief Creative Officer. He explains that cinema is a highly mature art form with well-established storytelling traditions and audience expectations. "While Apple Vision Pro can be used to help create compelling new experiences, it's not about replacing these mediums [such as film] but rather complementing them. The device opens doors to hybrid forms of entertainment that blend interactivity and immersion in ways that are uniquely suited to its technology. [Cinema] will continue to grow and thrive, enriched by these new possibilities, but certainly not overshadowed by them."

Looking forward, one of the changes in next-generation cinema may be one of content. IMAX's Markoe says, "Audiences are increasingly interested in must-see events, things that cannot be experienced the same way at home on a TV.
Recent events, such as our broadcast of the 2024 Paris Olympics Opening Ceremony or broadcasting the 2024 NBA Finals to select theaters in China, bring these larger-than-life experiences to audiences in a way that can't be replicated elsewhere. Increasingly, concert films like Taylor Swift: The Eras Tour have appealed to audiences who want to feel fully immersed in the action."

Indeed, the future of cinema looks to lie more and more in premium cinema as well as immersive experiences that expand what movies are today.
-
OSCAR PREVIEW: NEXT-LEVEL VFX ELEVATES STORYTELLING TO NEW HEIGHTS
www.vfxvoice.com
By OLIVER WEBB

Dune: Part Two has significantly more action and effects than Dune: Part One, totaling 2,147 VFX shots. (Image courtesy of Warner Bros. Pictures)

Godzilla Minus One made history at last year's 96th Academy Awards when it became the first Japanese film to be nominated for and win the Oscar for Best Visual Effects, and the first film in the Godzilla franchise's 70-year history to be nominated for an Oscar. Will the 97th Academy Awards produce more VFX Oscar history? Certainly, VFX will again take center stage, with a number of pedigree franchises and dazzling sequels hitting movie screens in the past year. From collapsing dunes to vast wastelands, battling primates and America at war with itself, visual effects played a leading role in making 2024 a memorable, mesmerizing year for global audiences.

Dune: Part One won six Academy Awards in 2022, including Best Achievement in Visual Effects, marking Visual Effects Supervisor Paul Lambert's third Oscar. Released in March, Dune: Part Two is an outstanding sequel and has significantly more action and effects than the first installment, totaling a staggering 2,147 visual effects shots. The film is a strong contender at this year's Awards. "It was all the same people from Part One, so our familiarity with Denis's [Villeneuve] vision and his direction allowed us to push the boundaries of visual storytelling even further," Lambert says.

The production spent a lot more time in the desert on Dune: Part Two than on Dune: Part One. Cranes were brought in, and production built roads into the deep deserts of Jordan and Abu Dhabi. Concrete slabs were also built under the sand so that the team could hold cranes in place for the big action sequences. "A lot of meticulous planning was done by Cinematographer Greig Fraser to work out where the sun was going to be relative to particular dunes," Lambert explains.

Editorial and postvis collaborated with the VFX team to create a truly unique George Miller action sequence for Furiosa: A Mad Max Saga. (Image courtesy of Warner Bros. Pictures)

"We had an interactive view in the desert via an iPad that gave us a virtual view of these enormous machines at any time of day. This allowed us, for example, to figure out the shadows for the characters running underneath the spice crawler legs and the main body of the machine. VFX was then able to extend the CG out realistically, making it all fit in the same environment. Dune: Part One was a collaborative experience, but Dune: Part Two was even more so as we went for a much bigger scale with lots more action."

The first topic discussed during pre-production among department heads and Villeneuve was the worm-riding scenes. Villeneuve envisaged Paul Atreides mounting the worm from a collapsing dune, an idea that immediately struck the team as visually stunning and unique. The challenge lay in making this concept and the rest of the worm-riding appear believable. Filming for the worm sequences took place in both Budapest and the UAE. A dedicated worm unit was established in Budapest for the months-long shoot. The art department built a section of the worm on an SFX gimbal surrounded by a massive 270-degree sand-colored cone. "This setup allowed the sun to bounce sand-colored light onto the actors and stunt riders, who were constantly blasted with dust and sand," Lambert describes. Shooting only occurred on sunny days to maintain the desert atmosphere.
Most of the actual worm-riding shots were captured here, except for the widest shots, which were later augmented with CG. In post-production, the sand-colored cone was replaced with extended, sped-up, low- and high-flying helicopter footage of the desert.

The VFX team at Framestore delivered 420 shots for Deadpool & Wolverine, while Framestore's pre-production services (FPS) delivered 900-plus shots spanning previs, techvis and postvis. (Image courtesy of Marvel Studios)

Wētā FX delivered 1,521 VFX shots for Kingdom of the Planet of the Apes. Remarkably, there are only 38 non-VFX shots in the film. (Image courtesy of Walt Disney Studios Motion Pictures)

Earlier footage from Gladiator was blended into Gladiator II flashbacks and live-action, especially original Gladiator crowd footage and in the arenas. The Colosseum set for Gladiator II was detailed as closely as possible to the first film. (Photo: Aidan Monaghan. Courtesy of Paramount Pictures)

Blowing up the Lincoln Memorial for Civil War was shot in a parking lot in Atlanta. The single-story set was extended with VFX and the explosion grounded in real footage. Soldiers fired at a bluescreen with a giant hole in the middle. (Image courtesy of A24)

For the collapsing dune scene, an area was scouted in the desert, and then a 10-foot-high proxy dune crest was created on flat desert. Three concrete tubes attached to industrial tractors were buried in this proxy dune and were used to create the collapsing effect while a stunt performer, secured by a safety line, ran across and descended into the collapsing sand as the tubes were pulled out. We could only attempt this once a day because of the need to match the light to the real dune, and the re-set to rebuild the crest took a few hours. On the fourth day, Denis had the shot he wanted. Post-production work extended the dune's apparent height to match the real dune landscape. The sequence was completed with extensive CG sand simulations of the worm moving through dunes, all contributing to the believability of this extraordinary scene.

Mad Max: Fury Road was nominated for Best Visual Effects at the 2016 Academy Awards. Spin-off prequel/origin story Furiosa: A Mad Max Saga, the fifth installment in the Mad Max franchise, is the first of the films not to focus on the eponymous Max Rockatansky. DNEG completed 867 visual effects shots for the finished film. When DNEG came onboard the project, the main conversations focused on the scope of the film and the variety of terrains and environments. "Furiosa covers much more of the Wasteland than Fury Road did and details a lot of places that had only been touched on previously," notes DNEG VFX Supervisor Dan Bethell. "It was really important that each environment have its own look, so as we travel through the Wasteland with these characters, the look is constantly changing and unique; in effect, each environment is its own character."

Twisters features six tornadoes, for which ILM built 10 models. (Images courtesy of Universal Pictures)

The Stowaway sequence was particularly challenging for the visual effects team to complete. "Apart from being 240 shots long and lasting 16 minutes, it had a lot of complex moving parts; vehicles that drive, vehicles that fly, dozens of digi-doubles, plenty of explosions and, of course, the Octoboss Kite!" says Bethell.
"Underneath it all, a lot of effort also went into the overall crafting of the sequence, with editorial and postvis collaborating with our VFX team to create a truly unique George Miller action piece. The Bullet Farm Ambush was also a big challenge, although one of my favorites. Choreographing the action to flow from the gates of Bullet Farm down into the quarry as we follow Jack, then culminating with the destruction of, well, everything, was very complex. We work often on individual shots, but to have over a hundred of them work together to create a seamless sequence is tough."

Working on a George Miller project is always a unique experience for Bethell. "Everything is story-driven, so the VFX has to be about serving the characters, their stories and the world they inhabit. It's also a collaboration; the use of VFX to support and enhance work from the other film departments such as stunts, SFX, action vehicles, etc. I enjoy that approach to our craft. Then, for me, it's all about the variety and scope of the work. It's rare to get to work on a film with such a vast amount of fresh and interesting creative and technical challenges. On Furiosa, every day was something new, from insane environments and FX to the crazy vehicles of the Wasteland. This movie had it all!"

Robert Zemeckis' Here follows multiple generations of couples and families that have inhabited the same home for over a century. The movie required de-aging Tom Hanks and Robin Wright. Nearly the entire movie was touched by VFX in some form or another. (Images courtesy of TriStar Pictures/Sony)

Alex Garland's Civil War required over 1,000 visual effects shots, as Garland pushed the importance of realism. "The more grounded and believable we could make Civil War, the scarier it would be," notes Production VFX Supervisor David Simpson. "We deliberately avoided Hollywood conventions and set a rule that all inspiration should be sourced from the real world. Every element VFX brought to the film had a real-world reference attached to it, drawing from documentaries, news footage, ammunition tests and war photography."

Due to the strict rules about shooting from the skies above Washington, D.C., capturing the aerial shots of the Capitol would have been impossible to do for real. This resulted in full CG aerial angles over D.C. and the visual effects team building their own digital version, which covered 13 square miles and 75 distinct landmarks, thousands of trees, buildings, lampposts and a fully functioning system of traffic lights spread over 800 miles of roads. "Plus, there are roadworks, buildings covered in scaffolding, parked cars, tennis courts and golf courses," Simpson adds. "One of my favorite touches is that our city has cranes because all major cities are constantly under construction!"

The visual effects team went even further, building a procedural system to populate the inside of offices. "When the camera sees inside a building, you can make out desks, computers, potted plants, emergency exit signs, water coolers. The buildings even have different ceiling-tile configurations and lightbulbs with slight tint variations. We literally built inside and out!"
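The interior dressing Simpson describes, varied props and lightbulb tints that differ building to building but stay consistent shot to shot, is the kind of result a seeded procedural scatter can produce. The sketch below is a toy, hypothetical illustration of that idea; the function, prop list and parameters are invented for clarity and are not the production's actual tools.

```python
import random

# Toy sketch of seeded interior dressing: each building/floor gets a
# deterministic mix of props and a light tint, so the city varies
# everywhere but stays stable from shot to shot and render to render.
PROPS = ["desk", "computer", "potted plant", "exit sign", "water cooler"]

def dress_floor(building_id: int, floor: int, slots: int = 12) -> dict:
    rng = random.Random(building_id * 1000 + floor)  # stable per building/floor
    return {
        "props": [rng.choice(PROPS) for _ in range(slots)],
        "ceiling_tiles": rng.choice(["2x2", "2x4"]),           # per-floor layout
        "bulb_tint": round(1.0 + rng.uniform(-0.05, 0.05), 3),  # slight variation
    }

# The same inputs always produce the same dressing.
print(dress_floor(building_id=42, floor=7))
```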
Once the city was complete, it was then turned into a war zone with mocap soldier skirmishes, tanks, police cars, explosions, gunfire, helicopters, debris, shattered windows and barricades.

Here follows multiple generations of couples and families that have inhabited the same home for over a century. Three sequences in the film were particularly CG-dominant, the first being the neighborhood reveal, which was the last shot in the movie. "It was challenging mainly because it was subject to several interpretations, compositions and lighting scenarios, and the build was vast," says DNEG VFX Supervisor John Gibson. "The sequence surrounding the house's destruction was also incredibly complex due to the interdependence of multiple simulations and elements, which made making changes difficult and time-consuming."

Godzilla x Kong: The New Empire was directed by Adam Wingard, who developed a distinctive and appealing visual style for the film. Compelling VFX work was completed by Wētā, Scanline VFX, DNEG and Luma Pictures, among others. (Images courtesy of Warner Bros. Pictures and Legendary Entertainment. GODZILLA TM & © Toho Co., Ltd.)

Dune: Part Two was even more of a collaborative experience than Dune: Part One, on a bigger scale with more action. (Image courtesy of Warner Bros. Pictures)

The biggest challenge was the grand montage, which required seamless transitions through various time periods and environments. "The Jurassic Era beat was especially challenging in that we needed to flesh out a brand-new world that had real-time elements mixed with accelerated-time elements, and they all had to be set up to transition smoothly into the superheated environment and maintain a consistent layout," Gibson details. "By far the most challenging aspect of the grand montage was the tree and plant growth. As it would have been very difficult to modify existing plant growth systems to match our cornucopia of plant species using the existing software available for foliage animation and rendering, we had to develop a host of new techniques to achieve the realistic results we were after."

Gibson lauds the collaborative spirit of the team. He cites their willingness to experiment, learn new techniques and support each other as instrumental in overcoming the challenges of the condensed production schedule. "Boundaries between departments dissolved, folks seized work to which they thought they could contribute, there was little hesitation to bring in and learn new software or techniques, and we brainstormed together, constantly looking for better and better ways to get results. That's what stood out to me: the cohesion within the team."

Cassandra inserting her hand through Mr. Paradox's head was one of the many challenging VFX shots required for Deadpool & Wolverine. (Image courtesy of Marvel Studios)

Framestore VFX Supervisor Robert Allman praises Marvel's collaborative approach to VFX on Deadpool & Wolverine, which he describes as a melting pot for filmmakers and artists. (Images courtesy of Marvel Studios)

"I love Marvel's collaborative approach to VFX; things are often hectic at the end, but that is because stuff is still being figured out, largely because it's complicated! In this melting pot, the filmmakers look to the artists for answers, so your ideas can end up in the film.
For hard-working VFX artists, nothing is better than that."
Robert Allman, VFX Supervisor, Deadpool & Wolverine

Wētā FX delivered 1,521 VFX shots for Kingdom of the Planet of the Apes. Remarkably, there are only 38 non-VFX shots in the film. VFX Supervisor Erik Winquist ran through a gauntlet of challenges, "from a cast of 12 new high-res characters whose facial animation needed to support spoken dialogue, to a minute-long oner set in an FX extravaganza with 175 apes and 24 horses to choreograph," he notes. "The scenes that I'd say were the most challenging were those that featured large water simulations integrating with on-set practical water, digital apes and a human actor. The bar for reality was incredibly high, not only for the water itself but also in having to sell that water's interaction with hairy apes, often in close-ups. It was an incredibly satisfying creative partnership for me and the whole team, working with [director] Wes Ball. From the start, he had a clear vision of what we were trying to achieve together, and the challenge was about executing that vision. It gave us unshifting goalposts that we could plan to, and we knew that we were in safe hands working on something special together. That knowledge created a great vibe among the crew."

More shooting time was spent in the desert on Dune: Part Two than on Dune: Part One. Cranes were brought in and production built roads deep into the deserts of Jordan and Abu Dhabi, UAE. (Image courtesy of Warner Bros. Pictures)

Strict rules about shooting from the skies above Washington, D.C. prevented capturing aerial shots of the Capitol for Civil War, which resulted in full CG aerial angles over D.C. and the VFX team building a digital version covering 13 square miles and 75 distinct landmarks. (Image courtesy of A24)

Deadpool & Wolverine has grossed more than $1.264 billion at the box office, a staggering feat. The VFX team at Framestore delivered 420 shots, while Framestore's pre-production services (FPS) delivered 900-plus shots spanning previs, techvis and postvis. Robert Allman served as Framestore VFX Supervisor on the film. "I love Deadpool, so it was tremendously exciting to be involved in making one," he explains. "However, more than this, I love Marvel's collaborative approach to VFX; things are often hectic at the end, but that is because stuff is still being figured out, largely because it's complicated! In this melting pot, the filmmakers look to the artists for answers, so your ideas really can end up in the film. For hard-working VFX artists, nothing is better than that."

The atomizing of Cassandra in the final sequence was technically tough to achieve. Making a completely convincing digital human and the atomizing effects as detailed and dynamic as the shots demanded was a huge challenge. Most problematic was creating an effect within the borders of good taste when the brief, "disintegrate the face and body of a human," seems to call for gory and horrifying. Many takes of this now lie on the digital cutting-room floor. An early wrong turn was to reference sandblasted meat and fruit, for which there are a surprisingly large number of videos on YouTube. However, this real-world physics gave rise to some stomach-churning simulations for which there was little appetite among filmmakers and artists alike.
In the end, the added element of searingly hot, glowing embers sufficiently covered the more visceral elements of the gore to make the whole thing, while still violent, more palatable to all concerned.

Traveling through the Wasteland with the characters of Furiosa: A Mad Max Saga, the look is constantly changing and unique. Each environment had to have its own look and, in effect, became its own character. (Images courtesy of Warner Bros. Pictures)

Ridley Scott's Gladiator was met with critical acclaim upon its release in 2000. It won five awards at the 73rd Academy Awards, including Best Visual Effects. Nearly 25 years later, Gladiator II hits screens as one of the most anticipated releases of the year. Last year, Scott's highly anticipated Napoleon was also nominated for Best Visual Effects, and Scott's films are, more often than not, strong contenders at the Awards.

Work for Gladiator II was split between Industrial Light & Magic, Framestore, Ombrium, Screen Scene, Exceptional Minds and Cheap Shot, with 1,154 visual effects shots required for the film. For Visual Effects Supervisor Mark Bakowski, the baboon fight sequence was particularly daunting. "Conceptually, this was a tough one," he explains. "Very early on, Ridley saw a picture of a hairless baboon with alopecia. It looked amazing and terrifying but also somewhat unnatural. Most people know what a baboon looks like, but a baboon with alopecia looks a bit like a dog. Framestore did a great job and built a baboon that looked and moved just like the reference, but viewed from certain angles and in action, unfortunately, it didn't immediately sell 'baboon.' It's one thing to see one in a nature documentary, but to have one in an action sequence with no introduction or explanation was a visual challenge."

One of the biggest challenges facing the VFX team on Kingdom of the Planet of the Apes was the cast of 12 new high-res characters whose facial animation needed to support spoken dialogue. (Images courtesy of Walt Disney Studios Motion Pictures)

Bakowski explains that working with Ridley Scott was a crazy and unique experience. "So many cameras and such scale, it's a real circus, and Ridley's very entertaining. He talks to everyone on Channel 1 on the radio, so you can follow along with his thought process, which is by turns educational, inspirational and hilarious. A lovely man. I enjoyed working with him. The VFX team was all fantastic and so capable, both on our production side and vendor side. I've never worked with such an amazing bunch on both sides. Our production team was a well-oiled machine, sometimes in both senses, but mainly in terms of efficiency, and, vendor side, it's great just being served up these beautiful images by such talented people. Both made my job so much easier. The locations were stunning, both scouting and shooting. 99% of the film was shot in Malta and Morocco, so you're there for a long time; you get to immerse yourself in it. That was multiplied by the fact we got impacted by the strikes, so we ended up going back to Malta multiple times.
I felt I got to know the island quite well and loved it and the people. That said, I won't be going back to Malta or Morocco for a holiday soon. I feel like I've had my fill for a while!"

Other outstanding releases that could potentially compete for Best Visual Effects include Twisters, which took everyone by storm earlier in 2024 (with ILM as the main vendor); Godzilla x Kong: The New Empire, featuring compelling work by Wētā, Scanline VFX, DNEG and Luma Pictures, among others; and A Quiet Place: Day One, a fresh, frightening addition to the Quiet Place series.