fxguide | fxphd
fxphd.com is the leader in pro online training for vfx, motion graphics, and production. Turn to fxguide for vfx news and fxguidetv & audio podcasts.
Recent Updates
  • WWW.FXGUIDE.COM
    VFXShow 289: HERE
    This week, the team reviews the film Here by director Robert Zemeckis. Earlier, we spoke to visual effects supervisor Kevin Baillie for the fxpodcast, where he discussed the innovative approaches used on set and the work of Metaphysic on de-aging. Starring Tom Hanks, Robin Wright, Paul Bettany, and Kelly Reilly, Here is a poignant exploration of love, loss, and the passage of time.

    The filmmaking techniques behind this film are undeniably groundbreaking, but on this week's episode of The VFX Show, the panel finds itself deeply divided over the narrative and plot. One of our hosts, in particular, holds a strikingly strong opinion, sparking a lively debate that sets this discussion apart from most of our other shows. Few films have polarized the panel quite like this one. Don't miss the spirited conversation on the podcast.

    Please note: this podcast was recorded before the interview with Kevin Baillie (fxpodcast).

    The Suburban Dads this week are:
    Matt Wallin * @mattwallin www.mattwallin.com. Follow Matt on Mastodon: @[emailprotected]
    Jason Diamond @jasondiamond www.thediamondbros.com
    Mike Seymour @mikeseymour. www.fxguide.com. + @mikeseymour

    Special thanks to Matt Wallin for the editing & production of the show with help from Jim Shen.
  • WWW.FXGUIDE.COM
    Generative AI in media and entertainment
    Simulon

    In this new Field Guide to Generative AI, fxguide's Mike Seymour, working with NVIDIA, unpacks the impact of generative AI on the media and entertainment industries, offering practical applications, ethical considerations, and a roadmap for the future. The field guide draws on interviews with industry experts, plus expertise from visual effects researchers at Wētā FX and Pixar. This comprehensive guide is a valuable resource for creatives, technologists, and producers looking to harness the transformative power of AI in a respectful and appropriate fashion.

    Generative AI in Media and Entertainment, a New Creative Era: Field Guide. Click here to download the field guide (free).

    Generative AI has become one of the most transformative technologies in media and entertainment, offering tools that don't merely enhance workflows but fundamentally change how creative professionals approach their craft. This class of AI, capable of creating entirely new content, from images and videos to scripts and 3D assets, represents a paradigm shift in storytelling and production. As the field guide notes, this revolution stems from the nexus of new machine learning approaches, foundational models, and advanced NVIDIA accelerated computing, all combined with impressive advances in neural networks and data science.

    NVIDIA

    From enhancement AI to creation GenAI

    While traditional AI, such as Pixar's machine learning denoiser in RenderMan, has been used to optimize production pipelines, generative AI takes a step further by creating original outputs. Dylan Sisson of Pixar notes that their denoiser has "transformed our entire production pipeline" and was first used on Toy Story 4, "touching every pixel you see in our films." However, generative AI's ability to infer new results from vast data sets opens doors to new innovations, building and expanding people's empathy and skills. Naturally, it has also raised concerns about artists' rights, the provenance of training data, and possible job losses as production pipelines incorporate this new technology. The challenge is to ethically incorporate these new technologies, and the field guide aims to show companies that have been doing just that.

    Runway

    Breakthrough applications

    Generative models, including GANs (Generative Adversarial Networks), diffusion-based approaches, and transformers, underpin these advancements in generative AI. These technologies are not well understood by many producers and clients, yet companies that don't explore how to use them could well be at an enormous disadvantage. Generative AI tools like Runway Gen-3 are redefining how cinematic videos are created, offering functionalities such as text-to-video and image-to-video generation with advanced camera controls. "From the beginning, we built Gen-3 with the idea of embedding knowledge of those words in the way the model was trained," explains Cristóbal Valenzuela, CEO of Runway. This allows directors and artists to guide outputs with industry-specific terms like "50mm lens" or "tracking shot".

    Similarly, Adobe Firefly integrates generative AI across its ecosystem, allowing users to tell Photoshop what they want and having it comply through generative fill capabilities. Firefly's ethical training practices ensure that it only uses datasets that are licensed or within legal frameworks, guaranteeing safety for commercial use.

    New companies like Simulon are also leveraging generative AI to streamline 3D integration and visual effects workflows. According to Simulon co-founder Divesh Naidoo, "We're solving a fragmented, multi-skill/multi-tool workflow that is currently very painful, with a steep learning curve, and streamlining it into one cohesive experience." By reducing hours of work to minutes, Simulon allows for rapid integration of CGI into handheld mobile footage, enhancing creative agility for smaller teams.

    Bria

    Ethical frameworks and creative control

    The rapid adoption of generative AI has raised critical concerns around ethics, intellectual property, and creative control. The industry has made strides in addressing these issues, and Adobe Firefly and Getty Images stand out for their transparent practices. Rather than ask if one has the rights to use a GenAI image, the better question is, "Can I use these images commercially, and what level of legal protection are you offering me if I do?" asks Getty's Grant Farhall. Getty provides full legal indemnification for its customers, ensuring ethical use of its proprietary training sets.

    Synthesia, which creates AI-driven video presenters, has similarly embedded an ethical AI framework into its operations, adhering to the ISO Standard 42001. Synthesia's Alexandru Voica emphasizes, "We use generative AI to create these avatars; the diffusion model adjusts the avatar's performance, the facial movements, the lip sync, and eyebrows, everything to do with the face muscles." This balance of automation and user control ensures that artists remain at the center of the creative process.

    Wonder Studios

    Training data and provenance

    The quality and source of training data remain pivotal. As noted in the field guide, it can sometimes be wrongly assumed that in every instance more data is good, any data, just more of it; actually, there is a real skill in curating training data. Companies like NVIDIA and Adobe use carefully curated datasets to mitigate bias and ensure accurate results. For instance, NVIDIA's Omniverse Replicator generates synthetic data to simulate real-world environments, offering physically accurate 3D objects with accurate physical properties for training AI systems, all trained appropriately. This attention to data provenance extends to protecting artists' rights. Getty Images compensates contributors whose work is included in training sets, ensuring ethical collaboration between creators and AI developers.

    Bria

    Expanding possibilities

    Generative AI is not a one-button-press solution but a dynamic toolset that empowers artists to innovate while retaining creative control. As highlighted in the guide, "Empathy cannot be replaced; knowing and understanding the zeitgeist or navigating the subtle cultural and social dynamics of our times cannot be gathered from just training data. These things come from people." However, when used responsibly, generative AI accelerates production timelines, democratizes access to high-quality tools, and inspires new artistic directions. Tools like Wonder Studio automate animation workflows while preserving user control, and platforms like Shutterstock's 3D asset generators provide adaptive, ethically trained models for creative professionals.

    Adobe Firefly

    The future of generative AI

    The industry is just beginning to explore the full potential of generative AI. Companies like NVIDIA are leading the charge with solutions like the Avatar Cloud Engine (ACE), which integrates tools for real-time digital human generation. "At the heart of ACE is a set of orchestrated NIM microservices that work together," explains Simon Yuen, NVIDIA's Senior Director of Digital Human Technology. These tools enable the creation of lifelike avatars and interactive characters that can transform entertainment, education, and beyond.

    As generative AI continues to evolve, it offers immense promise for creators while raising essential questions about ethics and rights. With careful integration and a commitment to transparency, the technology has the potential to redefine the boundaries of creativity in media and entertainment.
  • WWW.FXGUIDE.COM
    A deep dive into the filmmaking of Here with Kevin Baillie
    The film Here takes place in a single living room, with a static camera, but the film is anything but simple. It remains faithful to the original graphic novel by Richard McGuire on which it is based. Tom Hanks and Robin Wright star in a tale of love, loss, and life, along with Paul Bettany and Kelly Reilly.

    Robert Zemeckis directing the film

    Robert Zemeckis directed the film. The cinematography was by Don Burgess, and every shot in the film is a VFX shot. On the fxpodcast, VFX supervisor and second unit director Kevin Baillie discusses the complex challenges of filming, editing, and particularly de-aging the well-known cast members to play their characters throughout their adult lifespans.

    A monitor showing the identity detection that went into making sure that each actor's younger real-time likeness was swapped onto them, and only them.

    De-Aging

    Given the quantity and emotional nature of the performances, and the vast range of years involved, it would have been impossible to use traditional CGI methods and equally too hard to do with traditional makeup. The creative team decided that AI had just advanced enough to serve as a VFX tool, and its use was crucial to getting the film greenlit. Baillie invited Metaphysic to do a screen test for the project in 2022, recreating a young Tom Hanks, reminiscent of his appearance in Big, while maintaining the emotional integrity of his contemporary performance. A team of artists used custom neural networks to test de-aging Tom Hanks to his 20s. That gave the studio and our filmmaking team confidence that the film could be made. Interestingly, as Baillie discusses in the fxpodcast, body doubles were also tested but did not work nearly as well as the original actors.

    Tests of face swapping by Metaphysic. Early test of methods for de-aging Tom based on various training datasets:
    https://www.fxguide.com/wp-content/uploads/2024/11/tomTest_preproduction_ageEvolutionOptions.mp4
    Neural render output test clip:
    https://www.fxguide.com/wp-content/uploads/2024/11/tomTest_preproduction_WIP.mp4
    Final comp test clip (the result of the test for de-aging Tom that helped green-light the film):
    https://www.fxguide.com/wp-content/uploads/2024/11/tomTest_preproduction_Final.mp4

    While the neural network models generated remarkably photoreal results, they still required skilled compositing to match, especially on dramatic head turns. Metaphysic artists enhanced the AI output to hold up to the film's cinematic 4K standards. Metaphysic also developed new tools for actor eyeline control and other key crafting techniques. Additionally, multiple models were trained for each actor to meet the diverse needs of the film; Hanks is portrayed at five different ages, Wright at four ages, and Bettany and Reilly at two ages each. Achieving this through traditional computer graphics techniques, involving 3D modeling, rendering, and facial capture, would have been impossible given the scale and quality required for Here and the budget for so much on-screen VFX. The film has over 53 minutes of complete face replacement work, done primarily by Metaphysic and led by Metaphysic VFX Supervisor Jo Plaete. Metaphysic's proprietary process involves training a neural network model on a reference input, in this case footage and images of a younger Hanks, with artist refinement of the results until the model is ready for production. From there, an actor or performer can drive the model, both live on set and in a higher quality version in post. The results are exceptional and well beyond what traditional approaches have achieved.

    On-set live preview: Tom de-aged as visualized live on set (right image) vs the raw camera feed (left image)

    For principal photography, the team needed a way to ensure that the age of the actors' body motion matched the scripted age of their on-screen characters. To help solve this, the team deployed a real-time face-swapping pipeline in parallel on set, with one monitor showing the raw camera feed and the other the actors visualized in their 20s (with about a six-frame delay). These visuals acted as a tool for the director and the actors to craft performances. As you can hear in the podcast, it also allowed a lot more collaboration with other departments such as hair and makeup, and costume.
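    To make that six-frame figure concrete, here is a minimal, purely illustrative Python sketch of a fixed-latency dual-monitor preview loop. It is not Metaphysic's on-set pipeline; the frame source and the stand-in "swap" function below are hypothetical, and a real system would run the neural face swap asynchronously on dedicated hardware.

        from collections import deque

        DELAY_FRAMES = 6  # approximate preview latency quoted above

        def preview_loop(frames, face_swap):
            """Show the raw feed immediately; show the swapped feed ~6 frames later."""
            in_flight = deque()  # swapped results waiting to be displayed
            for frame in frames:
                print(f"raw monitor     : frame {frame}")  # monitor 1: live camera feed
                in_flight.append(face_swap(frame))         # queue the de-aged result
                if len(in_flight) > DELAY_FRAMES:          # oldest result is now ready
                    print(f"swapped monitor : {in_flight.popleft()}")

        # Stand-in "swap" that just labels the frame number.
        preview_loop(range(1, 11), face_swap=lambda f: f"frame {f} (de-aged)")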
    The final result was a mix of multiple AI neural renders and classic Nuke compositing. The result is a progression of the actors through their years, designed to be invisible to audiences.

    Robin with old-age makeup, compared with synthesized images of her at her older age, which were used to improve the makeup using similar methods to the de-aging done in the rest of the film

    In addition to de-aging, similar approaches were used to improve the elaborate old-age prosthetics worn by Robin Wright at the end of the film. This allowed enhanced skin translucency, fine wrinkles, and so on. This kind of makeup is extremely difficult and is often characterised as the hardest special effects makeup to attempt. Metaphysic did an exceptional job combining actual makeup with digital makeup to produce a photorealistic result. In addition to the visuals, Respeecher and Skywalker Sound also de-aged the actors' voices, as Baillie discusses in the fxpodcast.

    Three sets

    The filming was done primarily on three sets. There were two identical copies of the room to allow one to be filmed while the other was being dressed for the correct era. Additionally, exterior scenes from before the house was built were filmed on a separate third soundstage.

    Graphic Panels

    Graphic panels serve as a bridge across millions of years from one notionally static perspective. The graphic panels that transitioned between eras were deceptively tricky, with multiple scenes often playing on-screen simultaneously. As Baillie explains on the podcast, they had to reinvent editorial count sheets and use a special in-house comp team with After Effects as part of the editorial process.

    LED Wall

    An LED wall with content from the Unreal Engine was used outside the primary window. As some backgrounds needed to be replaced, the team also used the static camera to shoot helpful motion-control style matte passes (the "disco passes").

    The Disco passes

    For the imagery in the background, Baillie knew that it would take a huge amount of effort to add the fine detail needed in the Unreal Engine. He liked the UE output, but the team wanted a lot of fine detail for the 4K master. Once the environment artists had made their key creative choices, one of the boutique studios and the small in-house team used an AI-powered tool called Magnific to up-res the images. Magnific was built by Javi Lopez (@javilopen) and Emilio Nicolas (@emailnicolas), two indie entrepreneurs, and it uses AI to infer additional detail. The advanced AI upscaler and enhancer effectively reimagines much of the detail in the image, guided by a prompt and parameters.

    Before (left), after (right)

    Magnific allowed for an immense amount of high-frequency detail that would have been very time-consuming to add traditionally. Here has not done exceptionally well at the box office (and, as you will hear in the next fxguide VFXShow podcast, not everyone liked the film), but there is no doubt that the craft of filmmaking and the technological advances are dramatic. Regardless of any plot criticisms, the film stands as a testament to technical excellence and innovation in the field. Notably, the production respected data provenance in its use of AI. Rather than replacing VFX artists, AI was used to complement their skills, empowering an on-set and post-production team to bring the director's vision to life. While advances in AI can be concerning, in the hands of dedicated filmmakers, these tools offer new dimensions in storytelling, expanding what's creatively possible.
  • WWW.FXGUIDE.COM
    Agatha All Along with Digital Domain
    Agatha All Along, helmed by Jac Schaeffer, continues Marvel Studios' venture into episodic television, this time delving deeper into the mystique of Agatha Harkness, a fan-favourite character portrayed by Kathryn Hahn. This highly anticipated Disney+ miniseries, serving as a direct spin-off from WandaVision (2021), is Marvel's eleventh television series within the MCU and expands the story of magic and intrigue that WandaVision introduced.

    Filming took place in early 2023 at Trilith Studios in Atlanta and on location in Los Angeles, marking a return for many of the original cast and crew from WandaVision. The production drew on its predecessor's visual style but expanded it with a rich, nuanced aesthetic that emphasises the eerie allure of Agatha's character. By May 2024, Marvel announced the official title, Agatha All Along, a nod to the beloved song from WandaVision that highlighted Agatha's mischievous involvement in the original series. The cast features an ensemble including Joe Locke, Debra Jo Rupp, Aubrey Plaza, Sasheer Zamata, Ali Ahn, Okwui Okpokwasili, and Patti LuPone, all of whom bring fresh energy to Agatha's world. Schaeffer's dual role as showrunner and lead director allows for a cohesive vision that builds on the MCU's expanding exploration of side characters. After Loki, Agatha All Along has been one of the more successful spin-offs, with audience numbers actually growing during the season as the story progressed. Agatha All Along stands out for its dedication to character-driven narratives, enhanced by its impressive technical VFX work and its unique blend of visuals.

    Agatha All Along picks up three years after the dramatic events of WandaVision, with Agatha Harkness breaking free from the hex that imprisoned her in Westview, New Jersey. Devoid of her formidable powers, Agatha finds an unlikely ally in a rebellious goth teen who seeks to conquer the legendary Witches' Road, a series of mystical trials said to challenge even the most powerful sorcerers. This new miniseries is a mix of dark fantasy and supernatural adventure. It reintroduces Agatha as she grapples with the challenge of surviving without her magic. Together with her young protégé, Agatha begins to build a new coven, uniting a diverse group of young witches, each with distinct backgrounds and latent abilities. Their quest to overcome the Witches' Road's formidable obstacles becomes not only a journey of survival but one of rediscovering ancient magic, which, in turn, requires some old-school VFX.

    When approaching the visual effects in Agatha All Along, the team at Digital Domain once again highlighted their long history of VFX, adapting to the unique, old-school requirements set forth by production. Under the creative guidance of VFX Supervisor Michael Melchiorre and Production VFX Supervisor Kelly Port, the series' visuals present a compelling marriage between nostalgia and cutting-edge VFX. What's remarkable is the production's call for a 2D compositing approach that evokes the style of classic films. The decision to use traditional compositing not only serves to ground the effects but also gives the entire series a unique texture, a rare departure in a modern era dominated by fully rendered 3D environments. Each beam of magic, carefully crafted with tesla coil footage and practical elements in Nuke, gives the witches their distinctive looks while adding a sense of raw, visceral energy.

    For the broom chase, Digital Domain took inspiration from the high-speed speeder-bike scenes in Return of the Jedi. Working from extensive previs by Matt McClurg's team, the artists skillfully blended real set captures with digital extensions to maintain the illusion of depth and motion. The compositors' meticulous work, layering up to ten plates per shot, ensured each broom-riding witch interacted correctly with the environment. The ambitious sequence demonstrates technical finesse and a dedication to immersive storytelling.

    In the death and ghost sequences, Digital Domain took on some of the series' most challenging moments. From Agatha's decaying body to her rebirth as a spectral entity, these scenes required a balance of CG and 2D compositing that maintained Kathryn Hahn's performance nuances while delivering a haunting aesthetic. Drawing from '80s inspirations like Ghostbusters, compositors carefully retimed elements of Hahn's costume and hair, slowing them to achieve the ethereal look mandated by the production.

    As Agatha All Along unfolds, the visuals reveal not only Digital Domain's adaptability but also a nod to the history of visual effects, an homage to both their own legacy and classic cinema. By tackling the limitations of a stripped-down toolkit with ingenuity, Digital Domain enriched the story with fresh yet nostalgically layered visuals. Agatha All Along stands out for its blend of good storytelling and layered character development. Each trial on the Witches' Road reveals more about Agatha and her evolving bond with her young ally, adding new depth to her character and expanding the lore of the MCU. Fans of WandaVision will find much to love here, as Agatha's story unfolds with complex VFX and a touch of wicked humor.
  • WWW.FXGUIDE.COM
    VFXShow 288: The Penguin
    This week, the team discusses the visual effects of HBO's limited series The Penguin, a spin-off from The Batman by director Matt Reeves.

    The Penguin is a miniseries developed by Lauren LeFranc for HBO. It is based on the DC Comics character of the same name. The series is a spin-off from The Batman (2022) and explores Oz Cobb's rise to power in Gotham City's criminal underworld immediately after the events of that film. Colin Farrell stars as Oz, reprising his role from The Batman. He is joined by Cristin Milioti, Rhenzy Feliz, Deirdre O'Connell, Clancy Brown, Carmen Ejogo, and Michael Zegen. Join the team as they discuss the complex plot, effects, and visual language of this highly successful miniseries.

    The Penguin premiered on HBO on September 19, 2024, with eight episodes. The series has received critical acclaim for its performances, writing, direction, tone, and production values.

    The VFX were made by: Accenture Song, Anibrain, FixFX, FrostFX, Lekker VFX and Pixomondo. The Production VFX Supervisor was Johnny Han, who also served as 2nd Unit Director. Johnny Han is a twice Emmy-nominated and Oscar-shortlisted artist and supervisor.

    The Supervillains this week are:
    Matt "Bane" Wallin * @mattwallin www.mattwallin.com. Follow Matt on Mastodon: @[emailprotected]
    Jason "Two Face" Diamond @jasondiamond www.thediamondbros.com
    Mike "Mr Freeze" Seymour @mikeseymour. www.fxguide.com. + @mikeseymour

    Special thanks to Matt Wallin for the editing & production of the show with help from Jim Shen.
  • WWW.FXGUIDE.COM
    Adele's World Record LED Concert Experience
    Adele's recent concert residency, Adele in Munich, wasn't just a live performance; it was a groundbreaking display of technology and design. Held at the custom-built Adele Arena at Munich Messe, this ten-date residency captivated audiences with both the music and an unprecedented visual experience. We spoke to Emily Malone, Head of Live Events, and Peter Kirkup, Innovation Director, at Disguise about how it was done. Malone and Kirkup explained how their respective teams collaborated closely with Adele's creative directors to ensure a seamless blend of visuals, music, and live performance.

    Malone explained the process: "The aim was not to go out and set a world record; it was to build an incredible experience that allowed Adele's fans to experience the concert in quite a unique way. We wanted to make the visuals feel as intimate and immersive as Adele's voice." To achieve this, the team used a combination of custom-engineered hardware and Disguise's proprietary software, ensuring the visuals felt like an extension of Adele's performance rather than a distraction from it.

    The Adele Arena wasn't your typical concert venue. Purpose-built for Adele's residency, the arena included a massive outdoor stage setup designed to accommodate one of the world's largest LED media walls. The towering display, which dominated the arena's backdrop, set a new benchmark for outdoor live visuals, allowing Adele's artistry to be amplified on a scale rarely seen in live music. Adele's Munich residency played host to more than 730,000 fans from all over the world, reportedly the highest turnout for any concert residency outside Las Vegas. "We are proud to have played an essential role in making these concerts such an immersive, personal and unforgettable experience for Adele's fans," says Malone.

    Thanks to Disguise, Adele played to the crowd with a curved LED wall spanning 244 meters, approximately the length of two American football fields. The LED installation was covered with 4,625 square meters of ROE Carbon 5 Mark II (CB5 MKII) panels in both concave and convex configurations. As a result, it earned the new Guinness World Record title for the Largest Continuous Outdoor LED Screen. The lightweight and durable design of the CB5 MKII made the installation possible, while its 6000-nit brightness and efficient heat dissipation ensured brilliant, vibrant visuals throughout the outdoor performances.

    With over 20 years of experience powering live productions, Disguise technology has driven an enormous variety of outdoor performances and concerts. For Adele's Munich residency, Disguise's team implemented advanced weatherproofing measures and redundant power systems to ensure reliability. Using Disguise's real-time rendering technology, the team was able to adapt and tweak visuals instantly, even during Adele's live performances, ensuring a truly immersive experience for the audience.

    Adele in Munich took place over 10 nights in a bespoke, 80,000-capacity stadium. This major event called for an epic stage production. Having supported Adele's live shows before, Disguise helped create, sync, and display visuals on a 4,160-square-metre LED wall assembled to look like a strip of folding film. Kirkup was part of the early consultation for the project, especially regarding its feasibility and the deliverability of the original idea for the vast LED screens. "There was a lot of discussion about pixel pitch and fidelity, especially as there was an additional smaller screen right behind the central area where Adele would stand. The question was raised if this should be the same LED products as the vast main screen or something more dense; in the end, they landed on using the same LEDs for the best contiguous audience experience," he explained.

    The Munich residency was unique; there was no template for the team, but their technology scaled to the task. "The actual implementation went incredibly smoothly," explains Malone. There was so much pre-production; every detail was thought about so much by all the collaborators on the project. As there was so little time to get the stage and LEDs built on site, it was all extensively pre-tested before the final shipping. It would be so hard to fault-find on location. "I mean, it took me 15 minutes just to walk to the other end of the LED wall, and lord forbid if you forgot your radio or that one cable and you had to walk back for anything!"

    The two-hour concerts generated 530 million euros for the city of Munich over the ten shows, with each night playing to a stadium capacity of 80,000 fans. 8 x Disguise GX3 servers were used to drive the LED wall, and 18 x Disguise SDI VFC cards were required. There was a total pixel count of 37,425,856 being driven, split over 3 actors:
    Actor 1: Left 7748 x 1560
    Actor 2: Centre 2912 x 936 + scrolls and infill 5720 x 1568 + lift 3744 x 416
    Actor 3: Right 7748 x 1560
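    Those region sizes do account for the quoted total. As a quick back-of-the-envelope check (a standalone Python sketch, not part of Disguise's toolchain), summing the areas listed above reproduces the 37,425,856 figure:

        # Sanity-check the quoted 37,425,856-pixel total from the region sizes above.
        regions = {
            "Actor 1 left":           (7748, 1560),
            "Actor 2 centre":         (2912, 936),
            "Actor 2 scrolls/infill": (5720, 1568),
            "Actor 2 lift":           (3744, 416),
            "Actor 3 right":          (7748, 1560),
        }

        total = 0
        for name, (width, height) in regions.items():
            pixels = width * height
            total += pixels
            print(f"{name:24s} {width} x {height} = {pixels:,} px")

        print(f"Total driven pixels: {total:,}")  # 37,425,856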
    Disguise's Designer software was used to preview visuals before going on stage and to control them during the show, which was sequenced to timecode. Given the nature of the live event, there was a main and backup system for full 1:1 redundancy. The footage of Adele singing was shot in 4K with Grass Valley cameras. "With a live performance, there is a degree of unpredictability," says Malone. There was a tight set list of songs which did not change from night to night, all triggered by timecode, but structures were built in so that if Adele wanted to speak to the audience or do something special on any night, they could get her close-up face on screen very quickly. Additionally, there is a major requirement to be able to take over the screens at a moment's notice for safety messages should something completely unexpected happen. In reality, the screens serve many functions: they are there so the audience can see the artist they came for, but they are also there for safety, for the venue, and for the suppliers.

    This was the first time Adele has played mainland Europe since 2016, and the 36-year-old London singer signed off her final Munich show warning fans, "I will not see you for a long time." Adele, who last released the album 30 in 2021, is set to conclude her shows at The Colosseum at Caesars Palace this month and then is not expected to tour again soon. Both the Vegas and Munich residencies give the singer a high level of creative and logistical control compared with normal live touring, a luxury option only available to entertainment's biggest stars. Such residencies also allow for investment in bespoke technical installations. The Munich LED stage would simply not be viable to take on a world tour, both due to its size and because it was crafted explicitly for the German location.

    Disguise is at the heart of this new era of visual outdoor experiences, where one powerful integrated system of software, hardware and services can help create the next dimension of real-time concerts. They have partnered with the biggest entertainment brands and companies in the world, such as Disney, Snapchat, Netflix, ESPN, U2 at the Sphere, the Burj Khalifa, and Beyoncé. Thanks to the massive technical team, for Adele's fans, Adele in Munich was more than a concert: it was an immersive experience, seamlessly blending state-of-the-art visuals with world-class music.
  • WWW.FXGUIDE.COM
    Slow Horses
    Slow Horses is a funny, Emmy-nominated espionage drama that follows a team of British intelligence agents who serve in a dumping-ground department of MI5, nicknamed the Slow Horses (from Slough House), due to their career-ending mistakes. The team is led by the brilliant but cantankerous and notorious Jackson Lamb (Academy Award winner Gary Oldman). In Season 4, the team navigates the espionage world's smoke and mirrors to defend River Cartwright's (Jack Lowden) father from sinister forces.

    Academy Award winner Gary Oldman as Jackson Lamb

    Season 4 of Slow Horses premiered on September 4, 2024 on Apple TV+. It is also subtitled Spook Street, after the fourth book of the same name. Union VFX handled the majority of the visual effects, and the VFX Supervisor was Tim Barter (Poor Things). In the new season, the key VFX sequences beyond clean-up and stunt work included the London explosion, the Paris château fire, the explosion in the canal, and the destruction of the Westacres shopping mall. Union had done some work on season 3, and they were happy to take an even more prominent role in the new season. In season 4, they had approximately 190 shots and 11 assets. For season 3, they worked on approximately 200 shots and 20 assets but were not the lead VFX house.

    https://www.fxguide.com/wp-content/uploads/2024/10/Slow-Horses--Season-4Trailer.mp4

    Union VFX is an independent, BAFTA-winning visual effects studio founded in 2008, based in Soho, with a sister company in Montréal. Union has established a strong reputation for seamless invisible effects on a wide range of projects, building strong creative relationships with really interesting directors including Danny Boyle, Susanne Bier, Martin McDonagh, Marjane Satrapi, Sam Mendes, Fernando Meirelles and Yorgos Lanthimos.

    The Union VFX team used a mixture of practical effects, digital compositing, and digital doubles/face replacement to achieve the desired VFX for the show. Interestingly, at one point a hand grenade had to be tossed into a canal after it was placed in River's hoodie. The water was added digitally not only for the normal VFX reasons one might imagine, such as an explosion going off near the hero actors, but also because the water in the canal isn't actually fit to be splashed on anyone; it just isn't clean water, so the team did the water explosion fully digitally. Similarly, the shopping mall at Westacres, which was meant to have 214 retail stores, 32 restaurants and 8 cinema screens, was not actually blown up. In fact, the location wasn't even in London, and the background was all done with digital matte paintings to look like a real Westfield Shopping Centre, hence the fictional equivalent's similar name.

    The season opens with a suicide bomb going off at the Westacres shopping mall in London, carried out by Robert Winters. After Winters publishes a video confessing to the attack, a police force breaks into his flat, but three of the MI5 Dogs are killed by a booby trap. This explosion was genuinely shot on a backlot and then integrated into the plate photography of a block of flats.

    The Park, which is the hub of MI5 operations, has been seen since season one, but each season it is slightly different. This led Tim Barter to analyse all the previous seasons' work to try to build a conceptual model of what The Park building would actually look like, so that they could have continuity in season four across various interior, exterior and complex car park shots. "In season four there was a requirement to do probably one night and five daytime aerial views of it from different angles," explains Tim Barter. "We got to create whole sections of the Park that have never been created before, so I was there going through the previous seasons, looking at all the peripheral live action shots that were all shot in very different actual locations. It's like, there is this section where River comes out of the underground car park, and then he gets into this little door over here, which then goes through here on the side of this. And all the time, I'm trying to retroactively create that architecture (digital) of the Park, to be faithful to the previous seasons."

    After Harkness breaks into Molly's apartment and forces her to give him her security credentials, he tracks River's convoy of Dogs and sends the assassin Patrice to intercept. After slamming a dump truck into the convoy, Patrice kills four Dogs and kidnaps River. This extensive sequence was shot in London at night in the financial district, but that part of London is still an area where a lot of people live. "So there is no opportunity to have the sound of guns or blank muzzle flashes," Tim explains. "It was all added in post." The dump truck that smashes into the SUV was also not able to be done in London. It was filmed at a separate location, and then the aftermath was recreated by the art department in the real financial district for filming. "The dump truck actually ramming the car was shot with green screens and black screens and lots of camera footage. We actually used much less than we shot, but we did use the footage to make up a series of plates so we could composite it successfully over a digital background."

    River Cartwright (Jack Lowden), a British MI5 agent assigned to Slough House.

    For the scene where Jackson Lamb hits the assassin with a taxi, they started with an interior garage as a clean plate, then shot a stuntman on wires tumbling over it from various angles, and then married this together. "Actually, we ended up marrying three live-action plates. We had the garage, the green screen plate of the stunt actor, but then we also got clean plates of the garage interior as we were removing and replacing certain things in the garage," Tim comments. "We also had to do some instances of face replacement for that." Another instance of face replacement was the half-brother of River, who gets killed at the beginning of the series in River's father's bathroom. Originally, this was a dummy in the bathtub, but it looked a bit too obvious that it was fake, so an actor was cast and the team re-projected the actor's face onto the dummy in Nuke. Of course, there was also a lot of blood and gore in the final shot.

    Hugo Weaving as Frank Harkness, an American mercenary and former CIA agent.

    The show was shot at 4K resolution, with the exception of some drone footage that had to be stabilised and used for visual effects work, so in some instances the drone footage was 6K. This allowed the extra room to tighten up the shots, stabilise, and match to any practical SFX or explosions.
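    As a rough illustration of that headroom, assuming for example a 6144-pixel-wide 6K drone plate against a 3840-pixel-wide UHD delivery (figures chosen for illustration, not quoted by Union VFX), a few lines of Python show how much a shot can be punched in or repositioned before dropping below native delivery resolution:

        # Rough reframing-headroom calculation for oversized drone plates.
        # The exact widths are illustrative assumptions, not production specs.
        plate_width = 6144      # assumed 6K drone plate width in pixels
        delivery_width = 3840   # assumed UHD (4K) delivery width in pixels

        max_punch_in = plate_width / delivery_width    # 1.6x zoom before any upscaling
        slack_pixels = plate_width - delivery_width    # room to reposition or stabilise

        print(f"Maximum punch-in without upscaling: {max_punch_in:.2f}x")
        print(f"Horizontal slack for stabilisation/reframing: {slack_pixels} px")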
  • WWW.FXGUIDE.COM
    Wonder Dynamics up the game with AI Wonder Animation
    One popular application of early GenAI was using style transfer to create a cartoon version of a photograph of a person. Snapchat also enjoyed success with Pixar-style filters that made a person seem to be an animated character, but these could be considered as effectively image processing. Runway has recently shown Act-One, a new tool that artists can use to generate expressive and controllable character performances using Gen-3 Alpha. Act-One can create cartoon animations using video and voice performances as inputs to generative models, turning them into expressive live-action and animated content. Wonder Dynamics has escalated this to a new and interesting level with Wonder Animation, but outputting 3D rather than 2D content.

    Wonder Dynamics, an Autodesk company, has announced the beta launch of Wonder Studio's newest feature: Wonder Animation, which is powered by a first-of-its-kind video-to-3D scene technology that enables artists to shoot a scene with any camera in any location and turn the sequence into an animated scene with CG characters in a 3D environment.

    Wonder Animation

    The original Wonder Studio video-to-3D character

    In May, Autodesk announced that Wonder Dynamics, the makers of Wonder Studio, would become part of Autodesk. Wonder Studio first broke through as a browser-based platform that allowed people to use AI to replace a person in a clip with a computer-generated character. It effortlessly allowed users to replace a live-action actor with a mocap version of the digital character. The results and effectiveness of the original multi-tool machine learning / AI approach were immediately apparent. From shading and lighting to animation and ease of use, Wonder Studio was highly successful and had an impact almost immediately.

    The most innovative part of the new Wonder Animation video-to-3D scene technology is its ability to assist artists while they film and edit sequences with multiple cuts and various shots (wide, medium, close-ups).

    Maya export

    The technology then uses AI to reconstruct the scene in a 3D space and matches the position and movement of each camera's relationship to the characters and environment. This essentially creates a virtual representation of an artist's live-action scene containing all camera setups and character body and face animation in one 3D scene. Note, it does not convert the video background environment to specific 3D objects, but it allows the 3D artist to place the Wonder Dynamics 3D characters into a 3D environment where before, they were only placed back into the original video background. This is entirely different from a style transfer or an image processing approach. The output from Wonder Animation's video-to-3D scene technology is a fully editable 3D animation. The output contains 3D animation, character, environment, lighting, and camera tracking data, available to be loaded into the user's preferred software, such as Maya, Blender, or Unreal.
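    As a small illustration of that last step, the Python sketch below loads such an export into Blender and lists what arrived. It assumes the scene was exported as an FBX file; the path is hypothetical, and this is a generic Blender import, not Wonder Dynamics' documented workflow.

        # Minimal Blender (bpy) sketch: import an exported animated scene and list
        # the cameras, characters and lights that arrived, so they can be verified.
        import bpy

        EXPORT_PATH = "/path/to/wonder_animation_scene.fbx"  # hypothetical export path

        bpy.ops.import_scene.fbx(filepath=EXPORT_PATH)  # brings in meshes, armatures, cameras, lights

        for obj in bpy.context.selected_objects:        # the FBX importer selects the new objects
            has_anim = obj.animation_data is not None
            print(f"{obj.name:30s} type={obj.type:10s} animated={has_anim}")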
    Even though there have been tremendous advancements in AI, there is a current misconception that AI is a one-click solution, but that's not the case. The launch of Wonder Animation underscores the team's focus on bringing the artist one step closer to producing fully animated films while ensuring they retain creative control. Unlike the black-box approach of most current generative AI tools on the market, the Wonder Dynamics tools are designed to allow artists to actively shape and edit their vision instead of just relying on an automated output.

    Wonder Animation's beta launch is now available to all Wonder Studio users. The team aims to bring artists closer to producing fully animated films. "We formed Wonder Dynamics and developed Wonder Studio (our cloud-based 3D animation and VFX solution) out of our passion for storytelling coupled with our commitment to make VFX work accessible to more creators and filmmakers," comments co-founder Nikola Todorovic. "It's been five months since we joined Autodesk, and the time spent has only reinforced that the foundational Wonder Dynamics vision aligns perfectly with Autodesk's longstanding commitment to advancing the Media & Entertainment industry through innovation."

    Here is the official release video:
  • WWW.FXGUIDE.COM
    Cinesite for a more perfect (The) Union
    Netflix's The Union is the story of Mike (Mark Wahlberg), a down-to-earth construction worker who is thrust into the world of superspies and secret agents when his high school sweetheart, Roxanne (Halle Berry), recruits him for a high-stakes US intelligence mission. Mike undergoes vigorous training, normally lasting six months but condensed to under two weeks. He is given another identity, undergoes psychological tests, and is trained in hand-to-hand combat and sharpshooting before being sent on a complex mission with the highly skilled Roxanne. As one might expect, this soon goes south, and the team needs to save the Union, save the day, and save themselves.

    Julian Farino directed the film, Alan Stewart did the cinematography, and Max Dennison was Cinesite's VFX supervisor. The film was shot on the Sony Venice using an AXS-R7 at 4K and Panavision T Series anamorphic lenses.

    Cinesite's VFX reel:
    https://www.fxguide.com/wp-content/uploads/2024/10/Cinesite-The-Union-VFX-Breakdown-Reel.mp4

    During training, Mike learns to trust Roxanne's instructions in a dangerous "running blind" exercise. The precipitous drop makes the sequence really interesting, and this was achieved by digital set removal so that the actors were not in mortal danger. Cinesite's clean compositing and colour correction allowed for the many car chase and complex driving sequences in the film. This included motorcycle chases, hazardous driving around London, and a very complex three-car chase.

    The three-car chase with the BMW, Porsche and Ford was shot in Croatia and primarily used pod drivers, drivers sitting in a pod on top of the car doing the actual driving while the actors below simulated driving. "We had to remove the pods, replace all the backgrounds, put new roofs on the cars, replace the glass, and on some wider shots do face replacement when stunt drivers were actually driving the cars without pods," Max outlines. The team did not use AI or machine learning for the face replacements; rather, all the face replacements were based on cyber scans of the actual actors. This was influenced by the fact that in the vast majority of cases, the team didn't have to animate the actors' faces, as the action sequences are cut so quickly and the faces are only partially visible through the cars' windows. For the motorbike chase, the motorbikes were driven on location by stunt people whose faces were replaced with those of the lead actors. "We had scans done of Mark and Halle's faces so that we could do face replacement digitally through the visors of the helmets," explains Max Dennison.

    "It is one of those films that liked to show London and all the sights of London, so we see Covent Garden, Seven Dials, Piccadilly Circus, a bit of a fun tourist trip," comments Max. Given the restrictions on filming on the streets of London, the initial plan was to shoot in an LED volume. Apparently, the filmmakers explored this but preferred instead to shoot green screen, and the results stack up very well. "When Mike comes out of the Savoy hotel and drives on the wrong side of the road, all those exterior environments were replaced by us from array photography," he adds.

    Cinesite has a strong reputation for high-end seamless compositing and invisible visual effects work, but in The Union the script allowed for some big action VFX sequences which are both exciting and great fun. For the opening sequence of the suitcase exchange that goes wrong, the team was required to produce consistent volumetric effects, as at the beginning of that sequence it is raining and by the end it is not. Given that the shots were not filmed in order, nor with the correct weather, the team had about 20 VFX shots to transition from mild rain to clearing skies through complex camera moves and environments around London.

    In addition to the more obvious big VFX work, there was wire removal, set extension and cleanup work required for the action sequences. Shot in and around the correct crowded actual locations, there was still a need to use digital matte paintings and set extensions and to apply digital cleanup for many of the exteriors. For the dramatic fall through the glass windows, stunt actors fell using wires, and then the team not only replaced the wires but also did all of the deep background and 3D environments around the fall sequence. In the end, the team also built 3D glass windows, as it was much easier to navigate the wire removal when they had control of the shattering glass. This was coupled with making sure that the actors' clothes were not showing the harnesses or being pulled by the wires in ways that would give away how the shot was done.

    The film shot in New Jersey (USA), London (England), Slovenia, Croatia and Italy (street scenes). Principal photography was at Shepperton Studios, as the film was primarily London-based. A lot of the stunt work was done in Croatia, at a studio set built for the film, especially for the rooftop chase. Unlike some productions, the filmmakers sensibly used blue and green screen where appropriate, allowing the film to maximise the budget on screen and produce elaborate, high-octane chase and action sequences. In total, Cinesite did around 400 shots. Much of this was Cinesite's trademark invisible VFX work based on clever compositing and very good eye matching of environments, lighting and camera focus/DOF.

    Given where the film finishes, perhaps this is the start of a Union franchise? Films such as The Union are fun, engaging action films that have done very well for Netflix, often scoring large audiences even when not as serious or pretentious as the Oscar-nominated type of films that tend to gain the most publicity. And in the end, this is great work for the VFX artists and post-production crews.
  • WWW.FXGUIDE.COM
    fxpodcast #377: Virtually Rome For Those About To Die
    For Those About To Die is an epic historical drama series directed by Roland Emmerich. The director is known as the master of disaster. This was his first move into series television, being very well known for his sci-fi epics such as Independence Day, Godzilla, The Day After Tomorrow, White House Down, and Moonfall.

    The director, Roland Emmerich, on the LED volume (Photo by: Reiner Bajo/Peacock)

    Pete Travers was the VFX supervisor on For Those About To Die. The team used extensive LED virtual production, with James Franklin as the virtual production supervisor. We sat down with Pete Travers and James Franklin to discuss the cutting-edge virtual production techniques that played a crucial role in the series' completion.

    Those About To Die, Episode 101 (Photo by: Reiner Bajo/Peacock)

    The team worked closely with DNEG, as we discuss in this week's fxpodcast. We discuss how virtual production techniques enhanced the efficiency and speed of the 1,800 scenes that were done with virtual production, how this meant the production only needed 800 traditional VFX shots to bring ancient Rome to life, and how it enabled the 80,000-seat Colosseum to be filled with just a few people.

    The LED volume stages were at Cinecittà Studios in Italy, with a revolving stage and the main backlot right outside the stage door. As you will hear in the fxpodcast, there were two LED volumes. The larger stage had a rotating floor, which allowed different angles of the same physical set (inside the volume) to be filmed; as the floor rotated, so could the images on the LED walls.

    From Episode 101 (in camera)
    The actual LED set for that setup (Photo by: Reiner Bajo/Peacock)

    We discuss in the podcast how the animals responded to the illusion of space that an LED stage provides, how they managed scene changes so as not to upset the horses, and how one incident had the crew running down the street outside the stage chasing runaway animals!

    The shot in camera
    Behind the scenes of the same shot

    The team shot primarily on the Sony Venice 2. The director is known for big wide-angle lens shots, but trying to film an LED stage on a 14mm lens can create serious issues.

    From Episode 108 (final shot)
    Crew on set, in front of the LED wall of the Colosseum. (Photo by: Reiner Bajo/Peacock)

    The team also produced fully digital 3D VFX scenes.
  • WWW.FXGUIDE.COM
    Q&A with DNEG on the environment work in Time Bandits
    Jelmer Boskma was the VFX Supervisor at DNEG on Time Bandits (Apple TV+). The show is a modern twist on Terry Gilliam's classic 1981 film. The series, about a ragtag group of thieves moving through time with their newest recruit, an eleven-year-old history nerd, was created by Jemaine Clement, Iain Morris, and Taika Waititi. It stars Lisa Kudrow as Penelope, Kal-El Tuck as Kevin, Tadhg Murphy as Alto, Roger Jean Nsengiyumva as Widgit, Rune Temte as Bittelig, Charlyne Yi as Judy, Rachel House as Fianna and Kiera Thompson as Saffron. In addition to the great environment work the company did, DNEG 360, a division of DNEG in partnership with Dimension Studio, delivered virtual production services for Time Bandits.

    FXGUIDE: When did you start on the project?

    Jelmer Boskma: Post-production was already underway when I joined the project in March 2023, initially to aid with the overall creative direction for the sequences awarded to DNEG.

    FXGUIDE: How many shots did you do over the series?

    Jelmer Boskma: We delivered 1,094 shots, featured in 42 sequences throughout all 10 episodes. Our work primarily involved creating environments such as the Fortress of Darkness, Sky Citadel, Desert, and Mayan City. We also handled sequences featuring the Supreme Being's floating head, Pure Evil's fountain and diorama effects, as well as Kevin's bedroom escape and a number of smaller sequences and one-offs peppered throughout the season.

    FXGUIDE: And how much did the art department map this out, and how much were the locations down to your team to work out?

    Jelmer Boskma: We had a solid foundation from both the art department and a group of freelance artists working directly for the VFX department, providing us with detailed concept illustrations. The design language and palette of the Sky Citadel especially were resolved to a large extent. For us it was a matter of translating the essence of that key illustration into a three-dimensional space and designing several interesting establishing shots. Additional design exploration was only required on a finishing level, depicting the final form of the many structures within the citadel and the surface qualities of the materials from which the structures were made. The tone of the Fortress of Darkness environment required a little bit more exploration. A handful of concept paintings captured the scale, proportions and menacing qualities of the architecture, but were illustrated in a slightly looser fashion. We focused on distilling the essence of each of these concepts into one coherent environment. Besides the concept paintings, we did receive reference in the form of a practical miniature model that was initially planned to be used in shot, but due to the aggressive shooting schedule it could not be finished to the level where it would have worked convincingly. Nonetheless it served as a key piece of reference for us to help capture the intent and mood of the fortress. Other environments, like the Mayan village, the besieged Caffa fortress, and Mansa Musa's desert location, were designed fully by our team in post-production.

    FXGUIDE: The Mayan village had a lot of greens and jungle; were there many practical studio sets?

    Jelmer Boskma: We had a partial set with some foliage for the scenes taking place on ground level. The establishing shots of the city, palace and temple, as well as the surrounding jungle and chasm, were completely CG. We built as much as we could with 3D geometry to ensure consistency in our lighting, atmospheric perspective and dynamism in our shot design.
The final details for the buildings as well as the background skies were painted and projected back on top of that 3D base. To enhance realism, the trees and other foliage were rendered as 3D assets allowing us to simulate movement in the wind. FXGUIDE: Were the actors filmed on green/blue screen?Jelmer Boskma: In many cases they were. For the sequences within Mansa Musas desert camp and the Neanderthal settlement, actors were shot against DNEG 360s LED virtual production screens, for which we provided real-time rendered content early on in production. To ensure that the final shots were as polished and immersive as possible, we revisited these virtual production backdrops in Unreal Engine back at DNEG in post. This additional work involved enhancing the textural detail within the environments and adding subtle depth cues to help sell the scale of the settings. Access to both the original Unreal scenes and the camera data was invaluable, allowing us to work directly with the original files and output updated real-time renders for compositing. While it required careful extraction of actors from the background footage shot on the day, this hybrid approach of virtual production and refinement in post ultimately led to a set of pretty convincing, completely synthetic, environments. FXGUIDE: Could you outline what the team did for the Fortress of Darkness?Jelmer Boskma: The Fortress of Darkness was a complex environment that required extensive 3D modelling and integration. We approached it as a multi-layered project, given its visibility from multiple angles throughout the series. The fortress included both wide establishing shots and detailed close-ups, particularly in the scenes during the seasons finale.For the exterior, we developed a highly detailed 3D model to capture the grandeur and foreboding nature of the fortress. This included creating intricate Gothic architectural elements and adding a decay effect to reflect the corrosive, hostile atmosphere surrounding the structure. The rivers of lava, which defy gravity and flow towards the throne room, were art directed to add a dynamic and sinister element to the environment and reinforce the power Pure Evil commands over his realm.Inside, we extended the practical set, designed by Production Designer Ra Vincent, to build out the throne room. This space features a dramatic mix of sharp obsidian and rough rock textures, which we expanded with a 3D background of Gothic ruins, steep cliffs, and towering stalactites. To ensure consistency and realism, we rendered these elements in 3D rather than relying on 2.5D matte paintings, allowing for the dynamic lighting effects like fireworks and lightning seen in episode 10. FXGUIDE: What was the project format was it 4k or 2k (HDR?) and what resolution was the project shot at primarily?Jelmer Boskma: The project was delivered in 4K HDR (3840 x 2160 UHD), which was also the native resolution at which the plates were photographed. To manage render times effectively and streamline our workflow, we primarily worked at half resolution for the majority of the project. This allowed us to focus on achieving the desired creative look without being slowed down by full-resolution rendering. 
FXGUIDE: Which renderer do you use for environment work now?

Jelmer Boskma: For Time Bandits we were still working within our legacy pipeline, rendering primarily inside of Clarisse. We have since switched over to a Houdini-centric pipeline where most of our rendering is done through RenderMan.

FXGUIDE: How completely did you have to build the sets? For example, for the Sky Citadel, did you have a clear idea of the shooting angles needed and the composition of the shots, or did you need to build the environments without full knowledge of how they would be shot?

Jelmer Boskma: I would say fairly complete, but all within reason. We designed the establishing shots as we were translating the concept illustrations into rough 3D layouts. Once we got a decent idea of the dimensions and scale of each environment, we would pitch a couple of shot ideas that we found interesting for featuring the environment. It would not have made sense to build these environments to the molecular level, as the schedule would not have allowed for that. In order to be as economical as possible, we set clear visual goals and ensured that we focussed our time only on what we were actually going to see on screen. There's nuance there, of course, as we didn't want to paint ourselves into a corner, but with the demanding overall scope that Time Bandits had, and with so many full CG environment builds to be featured, DNEG's producer Viktorija Ogureckaja and I had to make sure our time was well balanced.

FXGUIDE: Were there any particular challenges to the environment work?

Jelmer Boskma: The most significant challenge was working without any real locations to anchor our environments. For environments like the Fortress of Darkness, Sky Citadel, Mayan City, and Caffa, we were dealing with almost entirely synthetic CG builds. For the latter two, we incorporated live-action foreground elements with our actors, but the core environments were fully digital. Creating a sense of believability in completely CG environments requires considerable effort. Unlike practical locations, which naturally have imperfections and variations, CG environments are inherently precise and clean, which can make them feel less grounded in reality. To counteract this, we needed to introduce significant detail, texture, and imperfections to make the environments look more photorealistic. Additionally, our goal was not just to create believable environments but also to ensure they were visually compelling. The production of these larger establishing shots consumed a significant portion of our schedule, requiring careful attention to both the technical and aesthetic aspects of the work. The contributions made by all of the artists involved on this show were vital in achieving both of these goals. Their creativity and attention to detail were crucial in transforming initial concepts into visually striking final shots.
Reflecting on the project, it's clear that the quality of these complex environments was achieved through the skill and dedication of our artists. Their efforts not only fulfilled the project's requirements but also greatly enhanced the visual depth and supported the storytelling, creating immersive settings that, I hope, have managed to captivate and engage the audience.
  • WWW.FXGUIDE.COM
    fxpodcast #378: Ray Tracing FTW, the Chaos Project Arena short film. We chat with the posse
    Chaos has released Ray Tracing FTW, a short film that showcases its Project Arena virtual production toolset. Just before the film was released, we spoke with the director, writers, and producers: Chris Nichols, Daniel Thron, and Erick Schiele. As you will hear in this hilarious and yet genuinely informative fxpodcast, the team worked with some of the best-known names in the industry and used virtual production tech that allowed them to do up to 30 set-up shots during a standard 10-hour shoot day. In the film, the V-Ray environment of an Old West town was designed by Erick Schiele and built by The Scope with the help of KitBash3D and TurboSquid assets. The production used this environment for everything from all-CG establishing shots and tunnel sequences to the background for a physical train car set, which read convincingly on camera thanks to full ray tracing. The Director of Photography was Richard Crudo (Justified, American Pie), who shot nearly every shot in-camera, barring a massive VFX-driven train crash. The production's speed and flexibility were demonstrated by the final 3D Hacienda, which the team bought online and got on screen in under 15 minutes. Special thanks to Chris Nichols, director of special projects at the Chaos Innovation Lab and VFX supervisor/producer of Ray Tracing FTW.
  • WWW.FXGUIDE.COM
    VFXShow 287: Deadpool & Wolverine
    This week, the team discusses the VFX of the smash hit Deadpool & Wolverine, which is the 34th film in the Marvel Cinematic Universe (MCU) and the sequel to Deadpool (2016) and Deadpool 2 (2018). And not everyone's reaction is what you might expect! Shawn Levy directed the film from a screenplay he wrote with Ryan Reynolds, Rhett Reese, Paul Wernick, and Zeb Wells. Reynolds and Hugh Jackman star as Wade Wilson / Deadpool and Logan / Wolverine, alongside a host of cameos and fan-friendly references. Swen Gillberg was the production VFX Supervisor, and Lisa Marra was the production VFX Producer. The VFX companies that brought this comic/action world to life included Framestore, ILM, Wētā FX, Base FX, Barnstorm VFX, Raynault VFX, and Rising Sun Pictures. Deadpool & Wolverine premiered on July 22, 2024. It has grossed over $1.3 billion worldwide so far, becoming the 22nd-highest-grossing film of all time, the highest-grossing R-rated film of all time, and the second-highest-grossing film of 2024.
The Deadpool Multi-verse crew this week are:
WallinPool * @mattwallin www.mattwallin.com. Follow Matt on Mastodon: @[emailprotected]
DiamondPool @jasondiamond www.thediamondbros.com
SeeMorePool @mikeseymour. www.fxguide.com. + @mikeseymour
Special thanks to Matt Wallin for the editing & production of the show with help from Jim Shen.
  • WWW.FXGUIDE.COM
    Google's NotebookLM: everyone has a podcast about AI, even AI itself
    NotebookLM is a personal AI research assistant powered by Google's LLM, Gemini 1.5 Pro. If you upload a document or PDF to NotebookLM, it can not only summarise the information, as Llama 3 or ChatGPT does, but also produce a podcast-like discussion between two people about the content. It is incredible. (See our tests below.) I uploaded a 40-page, highly technical academic paper, and within a few minutes, there was an 11-minute discussion about it, with jokes, slang, and apparent interest in the topic. The voices sound natural and conversational. If you have a 20-minute commute, you could upload a couple of complex SIGGRAPH papers and listen to your own personal podcast about the papers on your way to work. It is a jaw-dropping next step in conversational agents and LLMs.

Will it replace podcasting? Well, as brilliant as it is, it does not generate insightful opinions about the value or applicability of the technology; it does not discuss whether this will be successful or require further development before it is adopted. It only discusses the PDF from the perspective of the paper itself, augmented by the LLM's general understanding of the topic. It is also not guaranteed to be accurate, it does not have any real understanding of the content, and it is not significantly more clever than ChatGPT or any other LLM.

It seems to be much like RAG + LLM. Retrieval-augmented generation (RAG) is an AI framework for improving the quality of LLM-generated responses by grounding the model on external sources of knowledge. Vectors are produced that focus the LLM's knowledge, so the system can use a general LLM yet give localised, specialised responses. In the case of NotebookLM, this is then filtered and presented via a conversation between two inferred commentators. (A toy code sketch of this retrieve-then-prompt pattern appears at the end of this article.)

Test Drive (What NotebookLM said about this article).
Summary: The source discusses a new AI tool called NotebookLM, which uses a large language model (LLM) to summarise and discuss scientific research papers in a conversational format. It compares this tool to other AI frameworks like RAG (Retrieval-Augmented Generation) and explores potential impacts on the VFX industry. While recognising the potential for disruption, the source argues that these technologies may create new opportunities by enabling technical artists to better understand complex subjects and lead to the creation of novel visual experiences. The author emphasizes the need for VFX professionals to adapt and leverage these advancements to ensure their continued relevance and value.

Audio Test Drive (This is NotebookLM discussing the article).
Here is a NotebookLM conversation audio version. Note that it made a mistake in the first minute regarding SIGGRAPH, but this software is labelled as Experimental.
https://www.fxguide.com/wp-content/uploads/2024/09/NotebookMLFXG.m4a

Test Drive (What NotebookLM said about my 40-page academic article).
https://www.fxguide.com/wp-content/uploads/2024/09/ISR.m4a

Impact for VFX?
The two voices sound remarkably natural, insanely so. Given the current trajectory of AI, we can only be a few beats away from uploading audio and having voice-cloned versions, so that these base responses could sound like you, your partner, or your favourite podcaster. The technology is presented by Google as an AI collaborative virtual research assistant. After all, the rate of essential advances coming out in this field alone makes keeping up to date feel impossible, so a little AI help sounds sensible, if not necessary.

So why does this matter for VFX?
Is this the dumbing down of knowledge into Knowledge McNuggets, or is it a way to bridge complex topics so anyone can gain introductory expertise on even the most complex subject? Apart from the obvious use of making complex subjects more accessible to technical artists, how does this impact VFX? I would argue that this, or the latest advances from Runway's video-to-video or Sora's GenAI, all provide massive disruption, but they also invite our industry's creativity for technical problem-solving. GenAI videos are not engaging dramas or brilliant comedies. Video inference is hard to direct and complex to piece into a narrative. And NotebookLM will be hard-pushed to be as engaging as any two good people on a podcast. But these are insanely clever new technologies, so they invite people like VFX artists to make the leap from technology demos to sticky, engaging real-world use cases. My whole career, I have seen tech at conferences and then discussed it with friends later: I can't wait to see how ILM, Framestore or Wētā FX will use that tech and make something brilliant to watch.

As an industry, we are suffering massive reductions in production volume that are hurting many VFX communities. I don't think this is due to AI, but in parallel to those structural issues, we need to find ways to make this tech useful. At the moment, it is stunningly surprising and often cool, but how do we use it to create entirely new viewer experiences that people want? It is not an easy problem to solve, but viewed as input technology and not the final solution, many of these new technologies could create jobs. I don't believe AI will generate millions of new Oscar-level films, but I also don't believe it will be the death of our industry. Five years ago, it was predicted we'd all be in self-driving cars by now. It has not happened. Four years ago, radiologists were all going to be out of a job, and so it goes.

If we assume NotebookLM is both a spectacular jump in technology and not going to replace humans, what could you use it for? What powerful user experiences could it support? Theme park and location-based entertainment? AVP sport agents/avatars? A new form of gaming? A dog-friendly training tool? AI is producing incredible affordances in visual and creative domains, so why can't the visual effects industry be the basis of a new Visual AI industry that takes this tech and really makes it useful for people?
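To make the RAG-plus-LLM pattern described above concrete, here is a toy Python sketch of retrieve-then-prompt: documents are embedded as hashed bag-of-words vectors, the closest matches to a query are retrieved, and they are folded into a prompt for a language model. It is only a sketch of the general pattern; the embedding scheme and the stubbed call_llm function are assumptions for the example and say nothing about how NotebookLM is actually built.

```python
import numpy as np
from collections import Counter

def embed(text, dim=256):
    """Toy bag-of-words embedding via hashed token counts (illustrative only)."""
    vec = np.zeros(dim)
    for token, count in Counter(text.lower().split()).items():
        vec[hash(token) % dim] += count
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def retrieve(query, documents, top_k=2):
    """Return the top_k documents most similar to the query (cosine similarity)."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: float(embed(d) @ q), reverse=True)
    return ranked[:top_k]

def call_llm(prompt):
    """Stub standing in for a real LLM API call (an assumption for this sketch)."""
    return f"[LLM response grounded in a prompt of {len(prompt)} characters]"

if __name__ == "__main__":
    docs = [
        "Gaussian splatting represents a scene as millions of 3D gaussians.",
        "Path tracing estimates light transport by sampling random ray paths.",
        "Retrieval-augmented generation grounds an LLM in external documents.",
    ]
    question = "How does retrieval-augmented generation ground an LLM?"
    context = "\n".join(retrieve(question, docs))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    print(call_llm(prompt))
```

A production system would swap the toy embedding for a learned one and the stub for a real model call, but the grounding step, retrieve first and only then generate, is the part that keeps the answers anchored to the uploaded source.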