fxguide | fxphd
fxphd.com is the leader in pro online training for vfx, motion graphics, and production. Turn to fxguide for vfx news and fxguidetv & audio podcasts.
Recent Updates
  • Silo fxpodcast DOP Ed Moore
    www.fxguide.com
Season 2 of Silo on Apple TV+ is set to deepen the intrigue of the underground dystopian world that captivated audiences in its first run. Based on Hugh Howey's Silo trilogy of novels (Wool, Shift, and Dust), Silo has become a flagship show for Apple TV+, combining compelling visual storytelling with stunning visual effects. The first season introduced audiences to the titular Silo, a massive, self-contained underground community where humanity survives after an apocalyptic event. With secrets layered at every level of the structure, the series is as much about the physical environment as it is about the psychological impact of isolation and control.

The cinematography in Silo is a critical element in establishing its unique tone. By leveraging tight, claustrophobic framing and dim, moody lighting, the series underscores the oppressive environment of the Silo while reflecting the characters' psychological confinement. Alternately, moments of expansive composition and dynamic camera movement highlight rare glimpses of freedom or rebellion, creating a visual rhythm that mirrors the emotional beats of the story. This meticulous approach elevates the storytelling, ensuring every shot contributes to the immersive and unsettling atmosphere.

In this week's fxpodcast we break down the series with one of the show's principal Directors of Photography, Ed Moore, BSC. As with most high-budget tentpole series, a set of crews alternates per episode; Ed Moore and Director Amber Templemore worked on multiple episodes, especially in the latter half of season 2.

In Silo season 2, the production design and visual effects work hand in hand to create a tangible, oppressive environment. The show's commitment to realism is apparent in every frame, with CG work often invisible but essential to the story's immersive quality. The lead visual effects were by ILM; as Ed Moore discusses in the podcast, while the set was vast, ILM had the task of working closely with the DOPs to extend the set and integrate the visuals. Season 2 promises to expand the visual and narrative elements of the show, delving deeper into the mysteries of the Silo's origins and the larger world outside. The challenge for the creative teams was to balance the show's grounded aesthetic with the need to broaden its visual scope, showing more of the post-apocalyptic landscape beyond the Silo's walls.

From a technical perspective, Silo stands out for its restrained yet precise use of visual effects combined with dark and complex plate photography. An example is the opening shot of Season 2's fifth episode. As Ed Moore discusses in the fxpodcast, the shot was a remarkable combination of a very cleverly designed and executed live-action camera move with ILM providing extensive set extension. Ed posted a clip on Instagram (@edmooredop) showing the behind-the-scenes camera department engineering and execution. Ed Moore has also provided a set of his own personal black-and-white behind-the-scenes photography from season 2. Below is a featurette showing the complex filming sets and extensive production design.
  • Gaslight FX Studio fxpodcast SFX Makeup Supervisor Chris Bridges
    www.fxguide.com
Chris Bridges is the SFX visionary behind Gaslight FX Studio, a powerhouse in the world of prosthetic makeup and special effects. With nearly 30 years of experience, Chris has carved a niche in the industry, seamlessly blending artistry with cutting-edge technology. As the Head of Department for Prosthetics on Star Trek: Strange New Worlds, and a proud Emmy winner, Chris's expertise is sought after by filmmakers looking to elevate their storytelling through innovative special effects.

Gaslight FX Studio recently collaborated on the gripping horror/drama Heretic, which premiered at TIFF and was released last November. This project not only showcased Chris's exceptional talent but also exemplified his commitment to storytelling. The studio crafted a variety of intricate makeup effects, from the hauntingly emaciated look of the Prophets to the striking prosthetic arm gag featuring Hugh Grant.

As you can hear in this week's fxpodcast, what sets Gaslight FX apart is its meticulous approach. Chris and his team conducted an in-depth breakdown of the script, aligning closely with the directors to ensure that every effect served to enhance the film's narrative. The prosthetics were designed not merely for shock value but to immerse audiences in the psychological horror of confinement and starvation.

Fans eagerly anticipate the release of Season 3 of Star Trek: Strange New Worlds in early 2025, with a fourth season already ordered. Download and listen to our fxpodcast on Gaslight FX Studio and the brilliant work of Chris Bridges, wherever you listen to podcasts.
  • www.fxguide.com
The EE British Academy Film Awards (BAFTAs) are prestigious annual awards presented by the British Academy of Film and Television Arts to honor outstanding achievements in the film industry. Recognized as one of the most significant film awards globally, the BAFTAs celebrate excellence across a range of categories, including acting, directing, writing, cinematography, and production design. The awards were first established in 1947, and the ceremony takes place in London, attracting leading figures from the international film community. Often considered a precursor to the Oscars, the BAFTAs provide a strong indication of what the VFX and animation community believe have made substantial contributions to the art form.

Special Visual Effects Nominees
Better Man - Luke Millar, David Clayton, Keith Herft, Peter Stubbs. Our fxpodcast with Director Michael Gracey: Director Michael Gracey & making Better Man
Dune: Part Two - Paul Lambert, Stephen James, Gerd Nefzer, Rhys Salcombe. Our fxpodcast with Visual Effects Supervisor Paul Lambert: Dune: Part Two fxpodcast Visual Effects Supervisor Paul Lambert
Gladiator II - Mark Bakowski, Neil Corbould, Nikki Penny, Pietro Ponti. Our fxguide story with Overall Production VFX Supervisor Mark Bakowski: Gladiator II: Mark Bakowski Overall VFX Sup. and Ridleygrams
Kingdom of the Planet of the Apes - Erik Winquist, Rodney Burke, Paul Story, Stephen Unterfranz. Our fxpodcast with Director Wes Ball and VFX Supervisor Erik Winquist: fxpodcast #369: Kingdom of the Planet of the Apes with Director Wes Ball and VFX Supervisor Erik Winquist
Wicked - Pablo Helman, Paul Corbould, Jonathan Fawkner, Anthony Smith. Our fxpodcast with renowned visual effects supervisor Pablo Helman: Pablo Helman breaks down Wicked

Animated Film Nominees
Flow - Gints Zilbalodis, Matīss Kaža
Inside Out 2 - Kelsey Mann, Mark Nielsen
Wallace and Gromit: Vengeance Most Fowl - Nick Park, Merlin Crossingham, Richard Beek
The Wild Robot - Chris Sanders, Jeff Hermann

The awards ceremony will take place on 16 February 2025 at the Royal Festival Hall in London. Beyond presenting the awards, BAFTA is an independent arts charity supporting the arts.
  • www.fxguide.com
OUTSTANDING VISUAL EFFECTS IN A PHOTOREAL FEATURE
Better Man - Luke Millar, Andy Taylor, David Clayton, Keith Herft, Peter Stubbs
Dune: Part Two - Paul Lambert, Brice Parker, Stephen James, Rhys Salcombe, Gerd Nefzer
Kingdom of the Planet of the Apes - Erik Winquist, Julia Neighly, Paul Story, Danielle Immerman, Rodney Burke
Mufasa: The Lion King - Adam Valdez, Barry St. John, Audrey Ferrara, Daniel Fotheringham
Twisters - Ben Snow, Mark Soper, Florian Witzel, Susan Greenhow, Scott Fisher

OUTSTANDING SUPPORTING VISUAL EFFECTS IN A PHOTOREAL FEATURE
Blitz - Andrew Whitehurst, Sona Pak, Theo Demiris, Vincent Poitras, Hayley Williams
Civil War - David Simpson, Michelle Rose, Freddy Salazar, Chris Zeh, J.D. Schwalm
Horizon: An American Saga Chapter 1 - Jason Neese, Armen Fetulagian, Jamie Neese, J.P. Jaramillo
Nosferatu - Angela Barson, Lisa Renney, David Scott, Dave Cook, Pavel Sgner
Young Woman and the Sea - Richard Briscoe, Carrie Rishel, Jeremy Robert, Stéphane Dittoo, Ivo Jivkov

OUTSTANDING VISUAL EFFECTS IN AN ANIMATED FEATURE
Inside Out 2 - Kelsey Mann, Mark Nielsen, Sudeep Rangaswamy, Bill Watral
Moana 2 - Carlos Cabral, Tucker Gilmore, Ian Gooding, Gabriela Hernandez
The Wild Robot - Chris Sanders, Jeff Hermann, Jeff Budsberg, Jacob Hjort Jensen
Transformers One - Frazer Churchill, Fiona Chilton, Josh Cooley, Stephen King
Ultraman: Rising - Hayden Jones, Sean M. Murphy, Shannon Tindle, Mathieu Vig

OUTSTANDING VISUAL EFFECTS IN A PHOTOREAL EPISODE
Fallout; The Head - Jay Worth, Andrea Knoll, Grant Everett, João Sita, Devin Maggio
House of the Dragon; Season 2; The Red Dragon and the Gold - Dai Einarsson, Tom Horton, Sven Martin, Wayne Stables, Mike Dawson
Shōgun; Anjin - Michael Cliett, Melody Mead, Philip Engström, Ed Bruce, Cameron Waldbauer
Star Wars: Skeleton Crew; Episode 5 - John Knoll, Pablo Molles, Jhon Alvarado, Jeff Capogreco
The Lord of the Rings: The Rings of Power; Season 2; Eldest - Jason Smith, Tim Keene, Ann Podlozny, Ara Khanikian, Ryan Conder

OUTSTANDING SUPPORTING VISUAL EFFECTS IN A PHOTOREAL EPISODE
Expats: Home - Robert Bock, Glorivette Somoza, Charles Labbé, Tim Emeis
Lady in the Lake; It Has to Do With the Search for the Marvelous - Jay Worth, Eddie Bonin, Joe Wehmeyer, Eric Levin-Hatz, Mike Myers
Masters of the Air; Part Three; The Regensburg-Schweinfurt Mission - Stephen Rosenbaum, Bruce Franklin, Xavier Matia Bernasconi, David Andrews, Neil Corbould
The Penguin; Bliss - Johnny Han, Michelle Rose, Goran Pavles, Ed Bruce, Devin Maggio
The Tattooist of Auschwitz; Pilot - Simon Giles, Alan Church, David Schneider, James Hattsmith

OUTSTANDING VISUAL EFFECTS IN A REAL-TIME PROJECT
[REDACTED] - Fabio Silva, Matthew Sherman, Caleb Essex, Bob Kopinsky
Destiny 2: The Final Shape - Dave Samuel, Ben Fabric, Eric Greenlief, Glenn Gamble
Star Wars Outlaws - Stephen Hawes, Lionel Le Dain, Benedikt Podlesnigg, Bogdan Draghici
What If? An Immersive Story - Patrick N.P. Conran, Shereif Fattouh, Zain Homer, Jax Lee
Until Dawn - Nicholas Chambers, Jack Hidde Glavimans, Alex Gabor

OUTSTANDING VISUAL EFFECTS IN A COMMERCIAL
YouTube TV NFL Sunday Ticket: The Magic of Sunday - Chris Bayol, Jeremy Brooks, Lane Jolly, Jacob Bergman
Disney; Holidays 2024 - Adam Droy, Helen Tang, Christian Baker-Steele, David Fleet
Virgin Media; Walrus Whizzer - Sebastian Caldwell, Ian Berry, Ben Cronin, Alex Grey
Coca-Cola; The Heroes - Greg McKneally, Antonia Vlasto, Ryan Knowles, Fabrice Fiteni
Six Kings Slam; Call of the Kings - Ryan Knowles, Joe Billington, Dean Robinson, George Savvas

OUTSTANDING VISUAL EFFECTS IN A SPECIAL VENUE PROJECT
D23; Real-Time Rocket - Evan Goldberg, Alyssa Finley, Jason Breneman, Alice Taylor
The Goldau Landslide Experience - Roman Kaelin, Gianluca Ravioli, Florian Baumann
MTV Video Music Awards; Slim Shady Live - Jo Plaete, Sara Mustafa, Cameron Jackson, Andries Courteaux
Tokyo DisneySea; Peter Pan's Never Land Adventure - Michael Sean Foley, Kirk Bodyfelt, Darin Hollings, Bert Klein, Maya Vyas
Paris Olympics Opening Ceremony; Run - Benjamin Le Ster, Gilles De Lusigman, Gerome Viavant, Romain Tinturier

OUTSTANDING CHARACTER IN A PHOTOREAL FEATURE
Better Man; Robbie Williams - Milton Ramirez, Andrea Merlo, Seoungseok Charlie Kim, Eteuati Tema
Kingdom of the Planet of the Apes; Noa - Rachael Dunk, Andrei Coval, John Sore, Niels Peter Kaagaard
Kingdom of the Planet of the Apes; Raka - Seoungseok Charlie Kim, Giorgio Lafratta, Tim Teramoto, Aidan Martin
Mufasa: The Lion King; Taka - Klaus Skovbo, Valentina Rosselli, Eli De Koninck, Amelie Talarmain

OUTSTANDING CHARACTER IN AN ANIMATED FEATURE
Inside Out 2; Anxiety - Alexander Alvarado, Brianne Francisco, Amanda Wagner, Brenda Lin Zhang
The Wild Robot; Roz - Fabio Lignini, Yukinori Inagaki, Owen Demers, Hyun Huh
Thelma The Unicorn; Vic Diamond - Guillaume Arantes, Adrien Montero, Anne-Claire Leroux, Gaspard Roche
Wallace & Gromit: Vengeance Most Fowl; Gromit - Jo Fenton, Alison Evans, Andy Symanowski, Emanuel Nevado

OUTSTANDING CHARACTER IN AN EPISODE, COMMERCIAL, GAME CINEMATIC, OR REAL-TIME PROJECT
Secret Level; Armored Core: Asset Management; Mech Pilot - Zsolt Vida, Péter Krucsai, Ágnes Vona, Enric Nebleza Paella
Diablo IV: Vessel of Hatred; Neyrelle - Chris Bostjanick, James Ma, Yeon-Ho Lee, Atsushi Ikarashi
Disney; Holidays 2024; Octopus - Alex Doyle, Philippe Moine, Lewis Pickston, Andrea Lacedelli
Ronja the Robber's Daughter; Vildvittran the Queen Harpy - Nicklas Andersson, David Allan, Gustav Åhren, Niklas Wallén

OUTSTANDING ENVIRONMENT IN A PHOTOREAL FEATURE
Civil War; Washington, D.C. - Matthew Chandler, James Harmer, Robert Moore, Adrien Zeppieri
Dune: Part Two; The Arrakeen Basin - Daniel Rhein, Daniel Anton Fernandez, Marc James Austin, Christopher Anciaume
Gladiator II; Rome - Oliver Kane, Stefano Farci, John Seru, Frederick Vallee
Wicked; The Emerald City - Alan Lam, Steve Bevins, Deepali Negi, Miguel Sanchez López-Ruz

OUTSTANDING ENVIRONMENT IN AN ANIMATED FEATURE
Kung Fu Panda 4; Juniper City - Benjamin Lippert, Ryan Prestridge, Sarah Vawter, Peter Maynez
The Wild Robot; The Forest - John Wake, He Jung Park, Woojin Choi, Shane Glading
Transformers One; Iacon City - Alex Popescu, Geoffrey Lebreton, Ryan Kirby, Hussein Nabeel
Wallace & Gromit: Vengeance Most Fowl; Aqueduct - Matt Perry, Dave Alex Riddett, Matt Sanders, Howard Jones

OUTSTANDING ENVIRONMENT IN AN EPISODE, COMMERCIAL, GAME CINEMATIC, OR REAL-TIME PROJECT
Dune: Prophecy; Pilot; The Imperial Palace - Scott Coates, Sam Besseau, Vincent l'Heureux, Lourenco Abreu
Dune: Prophecy; Two Wolves; Zimia Spaceport - Nils Weisbrod, David Anastacio, Rene Borst, Ruben Valente
Shōgun; Osaka - Manuel Martinez, Phil Hannigan, Keith Malone, Francesco Corvino
The Lord of the Rings: The Rings of Power; Season 2; Doomed to Die; Eregion - Yordan Petrov, Bertrand Cabrol, Lea Desrozier, Karan Dhandha

OUTSTANDING CG CINEMATOGRAPHY
Better Man - Blair Burke, Shweta Bhatnagar, Tim Walker, Craig Young
Dune: Part Two; Arrakis - Greig Fraser, Xin Steve Guo, Sandra Murta, Ben Wiggs
House of the Dragon; Season 2; The Red Dragon and the Gold; Battle at Rook's Rest - Matt Perrin, James Thompson, Jacob Doehner, P.J. Dillon
Kingdom of the Planet of the Apes; Egg Climb - Dennis Yoo, Angelo Perrotta, Samantha Erickstad, Miae Kang

OUTSTANDING MODEL IN A PHOTOREAL OR ANIMATED PROJECT
Alien: Romulus; Renaissance Space Station - Waldemar Bartkowiak, Trevor Wide, Matt Middleton, Ben Shearman
Deadpool & Wolverine; Ant-Man Arena - Carlos Flores Gomez, Corinne Dy, Chris Byrnes, Gerald Blaise
Dune: Part Two; The Harkonnen Harvester - Andrew Hodgson, Timothy Russell, Erik Lehmann, Louie Cho
Gladiator II; The Colosseum - Oliver Kane, Marnie Pitts, Charlotte Fargier, Laurie Priest

OUTSTANDING EFFECTS SIMULATIONS IN A PHOTOREAL FEATURE
Dune: Part Two; Atomic Explosions and Wormriding - Nicholas Papworth, Sandy la Tourelle, Lisa Nolan, Christopher Phillips
Kingdom of the Planet of the Apes; Burning Village, Rapids and Floods - Alex Nowotny, Claude Schitter, Frédéric Valleur, Kevin Kelm
Twisters - Matthew Hanger, Joakim Arnesson, Laurent Kermel, Zheng Yong Oh
Venom: The Last Dance; Water, Fire & Symbiote Effects - Xavi Martin Ramirez, Oscar Dahlen, Hedi Namar, Yuri Yang

OUTSTANDING EFFECTS SIMULATIONS IN AN ANIMATED FEATURE
Kung Fu Panda 4 - Jinguang Huang, Zhao Wang, Hamid Shahsavari, Joshua LaBrot
Moana 2 - Zoran Stojanoski, Jesse Erickson, Shamintha Kalamba Arachchi, Erin V. Ramos
The Wild Robot - Derek Cheung, Michael Losure, David Chow, Nyoung Kim
Ultraman: Rising - Goncalo Cabaca, Zheng Yong Oh, Nicholas Yoon Joo Kuang, Praveen Boppana

OUTSTANDING EFFECTS SIMULATIONS IN AN EPISODE, COMMERCIAL, GAME CINEMATIC, OR REAL-TIME PROJECT
Avatar: The Last Airbender; Legends; Koizilla - Ioan Boieriu, David Stopford, Per Balay, Saysana Rintharamy
Shōgun; Broken to the Fist; Landslide - Dominic Tiedeken, Heinrich Löwe, Charles Guerton, Timmy Lundin
Star Wars: Skeleton Crew; Pilot; Spaceship Hillside Takeoff - Travis Harkleroad, Xiaolong Peng, Marcella Brown, Mickael Riciotti
The Lord of the Rings: The Rings of Power; Season 2; Shadow and Flame; Balrog Fire and Collapsing Cliff - Koenraad Hofmeester, Miguel Perez Senent, Miguel Santana Da Silva, Billy Copley
Three Body Problem; Judgement Day - Yves D'Incau, Gavin Templer, Martin Chabannes, Eloi Andaluz Full

OUTSTANDING COMPOSITING & LIGHTING IN A FEATURE
Better Man - Mark McNicholl, Gordon Spencer de Haseth, Eva Snyder, Markus Reithoffer
Dune: Part Two; Wormriding, Geidi Prime, and the Final Battle - Christopher Rickard, Francesco Dell'Anna, Paul Chapman, Ryan Wing
Kingdom of the Planet of the Apes - Joerg Bruemmer, Zachary Brake, Tim Walker, Kaustubh A. Patil
The Wild Robot - Sondra L. Verlander, Baptiste Van Opstal, Eszter Offertaler, Austin Casale

OUTSTANDING COMPOSITING & LIGHTING IN AN EPISODE
Shōgun; Broken to the Fist; Landslide - Benjamin Bernon, Douglas Roshamn, Victor Kirsch, Charlie Raud
Star Wars: Skeleton Crew; Episode 6; Jaws - Rich Grande, Tomas Lefebvre, Ian Dodman, Rey Reynolds
The Boys; Season 4; Life Among the Septics - Tristan Zerafa, Mike Stadnyckyj, Toshi Kosaka, Rajeev BR
The Penguin; After Hours - Jonas Stuckenbrock, Karen Chang, Eugene Bondar, Miky Girn

OUTSTANDING COMPOSITING & LIGHTING IN A COMMERCIAL
Virgin Media; Walrus Whizzer - Sebastian Caldwell, Alex Grey, Kanishk Chouhan, Shubham Mehta
Coca-Cola; The Heroes - Ryan Knowles, Alex Gabucci, Jack Powell, Dan Yarcigi
Corcept; Marionette - Yongchan Kim, Arman Matin, Yoon Bae, Rajesh Kaushik
Disney; Holidays 2024 - Christian Baker-Steele, Luke Warpus, Pritesh Kotian, Jack Harris

OUTSTANDING SPECIAL (PRACTICAL) EFFECTS IN A PHOTOREAL PROJECT
Blitz - Hayley Williams, David Eves, Alex Freeman, David Watson
Constellation - Martin Goeres, Johara Raukamp, Lion David Bogus, Leon Mark
The Penguin; Safe Guns - Devin Maggio, Johnny Han, Cory Candrilli, Alexandre Prodhomme

EMERGING TECHNOLOGY AWARD
Dune: Part Two; Nuke CopyCat - Ben Kent, Guillaume Gales, Mairead Grogan, Johanna Barbier
Furiosa: A Mad Max Saga; Artist-driven Machine Learning Character - John Bastian, Ben Ward, Thomas Rowntree, Robert Beveridge
Here; Neural Performance Toolset - Jo Plaete, Oriel Frigo, Tomas Koutsky, Matteo Oliviero Dancy
Mufasa: The Lion King; Real-Time Interactive Filmmaking, From Stage To Post - Callum James, James Hood, Lloyd Bishop, Bruno Pedrinha
The Penguin; Phase Synced Flash-Gun System - Johnny Han, Jefferson Han, Joseph Menafra, Michael Pynn

OUTSTANDING VISUAL EFFECTS IN A STUDENT PROJECT
Dawn (entry from ESMA École Supérieure des Métiers Artistiques) - Noah Mercier, Apolline Royer, Lorys Stora, Marie Pradeilles
Student Accomplice (entry from Brigham Young University) - Spencer Blanchard, Lisa Bird, Anson Savage, Kiara Spencer
Pittura (entry from ARTFX Schools of Digital Arts) - Lauriol Adam, Lassère Titouan, Vivenza Rémi, Marre Hellos
Courage (entry from Supinfocom Rubika) - Salomé Cognon, Margot Jacquet, Nathan Baudry, Lise Delcroix
  • Gladiator II: Mark Bakowski Overall VFX Sup. and Ridleygrams
    www.fxguide.com
The much-anticipated sequel to Ridley Scott's iconic Gladiator, Gladiator II promised to transport audiences back to the grandeur of ancient Rome, balancing epic scale with emotional storytelling via superb VFX. At the heart of this ambitious endeavor is the visual effects team, led by Mark Bakowski, Overall VFX Supervisor from Industrial Light & Magic (ILM). With a legacy of crafting unforgettable cinematic worlds, ILM took on the monumental task of recreating the visceral authenticity and historical depth that made the original film so beloved. ILM was aided by Framestore, which was also a major contributor, delivering 136 carefully crafted shots, including the armoured rhino, 12 bloodthirsty baboons and a hauntingly stylized River Styx environment.

Wildfires have torn through the Los Angeles area this week, and our hearts go out to everyone affected; the devastation has been horrendous. It has also led to numerous closures, cancellations and postponements, including the Oscar Bakeoff nominations. The traditional VFX bakeoff was originally scheduled for Jan. 11th, but the event will now be held virtually. Additionally, the announcement of nominees for the 97th Academy Awards has been delayed two days until Jan. 19, with nominations voting extended to Jan. 14. As part of our week of Bakeoff coverage, we spoke to Mark Bakowski in fire-ravaged LA, prior to him flying back to northern California.

Ridleygrams

Ridleygrams are Ridley Scott's iconic hand-drawn storyboards, which he uses as a creative tool to visualize and communicate his vision for a film. These sketches are a hallmark of Scott's filmmaking process, reflecting his background in art and design. While not overly detailed, Ridleygrams capture the mood, composition, and framing of key scenes, offering a dynamic and impressionistic blueprint for the film. (Ridleygrams from Ridley's Instagram account.)

As any VFX supervisor will testify, one of the most distinctive aspects of working with Scott is his use of Ridleygrams. "Ridley's sense of perspective and composition is incredible," said Bakowski. "I actually just got a book from him about a week ago, a leather-bound book of all his Ridleygrams from the movie; they beautifully track his vision from concept to the final shots." Scott's approach is often spontaneous, drawing inspiration from the environment. "What's amazing is how spontaneous his creativity can be," he explains. "He'll spot something on the way to set, mist on the hills in Malta, for example, and doodle on top of that to say, 'This is our Rome shot.' His Ridleygrams aren't just planning tools; they are an extension of how he thinks visually."

Scott often sketches these directly on set or during pre-production to guide the crew, especially the cinematographer, production designers, and VFX crew, to ensure everyone clearly understands his creative intent. Their simplicity and immediacy make them a practical and flexible tool for collaboration, helping bridge the gap between abstract concepts and tangible cinematic execution. Ridley's sketches naturally convey depth and perspective, and the director is deeply focused on layering contrast and composition in his imagery. He will dress a scene with incredible attention to detail, even down to positioning stunt performers based on their look and who has the most character. That depth-building extends through his multi-camera setups. "Our job is to translate his vision into something that feels both epic and authentic, ensuring that every layer of the frame contributes to the illusion of a lived-in world."

(Sidenote: There is an upcoming art & making of Gladiator II book from @abramsbooks featuring Ridleygrams, to be published in March 2025.)

Numidia Attack

Set 16 years after the death of Marcus Aurelius, Ridley Scott's Gladiator II begins with an unforgettable naval assault on the city of Numidia. This sequence perfectly encapsulates the ambitious visual effects approach of the film. Shot in the arid desert of Morocco, the scene utilized the Jerusalem set from Scott's Kingdom of Heaven, repurposed and adapted to evoke the grandeur of Numidia. To create the illusion of a Roman fleet engaging in battle, two 150-foot-long practical ships were mounted on transporter rigs by the Special Effects (SFX) team, allowing for dynamic on-set movements. These ships were later completed digitally, with the surrounding ocean and dramatic skies crafted entirely in post-production. Smoke, fire, explosions, and a hail of arrows added layers of intensity, while Ridley Scott's ability to art direct the ocean ensured the sequence met his exacting vision. (Final shot combining dry-for-wet, sky replacement and vast practical sets.)

The battle culminates in a striking scene where Lucius, the film's hero, finds himself at the mythical River Styx. This sequence, by Framestore, combined practical and digital elements seamlessly. A small beach set was constructed on a soundstage, complemented by digital environment extensions, including a mesmerizing liquid-metal mercury simulation for the river's supernatural qualities. When Lucius awakens on a practical beach shot in Numidia, the wider and reverse angles relied on footage captured in Malta. These disparate locations were unified in post with digital dressing and seamless transitions between the varied environments.

Colosseum

Rome itself became a character in the film, its monumental architecture and vibrant streets brought to life through a combination of practical sets and digital augmentation. The lower sections of the Roman sets were built practically in Malta, while expansive vistas and upper architectural elements were added digitally by ILM. Despite the grandeur of the Roman scenes, extras on set never exceeded 500, necessitating extensive VFX crowd work. The Colosseum, a centerpiece of the original Gladiator, was revisited multiple times throughout the 1,130 visual effects shots in the new film, blending practical builds with complex digital enhancements to recreate its epic majesty.

"Ridley doesn't like to overcommit to a plan too far in advance," Bakowski explained. "He thrives on flexibility, shooting with multiple cameras and finding the moment as it happens. For us, it meant being ready to pivot at any time." A prime example of this adaptability came during the sequence depicting a naval battle in the Colosseum. Initially planned as a mix of wet and dry shooting, the sequence was disrupted by the actors' strike, leaving the team to complete it entirely dry on set. By working closely with the stunts team and leveraging post-production VFX, ILM seamlessly transformed the footage into a believable, high-stakes naval battle.

As a result, this battle became one of the most technically demanding sequences in the Colosseum. Shot predominantly on dry land, it required extensive CG water, mast and sail extensions, and the integration of long, uninterrupted takes captured from up to 12 cameras. The production's limited use of blue screens presented unique challenges, requiring heroic feats of paint, rotoscoping, and tracking to integrate the live-action footage with CG elements seamlessly. This approach had both advantages and drawbacks, offering more natural on-set lighting and interaction but demanding a Herculean effort from the paint and roto team.

"Ridley Scott has a natural sense of perspective and depth," comments Bakowski. "He understands the depth of a scene. He often jokingly talks about how he invented smoke in movies, in terms of depth layering and so on." As the director loves to build contrast and composition into the way he constructs his images, he will individually dress each of his multiple cameras to the frame when shooting. "Often he'll shoot with eight cameras as a regular thing, maybe 10, or 12. He will dress each camera by the foreground, saying, 'Okay, I want that stunt man who's got the bigger muscles, that guy has got the nice big nose, he's got character,' et cetera, and he'll build his scene back, in depth, into the frame."

Creatures

In addition to large-scale environments, the film also featured an array of CG creatures, from sharks and tigers to birds, horses, and even a rhino. One of the standout achievements was the creation of a full-sized white rhino rig, mounted on wheels and radio-controlled for practical interaction and lighting reference. VFX artists then replaced the rig with a fully digital rhino, whose detailed musculature, thick layers of fat, and sliding skin conveyed a sense of weight and momentum. Similarly, Ridley Scott's fascination with a nature documentary featuring a baboon with alopecia inspired the creation of a unique, digitally crafted baboon that became a pivotal character in the story.

Final Army Confrontation

The film's climactic finale, set on the outskirts of Rome, showcased the VFX team's ability to unify diverse elements. Split between locations in Malta and the UK, the sequence involved digital extensions to blend the two settings, the addition of armies on both sides of the conflict, and a dramatic horse crash that required intricate CG work. As the culmination of the film's 1,130 VFX shots, the finale exemplified the marriage of practical and digital techniques that defined Gladiator II. Through the collaboration of Ridley Scott, Mark Bakowski, and the VFX teams, the film not only honored the legacy of its predecessor but also elevated the art of visual effects in historical epics.
  • The VFX of Alex Garlands Civil War
    www.fxguide.com
Civil War takes audiences into a fractured America, exploring themes of division and unity against a reimagined Washington, D.C. The VFX approach focused on photorealism and narrative integration. Framestore's David Simpson, as Production VFX Supervisor, collaborated closely with director Alex Garland, DOP Rob Hardy, and Production Designer Caty Maxey to achieve the film's gritty, documentary-style aesthetic. The challenge went beyond the scale of the VFX; it was about crafting a city that felt alive and in conflict. Framestore's ability to push technical boundaries while preserving emotional depth makes Civil War a standout in this year's VFX-driven films.

"Everything had to feel grounded and lived in," says Simpson. "Anything remotely Hollywood was immediately kicked into touch, and we focused instead on news and documentary footage. It made the research and reference-gathering stage pretty heavy going at times, but the aim was to create a realistic and utterly unsanitized view of war unfolding." Framestore's key aim was to augment and extend the real-life action while enhancing those aspects that would not be safe or practical to accomplish on set or out on location. "The first rule was that, wherever possible, everything should be captured in camera," says Simpson.

Framestore digitally built Washington, D.C. The visual effects for Civil War were a monumental achievement, delivered by a dedicated team of 175 artists at Framestore over the course of 406 days. The scope of their work was staggering, involving the meticulous digital recreation of Washington, D.C., for the film's intricate and immersive storytelling.

Washington, D.C.: street by street

Garland, Simpson and Hardy conducted recces of Washington, D.C. to gauge the city and inform plans to find locations in Atlanta that might double for the US capital. With no suitable stand-ins found, partial sets were constructed and the VFX team was charged with building out the environments. "In those scenes involving close-quarters clashes, you're seeing a blend of CG and practical sets," says Simpson. "Everything above the ground floor of those buildings has been built in CG."

The team brought to life 13 miles of the city, complete with 75 iconic landmarks, such as the Capitol and the Washington Monument, to provide an authentic and recognizable backdrop. This ambitious endeavor required not only a deep understanding of the city's geography but also an exceptional level of detail to ensure the audience felt fully immersed in the environment. In addition to its landmarks, the digital cityscape included 887 miles of road, 3,736 buildings, and a host of smaller yet essential details. The artists modelled 64,191 street lamps, 1,160 traffic lights, and 17,851 trees to populate the urban environment, making it vibrant and believable. The interiors were just as meticulously crafted, with over half a million pieces of office furniture (519,066 to be precise) designed to add depth to the film's narrative settings. This level of detail exemplifies Framestore's commitment to crafting visual effects that serve the story while pushing the boundaries of what is possible in digital artistry. The sheer scale and precision of their work set a new benchmark in VFX, showcasing the power of technology and creativity to transport audiences into a meticulously crafted world.

In keeping with the brief to create a living, breathing, immersive world for Civil War's action, the Framestore team went into incredible detail on these environment builds. "I was walking home through London one evening, and the city's sheer lack of uniformity struck me," says Simpson. "You notice how the offices are lit differently, the fractional changes in design from road to road and individual architectural quirks that break up any given street scene. This is what we sought to tap into: the fractional changes in lighting, building out the interiors of the offices so they weren't just hollow CG shells but dotted with desk clutter, pot plants and air conditioning units. Your eye doesn't necessarily register all this detail, but your mind does, and it heightens both the believability of the VFX work and the storytelling itself, since you get a sense of the city's sudden, Mary Celeste-esque abandonment."

War on the streets

The success of these showcase moments was such that the VFX work expanded ambitiously outwards to the point where Framestore had effectively built a fully CG Washington, D.C. The city was then dotted with gunfire, digital crowds, flames and columns of smoke to highlight the skirmishes and clashes happening at points around the city. This work helped give a sense of scope and expanse, amplifying the story by highlighting how the drama extended far beyond what was happening to the film's main cast. Rendering the whole city in CG also had some more practical creative benefits, as Simpson explains: "Building Washington ourselves meant we could achieve shots that simply wouldn't be possible or practical in real life," he says. "The US Secret Service are obviously somewhat cagey about people flying helicopters close to the White House, for example; if we wanted to do that, we'd need an armed agent onboard, and there's no way we could have gotten as close as we did. There's also the issue of the Lincoln Memorial, which, for practical and ethical reasons, we couldn't blow up!"

If the photoreal cityscapes and internal design of office blocks added to the worldbuilding, so, too, did the finer details. "Everything was based on real-life footage," explains Simpson. "We had de facto casting sessions for our explosions, using newsreels, citizen journalism and footage from weapons tests. We also needed to know what kind of damage these explosions would do, to streets, to buildings and, most harrowingly, to people. The aim was to strip away any sense of showiness, rendering the reality of war in all its stark, horrifying detail."

Debuting at SXSW, Civil War was hailed as one of the year's most startling and visceral films. Framestore was an early collaborator, identifying and deploying the best techniques, technologies and methodologies to achieve director Alex Garland's distinctive and uncompromising vision, and delivering some 1,000 VFX shots via its studios in London and Mumbai.
  • MPC: Crafting Mufasa: The Lion King.
    www.fxguide.com
As part of our ongoing coverage of films in the lead-up to the Oscar Bake-off on January 11, we highlight the VFX of Barry Jenkins' Lion King prequel Mufasa: The Lion King. MPC, a Technicolor Group company, has a long history of bringing to life unforgettable characters and crafting classic worlds, balancing animation with VFX. From The Jungle Book to Maleficent and The Lion King, MPC has helped contribute to many new chapters in VFX and cinematic expression. This year, MPC's team brought Disney's African savannah to life in Mufasa: The Lion King. The film was directed by Barry Jenkins, the Production VFX Supervisor was Adam Valdez, and the Animation Supervisor was Daniel Fotheringham. MPC's VFX Supervisor was Audrey Ferrara, and the VFX Producer was Georgie Duncan. A team of over 1,700 artists, production crew, and technologists at MPC crafted the VFX for Mufasa: The Lion King.

"To create a blueprint for our final animation, our motion capture shoot involved multiple performers playing Mufasa, Sarabi, Taka, Rafiki and even Zazu," says Production VFX Supervisor Adam Valdez. "They wore motion capture suits, and through a process we call QuadCap, their movements were mapped onto digital lion characters. This technology aligned the performer's head and spine movements to the lion's head and neck, their legs to the lion's front legs, and simulated the lion's back legs and hips to follow. Barry could then watch a live feed of the lions on screen through Unreal Engine, allowing him to give real-time performance and camera notes to the actors and DP." Within Unreal Engine, a total of 12,680 on-stage takes were shot using the VCam and motion capture systems, all of which produced both a recorded and a rendered artifact. If you were to play all captured takes one after the other, it would take just under 200 hours, or just over 8 days, to watch them all. A total of 7,399 live motion capture and QuadCap performances were captured throughout the shoot.

"We added facial expressions and lip sync to enhance the characters' performances," says Animation Supervisor Dan Fotheringham, "then Barry could walk through the recorded sequence in VR making notes before we could wrap it up to be shot virtually. Performances with complex animal mechanics, like fighting or leaping, were then integrated into the QuadCap performances and into the virtual sets, creating a complete master scene that Barry and James could shoot from any angle." This innovative blending of live-action filmmaking techniques and visual effects became a defining feature of the entire production.

Building Africa's Wild Majesty

To authentically recreate the African savannah, MPC's Environment team embarked on scouting trips across the continent, gathering invaluable references for the film's flora, fauna, and landscapes. Armed with photogrammetry scans and painstaking hand-sculpting techniques, the team meticulously built the world of Mufasa, crafting plains, canyons, and forests down to the finest detail, including individual rocks and blades of grass. A crucial step in the process was optimizing the rendering of the vast environments; to handle the scale of these huge environments, the team had to develop new tools to increase efficiency. The landscapes became integral to telling Mufasa's journey, serving as much a geographic as a spiritual backdrop. The result is a sprawling digital recreation of Africa spanning 107 square miles, the size of Salt Lake City, Utah. Custom scattering tools were used to populate rivers, trees, and plants in ways that mimicked the complexity of natural ecosystems. These breathtaking environments serve as both a stage and a character, guiding Mufasa on his epic journey.

Breathing Life Into the Animal Kingdom

MPC's acclaimed Character Lab took the lead in bringing over 118 unique animals to life, including beloved characters like Mufasa, Scar, Pumbaa, Timon, and Rafiki. Each character was constructed from the ground up, beginning with an anatomically accurate bone and muscle structure. From there, the team used their proprietary grooming system, Loma, to create lifelike fur with unparalleled detail and realism. "The challenge with fur is not just making it look realistic, but ensuring it moves and reacts naturally," says MPC's Character Supervisor. This was achieved through a combination of advanced simulation techniques and meticulous fine-tuning. Each lion had over 30,000,000 hairs to achieve the realistic look of fur. Mufasa's mane alone had 16,995,454 hair curves; Mufasa has 600,000 hairs on his ears, 6.2 million hairs on his legs, and 9 million hairs covering the middle portion of his body. Simulating realistic lion manes for the assembled lions required 40,000 to 80,000 dynamic curves per character, with custom presets for different weather and physical conditions. Long shots took up to a week per iteration, with final fur caches exceeding 800 GB.

To capture the emotional subtleties of the characters, animators studied hours of real animal footage. Muscle movement, posture, and even subtle shifts in expression were analyzed and replicated to ensure authenticity. Animators often acted out expressions themselves, using their performances to infuse each character with genuine emotion. This hybrid of research and personal creativity resulted in characters that feel as lifelike and emotionally resonant as their real-world counterparts. One shot of Rafiki making a snow angel required simulating over 620 million snow particles.

Dynamic Elements: Simulating Nature's Forces

MPC's FX team brought the dynamic forces of nature into the mix, simulating wind, rain, snow, and fire to create a fully immersive world. "The FX work was key to grounding the characters in their environment," says Ferrara. "Whether it was the golden light filtering through the savannah or a storm rolling across the plains, every element was designed to integrate into the photoreal aesthetic." CG lighters and compositors worked meticulously to perfect the film's lighting and mood, ensuring every shot radiated authenticity.

Pushing Boundaries in Animation and VFX

A team of up to 88 artists across London, Montreal, and Bengaluru brought to life 77 digital sets, including the iconic Pride Rock and the new locations Mountains of the Moon and the Tree of Life. Artists built a library of 5,790 assets, including trees, plants, and grass species, all of which feature in the film. The storage requirements for the film reached 25 petabytes, the equivalent of roughly 5.6 million DVDs. Rendering the film in final quality took 150 million hours; it would take a single computer 17,123 years to complete. The fully digital world brims with depth, vivid detail, and a cinematic scale that does justice to the legacy of the original film. From its sweeping landscapes to its intricately animated characters, MPC's work on Mufasa: The Lion King represents another achievement in visual 3D animated storytelling.
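As a quick, purely illustrative sanity check on the render and storage figures quoted above (assuming decimal petabytes and a 4.7 GB single-layer DVD, which are our assumptions, not MPC's):

```python
# Back-of-the-envelope check of the Mufasa figures quoted above.
HOURS_PER_YEAR = 24 * 365.25                  # one machine running non-stop

render_hours = 150_000_000                    # "150 million hours" of final-quality rendering
print(render_hours / HOURS_PER_YEAR)          # ~17,100 years, in line with the quoted 17,123

storage_gb = 25 * 1_000_000                   # 25 PB expressed in gigabytes (decimal units)
dvd_gb = 4.7                                  # assumed single-layer DVD capacity
print(storage_gb / dvd_gb)                    # ~5.3 million DVDs, the same ballpark as "5.6 million"
```

The small gaps come only from rounding and from whichever DVD capacity and petabyte convention the original figures used.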
  • VFXShow 291: Better Man
    www.fxguide.com
In this episode, we dive into the extraordinary VFX of Better Man. Also check out our interview with the film's director, Michael Gracey (Director Michael Gracey & making Better Man), and our interview with the team at Wētā FX (fxpodcast: Better Man with Wētā FX). Don't forget to subscribe to both the VFXShow and the fxpodcast to get both of our most popular podcasts.

This week in our Boy Band lineup:
Matt Wallin * @mattwallin www.mattwallin.com. Follow Matt on Mastodon: @[emailprotected]
Jason Diamond @jasondiamond www.thediamondbros.com
Mike Seymour @mikeseymour. www.fxguide.com. + @mikeseymour

Special thanks to Matt Wallin for the editing & production of the show, with help from Jim Shen.
  • Framestore gives Deadpool the Finger(s)
    www.fxguide.com
The merging of two iconic characters, Deadpool and Wolverine, into a single film created one of the most anticipated cinematic events of 2024. Fans of the X-Men universe were eager to see how the sardonic, irreverent Wade Wilson (Deadpool) would collide, both literally and metaphorically, with Logan's gruff, brooding Wolverine. Bringing this explosive dynamic to life required a complex level of artistry, and at the forefront of this task was the visual effects powerhouse Framestore.

No strangers to either the MCU or the Merc with a Mouth, Framestore's creatives brought skill, flair and high-octane action to this hotly anticipated box office smash. Work included the ambitious oner in which Deadpool and Wolverine cut a bloody swathe through the rampaging Deadpool Corps; Cassandra's skin-crawling telekinetic powers; and CG augmentation to the beloved Dogpool. In addition to final VFX and animation, Framestore Pre-Production Services (FPS) delivered the entirety of the film's previs work, led by Visualisation Supervisor Kaya Jabar.

Framestore provided pre-production services (previz, on-set support, techviz, and postviz) through to final VFX. Overseen by VFX Supervisors Matt Twyford (Loki), Robert Allman, Arek Komorowski, and João Sita, the VFX team delivered 420 supremely complex shots, building on the groundwork laid by Framestore's Pre-Production team (FPS), led by Senior Visualisation Supervisor Kaya Jabar. The FPS team was the film's sole pre-production services collaborator, delivering 900+ shots. The VFX team contributed to several key sequences, including the brutally funny opening title and the gripping third act. This involved executing the cameo-filled Deadpool Corps and oner sequences and orchestrating the climactic destruction of the Time Ripper machine, animating Dogpool's performance, creating dozens of digi-doubles, building an expansive urban environment, and delivering a plethora of VFX, ranging from a hand passing through Paradox's head to gallons of blood simulations, plasma energy, and Framestore's signature spaghettification FX.

Deadpool & Wolverine was directed by Shawn Levy from a screenplay he wrote with Ryan Reynolds, Rhett Reese, Paul Wernick, and Zeb Wells. The Production Visual Effects Supervisor was Swen Gillberg, and the Visual Effects Producer was Lisa Marra. As part of our ongoing coverage of films in the lead-up to the Oscar Bake-off on January 11, we spoke with Framestore's London Visual Effects Supervisor Matt Twyford.

In our fxguide breakdown, Matt Twyford dived into two of the film's most standout VFX moments: the surreal "fingers through the face" sequence and the scene-stealing CGI enhancements for Peggy, the film's ugliest-yet-most-adorable dog. Both moments showcased Framestore's ability to merge technical precision with artistic flair, creating visuals that were not only technically complex but also deeply connected to the film's story and characters.

The Fingers Through the Face Sequence

In a VFX shot that felt utterly unprecedented, the scene where Cassandra's hand penetrates a character's face was a defining moment in Deadpool & Wolverine. According to Framestore's VFX Supervisor Matt Twyford, the sequence was always envisioned as a visceral, photorealistic effect. "Our only reference was a single comic book frame," Twyford explained. "It was just enough to give us the creative direction we needed: fingers intersecting through the skin but not the bone, interacting with facial orifices like the mouth and eye sockets. The challenge was making this look completely real on a fully performing actor."

The team began by building an ultra-high-resolution digital double of the actor using a Clear Angle scan rig, capturing every detail down to individual pores. "The system has over a hundred lights instead of just a single flash, so you've now got controlled light direction," Twyford explains. "It's 180 frames, each with a different light angle and light burst, which allows you to get all the normals and extract the normal maps from the skin down to pore level." Twyford commented that it was very easy to work with Clear Angle Studios. "We've got a close link with them. They fed back to us, we fed back to them, but the quality of the material was stunning. Our first models were just absolutely amazing for the faces."

But the real magic lay in animating the interaction between the hand and the facial skin. Framestore decided to move much of the skin distortion work from simulation into animation, allowing the team to preview results earlier in the production process and ensure client confidence. "We didn't want to wait until the back end to show results," Twyford noted. "By animating the skin distortion early on, we could present rough renders to the client quickly, gaining approval on the gross motion before adding finer details like tension maps and subsurface effects."

The final result? A mesmerizing blend of realism and stylization, where skin stretched and reacted with unsettling believability. Subtle details, like hemoglobin redistribution based on skin tension, added to the illusion. "It was a balance between anatomical accuracy and Marvel-esque visual logic," Twyford said. "The goal was to make it visually stunning without breaking the audience's immersion."

Peggy: The Scene-Stealing Dogpool Canine

On the lighter side of the film, Peggy, a real-life dog and proud titleholder of the UK's ugliest dog, stole the show. Framestore worked extensively with Peggy on set, capturing her unique look and mannerisms, including her perpetually hanging tongue, to ensure authenticity in her digital double. "Peggy performed brilliantly on set," Twyford said. "She did the stunts, the jumps, she worked with blue screens and even handled pyro effects like a pro. She was fantastic." While 90% of Dogpool was live-action, for certain sequences, such as adding goggles and enhancing her expressions, the team relied on her digital counterpart.

The VFX team designed and rendered Dogpool's goggles late in production, adding a playful yet functional element to her character. Using Framestore's proprietary renderer, Freak, they achieved realistic fur interactions and magnified reflections in the goggles, all while maintaining Peggy's signature look. "The goggle idea actually only came really quite late on, probably less than two months before delivery," he recalls. "There was a little discussion about, let's magnify her eyes, let's have that slightly googly eyeglasses effect. And because we are using the physics-based renderer, we had to do that using the thickness of the glass and the refractive index, which actually proved to be, as you can imagine, very difficult to control, but it is all through the renderer." "The goggles allowed us to amplify her personality," Twyford explained. Framestore kept her eyes true to her natural appearance but gave them life with subtle animations, letting her emotionally connect with the audience in key moments.
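The scan-rig capture Twyford describes above (many calibrated lights, one frame per light direction, normals recovered per pixel) is, in spirit, classic photometric stereo. As a rough, hypothetical illustration of that idea only, and not Clear Angle's or Framestore's actual pipeline, a minimal Lambertian photometric-stereo solve looks like this (the function name and toy setup are ours):

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Recover per-pixel surface normals from frames lit from known directions.

    images:     (k, h, w) float array, one grayscale frame per light direction
    light_dirs: (k, 3) float array of unit light-direction vectors
    Returns (normals, albedo) with shapes (h, w, 3) and (h, w).
    """
    k, h, w = images.shape
    I = images.reshape(k, -1)                              # stack pixels: (k, h*w)
    # Lambertian model: I = L @ (albedo * normal); solve by least squares per pixel
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)     # (3, h*w)
    albedo = np.linalg.norm(G, axis=0)
    normals = (G / np.maximum(albedo, 1e-8)).T.reshape(h, w, 3)
    return normals, albedo.reshape(h, w)

# Toy usage: three or more differently lit frames of a static subject are enough
# to solve for a normal map; a production rig uses on the order of a hundred
# calibrated flashes to push that recovery down to pore-level detail.
```

A real facial-capture rig adds calibration, polarization and specular handling on top of this, but the basic recovery of per-pixel normals from differently lit frames follows the same pattern.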
Crafting Iconic Moments

For Framestore, Deadpool & Wolverine was a perfect platform to showcase the team's ability to blend creativity and cutting-edge technology. While the film was filled with massive set pieces and intricate effects, it was the smaller, character-driven moments, like Cassandra's surreal power display and Peggy's charming antics, that stood out. As Twyford put it, "What Framestore loves doing, what we love doing, is the different stuff. That's what we put ourselves forward for. And because that's what our artists love, they all demand the tricky stuff and the hard stuff. In the end, it was difficult and hard and very complex, but it was also very Marvel!"

Framestore actually has four films shortlisted for the 2025 Best Visual Effects Oscar: Wicked, Civil War, Deadpool & Wolverine and Gladiator II. Showcasing everything from world-dominating fantasy, epic character animation and blockbuster superhero action to gritty, documentary-style realism, these projects highlight the breadth of Framestore's creative and technological skills. Wicked, Deadpool & Wolverine, Civil War and Gladiator II will all be presented at the bake-off stage on January 11, 2025. The five final nominations will be announced on January 17. Framestore's upcoming slate of projects includes How To Train Your Dragon, Wicked pt. 2, The Wheel Of Time s3, Bridget Jones: Mad About The Boy, The Gorge, F1 and Three Bags Full.
  • Dune Part Two: Paul Lambert, Visual Effects Supervisor
    www.fxguide.com
As part of our ongoing coverage of films in the lead-up to the Oscar Bake-off on January 11, 2025, we talk to Paul Lambert, the film's Oscar-winning Visual Effects Supervisor, about the incredible epic masterpiece Dune: Part Two.

Dune: Part Two: Following the destruction of House Atreides by House Harkonnen, Paul Atreides unites with Chani and the Fremen while seeking revenge against the conspirators who destroyed his family. Facing a choice between the love of his life and the fate of the universe, he must prevent a terrible future only he can foresee.

Director Denis Villeneuve with Timothée Chalamet on set. Director Denis Villeneuve with DOP Greig Fraser.

Dune: Part Two was directed and produced by Denis Villeneuve, who co-wrote the screenplay with Jon Spaihts. Paul Lambert was the production VFX Supervisor, and he joins the fxpodcast this week to discuss the film in depth.

Director Denis Villeneuve with Rebecca Ferguson on set.

DNEG led the visual effects, along with several other companies such as Rodeo FX, Territory Studio, Wylie Co. and ReDefine. Australian cinematographer Greig Fraser was the DOP. As Paul discusses in the fxpodcast, the film builds on Dune: Part One, which garnered ten nominations at the 94th Academy Awards, including Best Picture, and went on to win a leading six awards: Best Original Score, Best Sound, Best Cinematography, Best Production Design, Best Film Editing, and of course Best Visual Effects.

As we discuss, DOP Greig Fraser shot the grand fight that occurs early in the film in black-and-white infrared, using a modified Alexa LF camera, and the VFX were then matched to that.
  • Part 2: Better Man: Wētā FX
    www.fxguide.com
In the second part of our Better Man coverage, we speak with Wētā FX's VFX supervisor, Luke Millar, and animation supervisor, Dave Clayton. This is part of our ongoing coverage of films in the lead-up to the Oscar Bake-off on January 11, 2025. The five final nominations will be announced on January 17.

Original on-set footage. The transition from on set to final. The finished frame.

Better Man is an independent film about the life of UK rock star Robbie Williams. Wētā was not only responsible for the incredible animation of Robbie as a monkey throughout the film but, as Director Michael Gracey explained in last week's Part One fxpodcast, they were key to getting the film made and greenlit in the first place.

Creature pass with the digital groom. Robbie Williams' actual eyes were scanned and digitally combined. The final render and comp of the closing shot of the film.

One of the most complex scenes was the 500-person Regent St Rock DJ dance sequence, which nearly did not happen, as the team explains in this episode of the fxpodcast.

As shot on the actual streets of London. Digital Robbie. The final shot.

Plus: listen to the fxpodcast to hear how Wētā FX's VFX supervisor, Luke Millar, ended up as a digital extra 767 times in the film!
  • Director Michael Gracey & Making Better Man
    www.fxguide.com
Better Man is a 2024 bio-musical film co-written, produced and directed by Michael Gracey, based on the life of British pop singer Robbie Williams. We spoke with Michael to discuss the film, its complexity and challenges, and how he approached this innovative and powerful independent film. In this episode we discuss the Regent Street sequence; the clip above is just the start of that sequence, and we have an even more detailed breakdown coming later this week, when we will post our fxpodcast interview with Wētā FX's VFX supervisor, Luke Millar, and animation supervisor, Dave Clayton. Better Man is part of our VFX Bakeoff series in the lead-up to the Oscar nomination voting in January.
  • Pablo Helman breaks down Wicked
    www.fxguide.com
Dive behind the emerald curtain in our latest fxpodcast episode as we explore the VFX and making of Wicked (Part 1). We talk with renowned visual effects supervisor Pablo Helman, whose groundbreaking work brings the enchanting world of Oz to life. From reimagining iconic characters to creating dazzling, immersive landscapes, Helman reveals the creative challenges and innovative techniques that transformed this beloved musical into a cinematic spectacle. Pablo was the visual effects supervisor; the lead visual effects house was ILM, along with Framestore. In this episode, he discusses his approach and the complexity of doing a musical, and gives an insight into the filmmaking process of director Jon M. Chu.
  • VFXShow 290: Wicked
    www.fxguide.com
In this episode, we dive into the extraordinary visual effects of Universal Pictures' Wicked, directed by Jon M. Chu. As a prequel to The Wizard of Oz, the film not only tells the origin story of Elphaba and Glinda, two witches whose lives take drastically different paths, but also transports audiences to a vividly reimagined Oz.

Join us as we discuss the VFX that brings Oz's animals, spellbinding magic, and the stunning world of Shiz University to life. We break down the challenges of creating the film's effects for pivotal scenes, such as the gravity-defying flight sequences and the Wizard's grand, mechanical illusions. Plus, we discuss how the visual effects team balanced fantastical elements with the emotional core of the story, ensuring the magic of Oz feels both epic and personal. In this VFXShow, we discuss and review the film, but later this week we will also post our interview with Wicked's VFX supervisor, Pablo Helman. Don't forget to subscribe to both the VFXShow and the fxpodcast to get both of our most popular podcasts.

This week in Oz:
Matt "Lion" Wallin * @mattwallin www.mattwallin.com. Follow Matt on Mastodon: @[emailprotected]
Jason "Scarecrow" Diamond @jasondiamond www.thediamondbros.com
Mike "Tin Man" Seymour @mikeseymour. www.fxguide.com. + @mikeseymour

Special thanks to Matt Wallin for the editing & production of the show, with help from Jim Shen.
  • The issue of training data: with Grant Farhall, Getty Images
    www.fxguide.com
    AsChief Product Officer, Grant Farhall is responsible for Getty Images overall product strategy and vision. We sat down with Grant to discuss the issue of training data, rights and Getty Images strong approach.Training dataArtificial Intelligence, specifically the subset of generative AI, has captured the imagination and attention of all aspects of media and entertainment. Recent rapid advances seem to humanize AI in a way that has caught the imagination of so many people. It has been born from the nexus of new machine learning approaches, foundational models, and, in particular, advanced GPU-accelerated computing, all combined with impressive advances in neural networks and data science.One aspect of generative AI that is often too quickly passed over is the nature and quality of training data. It can sometimes be wrongly assumed that in every instance, more data is good any data, just more of it. Actually, there is a real skill in curating training data.Owning your ownGenerative AI is not limited to just large corporations or research labs. It is possible to build on a foundation model and customize it for your application without having your own massive AI factory or data centre.It is also possible to create a Generative AI model that works only on your own training data. Getty Images does exactly this with its iStock, Getty Creative Images, and API stock libraries. These models are trained on only the high-quality images approved for use, using NVIDIAs Edify NIM built on Picasso.NVIDIA developed the underlying architecture. Gettys model is not a fine-tuned version of a foundational model. It is only trained on our content, so it is a foundational model in and of itself Grant Farhall, Getty ImagesGetty produces a mix of women when prompted with Woman CEO, closeup.BiasPeople sometimes speak of biases in training data, and this is a real issue, but data scientists also know that carefully curating training data is an important skill. This is not an issue of manipulating data but rather providing the right balance in the training data to produce the most accurate results. As part of the curation process is getting enough data of the types needed and often with metadata that helps the deep learning algorithms.Particularly the nature of what data exists in the world already and the qualities of that data that can be used to make the most effective generative AI tool. At first glance, one might assume you just want the greatest amount of ground truth or perfect examples possible, but that is not how things actually work in practice.It is also key that the output responses to prompts provide a fair and equitable mix, especially when dealing with people. Stereotypes can be reinforced without attention to output bias.ProvenanceIt is important to know if the data used to build the generative AI model was licensed and approved for this use. Many early academic research efforts scraped the Internet for data since their work was non-commercial and experimental. We have since come a long way in understanding, respecting, and protecting the rights of artists and people in general, and we have to protect their work from being used without permission. As you can hear in this espisode of the podcast, companies such as Getty Images pride themselves on having clean and ethically sourced generative AI models that are free from compromise and artist exploitation. 
Provenance

It is important to know whether the data used to build a generative AI model was licensed and approved for that use. Many early academic research efforts scraped the Internet for data since their work was non-commercial and experimental. We have since come a long way in understanding, respecting, and protecting the rights of artists and people in general, and their work has to be protected from being used without permission. As you can hear in this episode of the podcast, companies such as Getty Images pride themselves on having clean and ethically sourced generative AI models that are free from compromise and artist exploitation. In fact, they offer not only compensation for artists whose work is used as training data but also guarantees, and in some cases indemnification, against any possible future issues over artists' rights.

The question that is often asked is, "Can I use these images from your AI generator in a commercial way, in a commercial setting? Most services will say yes," says Grant Farhall of Getty Images. "The better question is, can I use these images commercially, and what level of legal protection are you offering me if I do?" As Getty knows the provenance of every image used to train their model, their corporate customers enjoy fully uncapped legal indemnification. Furthermore, misuse is impossible if the content is not in the training model. Farhall points out, "There are no pictures of Taylor Swift, Travis Kelce, athletes, musicians, logos, brands, or any similar stuff. None of that is included in the training set, so it can't be inappropriately generated."

AI Generator image

Rights & Copyright

For centuries, artists have consciously or subconsciously drawn inspiration from one another to influence their work. However, with the rise of generative AI, it is crucial to respect the rights associated with the use of creative materials. A common issue and concern is copyright, and this is an important area, but it is one open to further clarification and interpretation as governments around the world respond to this new technology. As it stands, only a person can own copyright; it is not possible for a non-human to own copyright. It is unclear worldwide how open the law is to training on material without explicit permission, as generative AI models do not store a copy of the original.

However, it is illegal in most contexts to pass off material in a way that misrepresents it, such as implying or stating that the work was created by someone who did not create it. It is also illegal to use the likeness of someone to sell or promote something without their permission, regardless of how that image was created. The laws in each country or territory need to be clarified but, as a rule of thumb, generative AI should be restricted by an extension of existing laws such as defamation, exploitation, and privacy rights. These laws can come into play if the AI-generated content is harmful or infringes on someone's rights. In addition, there are ongoing discussions about the need for new laws or regulations specifically addressing the unique issues raised by AI, such as the question of who can be held responsible for violations using AI-generated content. It is important to note that just because a generative piece of art or music is stated as being approved for commercial use, that does not imply that the training data used to build the model was licensed and that all contributing artists were respected appropriately.

Generative AI

This fxpodcast is not sponsored, but is based on research done for the new Field Guide to Generative AI. fxguide's Mike Seymour was commissioned by NVIDIA to unpack the impact of generative AI on the media and entertainment industries, offering practical applications, ethical considerations, and a roadmap for the future. The Field Guide is free and can be downloaded here: Field Guide to Generative AI. In M&E, generative AI has proven itself a powerful tool for boosting productivity and creative exploration. But it is not a magic button that does everything. It's a companion, not a replacement. AI lacks the empathy, cultural intuition, and nuanced understanding of a story's uniqueness that only humans bring to the table.
But when generative AI is paired with VFX artists and TDs, it can accelerate pipelines and unlock new creative opportunities.
    0 Comments ·0 Shares ·78 Views
  • Hands-on with the URSA Cine 12K LF + URSA Cine Immersive
    www.fxguide.com
    Ben Allan ACS CSI, test drives and reviews the URSA Cine 12K LF, with Oscar winner Bruce Beresford and also with our own fxcrew.Blackmagic Design refers to their much anticipated high-end camera as the first cine camera design. This camera is also the basis for the new Blackmagic URSA Cine Immersive, which the company announced today is now available to pre-order from Blackmagic Design.URSA Cine Immersive cameraThe new Blackmagic URSA Cine Immersive camera will be the worlds first commercial camera system designed to capture Apple Immersive Video for Apple Vision Pro (AVP), with deliveries starting in early 2025. DaVinci Resolve Studio will also be updated to support editing Apple Immersive Video early next year, offering professional filmmakers a comprehensive workflow for producing Apple Immersive Video for Apple Vision Pro. Rumours have been incorrectly claiming Apple is moving away from the AVP, this is clearly not the case. The AVP: Immersive Video format is a remarkable 180-degree media format that leverages ultra-high-resolution immersive video and Spatial Audio to place viewers in the center of the action.Blackmagic URSA Cine Immersive will feature a fixed, custom lens system pre-installedThe Blackmagic URSA Cine Immersive will feature a fixed, custom lens system pre-installed on the body, which is explicitly designed to capture Apple Immersive Video for AVP. The sensor can deliver 8160 x 7200 resolution per eye with pixel level synchronisation and an impressive 16 stops of dynamic range. Cinematographers will be able to shoot 90fps 3D immersive cinema content to a single file. The custom lens system is designed for URSA Cines large format image sensor with extremely accurate positional data thats read and stored at time of manufacturing. This immersive lens projection data which is calibrated and stored on device then travels through post production in the Blackmagic RAW file itself.Hands-on with Ben Allan, ACS CSI.Ben tested one of the first URSA Cine 12K LF camera.URSA Cine 12K LF : Cinema or Cine?BMDs has used the terms cinema and digital film camera for most of the cameras they have made. This a reflection of the fact that when they first started to produce cameras, they already had the prestige post-production software DaVinci Resolve in their stable, so a big part of the equation in starting to produce their own cameras was to fill the gap of an affordable, small camera which could produce images which were suitable for colour grading with the powerful tools already available in Resolve.Their original Cinema Camera was introduced in 2012 with an unconventional design and, crucially, recording very high-quality files in either ProRes or Cinema DNG RAW. It is probably hard to put this in the proper perspective now, but at the time, it sparked a little revolution in the industry where high-end recording formats were tightly tied to the most expensive cameras. Only a few years earlier, RED had started a similar revolution at a time when Super 35mm sized single sensors which could be used with existing cinema lenses, was the province of the most expensive digital cameras from Sony and Panavision. 
At the same time, RAW recording for moving pictures was essentially a pipe-dream.By showing that these things could be delivered to the market in a working camera system at somewhere near a 10th of the cost, RED catapulted the whole industry forward and forced the existing players to fast-track the new technologies into the families of high-end cameras were all familiar with today such as Sonys F-series, Panavisions RED-based DXLs, ARRIs ALEXAs and Canons Cinema Line.The other revolution which had also occurred recently was the advent of HD video recording in DSLRs with the introduction of the Canon 5D-II. While this suddenly gave people a low-cost way of recording cinematic images with shallow depth of field coming from a single large sensor, the 5D-II and the next few generations of video-capable DSLRs were limited by highly compressed 8-bit recording. The effects of this would sometimes only become apparent when the images were colour-graded and DCT blocking from the compression and banding from the 8-bit recording became difficult or even impossible to remove.BMDs choice to offer both 10-bit ProRes and 12-bit Cinema-DNG RAW recording removed the quality bottle-neck and allowed users with limited budgets or needing a light and compact camera to record in formats that met or exceeded the specifications of a 2K film scan, which was still the standard for cinema production at the time.What BMD did by releasing a camera that could match the file format quality of the high-end cameras with low or no compression and high bit depth and at a tiny percentage of the cost from even the original RED showed that these features neednt be kept for the top-shelf cameras alone, sparking the other manufacturers to allow these features to trickle down into their more affordable cameras as well.Since then, Blackmagic has evolved and expanded its range of cameras year after year and is now easily one of the most significant players in the professional motion image camera market.From the Pocket Cinema Cameras at the entry-level, to the various incarnations of the URSA Mini Pro platform, all of these cameras delivered varying degrees of film-like dynamic range combined with recording formats that provided the basis for intensive colour grading and VFX work. Since the DSLR revolution, there has been an explosion in the options for rigging cameras for cine style shooting, and all of these BMD cameras could and were extensively rigged in this way.For over a decade, BMD has been releasing cameras that record in cinema-friendly formats, optimised for high-end post-production requirements and routinely rigged for cinema-style shooting, so why call this new camera their first cine camera? I think this is their way of explaining succinctly that this is a camera designed from the ground up for film production style shooting.The Cine BenchmarkWhen we think of the modern motion picture film camera, the benchmark both in a practical sense and in the popular consciousness in the Panavision Panaflex, with its big white film magazine sitting on top, it is the very essence of what people both inside and outside the industry feel a movie camera should look.But a huge part of the success of the Panaflex since its introduction in the 1970s is that the camera itself was designed and evolved as part of a cohesive ecosystem that was modular, flexible and, most importantly reliable. In creating this, Panavision set expectations for crews and producers of what a professional camera system needed to be. 
This philosophy has flowed through in varying ways to all of the high-end digital camera systems used today. Take the ARRI ALEXA 35, for example, with a modular design that can be quickly and easily optimised for a wide range of shooting styles and requirements, has all the connections required for professional work, including multiple SDI outputs, power for accessories and wireless control.In this context, it starts to become very clear what BMD have done with the URSA Cine platform; they have designed a system that is driven by this cinema camera philosophy rather than, say their DSLR-styled Pocket cameras or the TV-inspired URSA Mini Pro range. Different design philosophies for different purposes.The URSA Cine 12K LF is the first camera to be released from the URSA Cine line ahead of the URSA Cine 17K with its 65mm film-sized image sensor and the URSA Cine Immersive stereoscopic camera being developed with Apple and optimised for capturing films in the 180 Immersive format for the Apple Vision Pro. While these other two cameras are much more niche, special-purpose tools, the Cine LF is very much a mainstream production camera system. An Operators CameraWhen it comes to actually using the Cine LF it becomes very clear what a mainstream system it is. It is a very operator-friendly camera that is well thought out. Although it is a little bigger and heavier than the URSA Mini Pro cameras, it is still significantly smaller and lighter than a full-sized ALEXA, which is of itself much smaller and lighter than something like a fully loaded Panaflex.The Cine LF is packaged in one of two kits, both well-kitted up and pretty much ready to shoot but with and without the EVF. I suspect the EVF kit will be by far the more popular, as the viewfinder is as good as any Ive ever used. It is sharp and clear, the contrast is exceptionally good, the colour rendition is incredibly accurate, and all in a very compact unit. The EVF connects to the camera via a single, locking USB-C cable which carries power, picture and control. Not only is this convenient, it allows the EVF to be thoroughly controlled from the cameras touchscreen menu. This is dramatically easier and quicker than the URSA Mini Pros EVF menu system. In addition to the EVF function buttons, there is even a record trigger on the EVF itself. In certain situations, this could be an extremely useful feature, particularly when the camera is wedged into a tight spot.The EVF is mounted using a system that attaches quickly to the top handle and allows the viewfinder to be positioned with a high degree of freedom. The kit also includes a viewfinder extension mount which is very quick and easy to attach and remove and can be used with or without an eyepiece leveller. With all of these elements, it is easy to position the EVF wherever the operator needs it and then firmly lock it in place. The way all these pieces fit together is solid, smooth and seamless. In this respect, it is instantly reminiscent of the Panaflex philosophy, it doesnt force you to use the camera in a particular way, it just allows you to make the choices, and the system supports that.The kit also includes a hard case with custom foam. This is also in keeping with the traditions of high-end professional camera systems from people like ARRI. I have a URSA Mini Pro case by SKB that allows the camera to be packed with the EVF attached, and I like that. 
However, the decision to have the camera packed with the EVF and its mounting system removed makes the whole case much neater and more petite than would otherwise be possible. In fact, the Cine LF EVF kit is substantially smaller than my case for the URSA Mini Pro, despite the bigger camera. In addition to being more consistent with film camera standards, the key to making this work is how quick and easy it is to attach the EVF once the camera is out of the case. The top handle and baseplate remain on the camera when packed.The baseplate with both kits is also an excellent piece of gear. Like the URSA Mini baseplate, it offers a lot of freedom where it is mounted to the underside of the camera body, but the Cine baseplate demonstrates how much this system is designed for film-style shooting. While the URSA Mini Baseplate is a broadcast-style VCT system that works well for getting a fully built camera quickly on and off the tripod it doesnt offer much in the way of rebalancing when the camera configuration changes substantially. The URSA Cine baseplate uses the ARRI dovetail system which is now almost ubiquitous for high-end production cameras. Although the kit doesnt come with the dovetail plate, it connects easily to both the ARRI ones and third-party plates, and the locking mechanism allows it to be partly unlocked for positioning with a safety catch to fully unlock for putting the camera on and off the plate.The baseplate also has a thick and comfortable shoulder pad built-in and mounting for both 15mm LWS and 19mm Studio rods.Together all of these features of the EVF and the baseplate mean that it would be quick and easy to reconfigure the Cine LF from working with a big lens like the Angenieux 24-290mm with a 66 matte box and the viewfinder extension and in moments, have the camera with a lightweight prime lens, clamp on matte box and ready for a handheld shot. This is the sort of flexibility crews expect from a high-end cine-style camera system, and the Cine LF delivers it comfortably.The kit also comes with both PL and locking EF lens mounts which can be changed with a 3mm hex key. These two options will cover a lot of users needs, but there is also an LPL for those who want to use lenses with ARRIs new standard mount, such as the Signature Primes and Zooms and also a Hasselblad mount for using their famous large format lenses.MonitoringMonitoring options is one area where the Cine LF is in a class of its own. In addition to the EVF, there are two built-in 5 HDR touchscreen monitors, which are both large and very clear, with 1500 nits of brightness and very good contrast, and with FHD resolution matching the EVF. On the operator side is a fold-out display, and when it is folded in, there is a small status display showing all the cameras key settings. This is similar to the one on the URSA Mini Pro cameras but with a colour screen. Unlike the URSA Mini fold-out screens, this one can rotate right around so that the monitor faces out while folded back into the camera body. I can imagine this being very convenient when the operator uses the EVF with the extension mount, and the focus puller could be working directly off the 5 display folded back in. The operator side monitor can even rotate around so that the subject can see themselves, potentially useful for giving an actor a quick look at the framing, or for total overkill selfies!On the right hand side, the second 5 monitor is rugged mounted to the side of the camera body. 
Like the left-side flip-out screen it is also a touch screen, and the whole menu system can be accessed from both screens. Either screen can be configured for an assistant, operator or director with a wide array of options for as little or as much information as required. The right side screen also has a row of physical buttons below it to control the key features and switch between modes.The first shoot I used the Cine LF on was with Bruce Beresford (director of Best Picture Oscar winner Driving Miss Daisy), shooting scenery for his new film Overture. Bruce loved that simply standing next to the camera allowed him to clearly see what was being filmed without waiting for additional equipment to be added to the camera. I can imagine many directors becoming quite used to this feature, allowing them to get away from the video village and be near the action whenever needed.The camera body also has two independent 12G SDI outputs. In the menu system, there are separate controls for both SDI outputs, both LCDs and the EVF, so you can have any combination of LUT, overlays, frame lines, focus and exposure tools etc., on each one.For example, it would be easy to have the LUT, overlays & frame lines on in the EVF for the operator, LUT and focus tools on the right side monitor for the focus puller, false colour & histogram on the left side monitor for the Director Of Photography to check, LUT & frame lines on one SDI out for the director and a clean Log feed on the other for the DIT or any other combination. This flexibility allows the camera to function in a wide range of crew structures and shooting styles efficiently. Because of this, it would be pretty feasible to effectively drop the camera into most existing mainstream production systems with minimal adaptation around the camera.Ironically, the main thing that might obscure how much of a mainstream tool the Cine LF is might be the 12K sensor. The Cine LF, like the URSA Mini Pro 12K, the combination of the RGBW colour filter array and the BRAW recording format means that the RAW recording resolution is not tied to the area of the sensor being used the way it is for virtually every other RAW capable camera.Resolution & Recording FormatsThis might sound counter-intuitive because of how RAW has been sold to us from the start. The concept that RAW is simply taking the raw, ie. unprocessed digitised data of each pixel from the image sensor has always been a vast oversimplification that has served a useful purpose in allowing people to understand the usefulness of RAW. The only production camera Im aware of that records in this way is the Achtel 97. Even then, the files need to be converted to a more conventional format for post-production. The vast amount of data involved in doing this is truly mind-boggling.What the RAW video formats were all familiar with do is more of an approximation of this which is to use efficiencies from saving the de-Bayer process to reduce the amount of data before compression is applied and use some of that space saving to record high bit depth data for each photosite allowing it to retain more of the tonal subtleties from the sensor and therefore being able to apply minimal processing to the recording. The effect of all this is that it gives so much flexibility in post that it generally functions in practice as if you had all of the unprocessed raw data from the sensor. 
RED set the standard for this with their REDCODE RAW format, and most manufacturers have followed it in some form or another, such as ARRI's uncompressed ARRIRAW. With all of these formats, recording a lower resolution than the full sensor resolution means cropping in on the sensor. The RED ONE, for example, recorded 4K across the Super 35mm sensor, but to record 2K meant cropping down to approximately Super 16mm.

Ben Allan DOP on set recently with the fxcrew

Blackmagic RAW, or BRAW, achieves a similar result by different means and was designed with the RGBW sensor array in mind. Unlike other RAW systems, where the recording is tied to the Bayer pattern of photosites, BRAW does a partial de-mosaic before the recording but still allows for all the normal RAW controls in post, such as white balance, ISO etc. One of the big advantages of this is that the recording resolution can be decoupled from the individual photosites, meaning that BMD's BRAW-capable cameras can record lower resolutions while still using the full dimensions of the sensor.

While the Cine LF has the advertised 12K of photosites across the sensor, it doesn't have to record in 12K to get all the other advantages of the large sensor. In 4K, you still get the VistaVision depth of field you would expect from something like an ARRI ALEXA LF, but also the other advantages of a larger sensor that are often overlooked. Many lenses that cover the full frame of the 24mm x 36mm sensor are optimised for that image circle, so they will maximise performance on that frame, and issues like sharpness and chromatic aberration will not be as good when significantly cropped in. There are also advantages to oversampling an image, including smoothness while retaining image detail and less likelihood of developing moiré patterns. On that subject, the Cine LF also has a built-in OLPF in front of the image sensor. This Optical Low Pass Filter removes detail below the resolution limit of the sensor, resulting in even less risk of moiré and digital aliasing.

As a 4K or 8K camera, the Cine LF excels. 4K RAW is the minimum resolution that the camera can record internally, but with the BRAW compression set to a conservative 5:1, the resulting data rate is a very manageable 81 MB/s. The equivalent resolution in ProRes HQ (although in 10-bit, 4:2:2) is 117 MB/s. For many productions, 4K from this camera will be more than adequate. The pictures are smooth and film-like and very malleable in Resolve. The workflow is easy, and you can very comfortably set the camera to 4K and treat it as a beautifully oversampled 4K camera. I'm conscious that this sort of sensor oversampling is how the original ARRI ALEXA built its reputation, with 2.8K of photosites coming down to stunning 2K recordings.

Some productions will shoot 4K for the most part but then switch to 8K or 12K for VFX work, somewhat like the way big VFX films used to shoot Super 35mm, Academy or Anamorphic for main unit and switch to VistaVision for VFX shots (think Star Wars, Jurassic Park & Titanic). The beauty of this is that, unlike switching to VistaVision, there are no issues with physically swapping cameras or with matching lenses or even angle of view; everything remains the same, but with a quick change in the menu system you have a 4K, 8K or 12K camera.

If you're expecting an eye-poppingly sharp 12K from this camera, you may want to adjust the settings in Resolve, because the camera is optimised for aesthetically pleasing images rather than in-your-face sharpness.
While BMD may have taken a bit of a PR hit with the Super-35 12K because people were expecting a big-impact 12K zing, I'm glad that they have stayed the course and kept the focus on beautiful images. Resolution, image detail and sharpness are related but different issues, and they are ones that every manufacturer has to make decisions about in every camera's design. That balancing act has landed in a real sweet spot with the Cine LF, and the effects are most notable on faces. The images are simultaneously detailed and gentle.

The other significant recording format is the 9K mode, which does a Super-35 crop. This is a great option that allows for the use of any of the vast array of beautiful modern and vintage lenses designed for the Super-35 frame. Things like the classic Zeiss Super Speeds or Panavision Primos spring to mind.

In each of the recording resolutions you have the same five options for aspect ratio.

Open Gate is a 3:2 or 1.5:1 image that uses the full dimensions of the sensor. While it isn't a common delivery format, there are a number of reasons to use this recording option. The one delivery format that does use something very close to this is IMAX, and I believe this camera will prove itself to be a superb choice for IMAX capture. But aside from that, it is very useful to shoot in open gate to capture additional image above and below a widescreen frame. This concept originated with films and TV shows framing for widescreen on Super 35 but without masking the frame in the camera, which is literally an open film gate. This creates a shoot-off which allows for either reframing or stabilisation.

The 16:9 mode uses the entire width of the sensor and crops the height to get the correct ratio. It's also worth noting that the 16:9 4K mode is based on the DCI cinema width of 4096, not UHD 4K at 3840 across, and is 2304 pixels high to get the 16:9 rather than the UHD 2160. This is another reflection of the design philosophy of making a cine camera rather than a TV camera.

17:9 is a full DCI cinema frame at 1.89:1, which is also not a standard delivery format but the container standard for digital cinema. This would need to be cropped down to either 2.4:1 Scope or 1.85:1 for cinema release, or 16:9 for TV, and any of these frame lines can be loaded for monitoring, as with the Open Gate mode.

2.4:1 is the standard Scope ratio for cinema and doesn't have any shoot-off for that format. In 8K and 4K this format delivers the highest off-speed frame rates, at 224 fps compared to 144 in 8K or 4K Open Gate. A fairly trivial detail, but 2.4:1 in 4K is actually the only standard delivery format that the camera offers directly without any resizing or cropping in post.

The final aspect ratio, 6:5 or 1.2:1, is explicitly designed for use with anamorphic lenses. With a traditional 2x anamorphic, the 6:5 ratio closely matches the anamorphic frame on film and produces a 2.4:1 image when de-squeezed. In 12K, 8K or 4K this would require anamorphic lenses built to cover the large format frame, but in 9K 6:5 the crop perfectly mimics a 35mm anamorphic negative area, making it possible to use any traditional 35mm anamorphics in the same way as they would work on film.

Anamorphic de-squeeze can be applied to any or all of the monitoring outputs and displays, including the EVF, in any of the recording formats and with any of the common anamorphic ratios: 1.3x, 1.5x, 1.6x, 1.66x, 1.8x and 2x.
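To make the de-squeeze arithmetic concrete, here is a tiny sketch (plain Python, not anything from Blackmagic) of how a recording aspect ratio and an anamorphic squeeze factor combine into the delivered ratio:

```python
def desqueezed_ratio(recording_ratio: float, squeeze: float) -> float:
    """Horizontal de-squeeze: the delivered width grows by the squeeze factor."""
    return recording_ratio * squeeze

# The 6:5 (1.2:1) anamorphic mode with a traditional 2x lens:
print(f"{desqueezed_ratio(6 / 5, 2.0):.1f}:1")   # -> 2.4:1, the standard Scope ratio

# The same maths for the other squeeze factors the camera can monitor:
for squeeze in (1.3, 1.5, 1.6, 1.66, 1.8, 2.0):
    print(f"{squeeze}x on 6:5 -> {desqueezed_ratio(1.2, squeeze):.2f}:1")
```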
Combined with the different recording aspect ratios, this could cater for a wide range of different workflows, creative options and even special venue formats. For example, 2x anamorphic on a 2.4:1 frame would produce a 4.8:1 image that would have been a superb option for the special venue production I shot and produced for Sydney's iconic Taronga Zoo a few years ago, and would have negated the need for the three-camera panoramic array we had to build at the time.

The other thing worth noting about the twenty different recording resolutions is that only one matches a standard delivery format pixel for pixel. I doubt that this is any reflection on that particular format; more likely it is a coincidence arising from the overall logic of the format options. This logic also goes back to the idea of this as a cine camera. Like a professional film camera shooting on film negative, the idea is not to create a finished image in camera but to record a very high-quality digital neg, which is expected to have processing in post-production before creating a deliverable image. While nothing is stopping you from using this camera for fast-turnaround work with minimal post-production, many of the choices made in the design of the camera are not focussed on that ability. This again comes back to the concept of a cine camera optimised to function in the ways that film crews and productions like or need to work.

The Media Module

The combination of 12K RAW and high frame rates created another challenge in the form of very high data rates. Although BRAW is a very efficient codec, BMD needed a recording solution that really wasn't met by any of the off-the-shelf options. Their solution is the Media Module, a specially designed, removable on-board storage unit. While it will also be possible to replace the Media Module with one that contains CFexpress slots, the Media Module M2 comes with the camera and has 8TB of extremely fast storage. Even in Open Gate 12K at the lowest compression settings, this still allows nearly 2 hours of recording. There are very limited scenarios where it would be necessary to shoot more than that amount of footage at that quality level in a single day. In those situations it is, of course, possible to have multiple Media Modules and swap them out as you would a memory card.

For most productions, though, it will be easy to comfortably get through each day's shooting on the single module. There is a Media Module Dock which allows three Modules to be connected simultaneously to a computer, but for many users the simpler solution will be the 10G Ethernet connection on the back of the camera. Either way, downloading the day's footage will happen about as fast as the receiving drive can handle, as the Media Module and the Ethernet will outrun most drive arrays.
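To put those capacity figures in perspective, here is a back-of-envelope sketch using the numbers quoted above. The 81 MB/s figure for 4K 5:1 BRAW is from this review; the Open Gate 12K rate is not published here, so it is inferred from the "nearly 2 hours on 8TB" claim, and the 10G Ethernet transfer time assumes the link itself is the bottleneck. Treat all of these as rough estimates, not specifications.

```python
def recording_hours(capacity_tb: float, rate_mb_per_s: float) -> float:
    """Approximate recording time, using decimal units (1 TB = 1,000,000 MB)."""
    return capacity_tb * 1_000_000 / rate_mb_per_s / 3600

# 4K BRAW at 5:1 compression, as quoted in the review:
print(f"{recording_hours(8, 81):.1f} h of 4K 5:1 BRAW on the 8TB Media Module")  # ~27 h

# Working backwards, 'nearly 2 hours' of Open Gate 12K on 8TB implies a rate
# in the region of 1.1 GB/s -- an inference from the article, not a spec:
implied_rate = 8 * 1_000_000 / (2 * 3600)
print(f"implied Open Gate 12K data rate: ~{implied_rate:.0f} MB/s")

# Offloading a full 8TB module over 10G Ethernet (~1250 MB/s theoretical):
print(f"~{8 * 1_000_000 / 1250 / 3600:.1f} h to offload over 10GbE")
```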
The Pictures

The resulting pictures are, of course, what it's all about, and the pictures from the Cine LF are nothing short of stunning. It has the same silkiness that the pictures from the original 12K have, but with the large format look. The larger photosites, and the fact that lenses aren't working as hard, also contribute to pictures that look sharp without harshness. That lack of harshness also contributes to the film-like look of the images. There are so many techniques to degrade digital to make it more like film, but this is digital looking like film at its best: detailed, clean, gentle on skin tones and with a beautiful balance between latitude and contrast. While there are several cameras which this one competes with, there is really nothing on the market that is comparable in terms of the combination of functionality, workflow and look.

The AVP Version

The Blackmagic URSA Cine camera platform is the basis of multiple models with different features for the high-end cinema industry. All models are built with a robust magnesium alloy chassis and lightweight carbon fibre polycarbonate composite skin to help filmmakers move quickly on set. Blackmagic URSA Cine Immersive is available to pre-order now direct from Blackmagic Design for US$29,995, with deliveries starting in late Q1 2025.

Customers get 12G-SDI out, 10G Ethernet, USB-C, XLR audio, and more. An 8-pin Lemo power connector at the back of the camera works with 24V and 12V power supplies, making it easy to use the camera with existing power supplies, batteries, and accessories. Blackmagic URSA Cine Immersive comes with a massive 250W power supply and B-mount battery plate, so customers can use a wide range of high-voltage batteries from manufacturers such as IDX, Blueshape, Core SWX, BEBOB, and more.

Blackmagic URSA Cine Immersive comes with 8TB of high-performance network storage built in, which records directly to the included Blackmagic Media Module and can be synced to Blackmagic Cloud and DaVinci Resolve media bins in real time. This means customers can capture over 2 hours of Blackmagic RAW in 8K stereoscopic 3D immersive, and editors can work on shots from remote locations worldwide as the shoot is happening. The new Blackmagic RAW Immersive file format is designed to make it simple to work with immersive video within a post-production workflow, and includes support for Blackmagic global media sync. Blackmagic RAW files store camera metadata, lens data, white balance, digital slate information and custom LUTs to ensure consistency of image on set and through post-production. Blackmagic URSA Cine Immersive is the first commercial digital film camera with ultra-fast, high-capability Cloud Store technology built in. The high-speed storage lets customers record at the highest resolutions and frame rates for hours and access their files directly over high-speed 10G Ethernet. The camera also supports creating a small H.264 proxy file in addition to the camera original media when recording, so the proxy can be uploaded to Blackmagic Cloud in seconds and media is available back at the studio in real time.

Blackmagic URSA Cine Immersive Features
- Dual custom lenses for shooting Apple Immersive Video for Apple Vision Pro.
- Dual 8160 x 7200 (58.7 Megapixel) sensors for stereoscopic 3D immersive image capture.
- Massive 16 stops of dynamic range.
- Lightweight, robust camera body with industry standard connections.
- Generation 5 Color Science with new film curve.
- Each sensor supports 90 fps at 8K captured to a single Blackmagic RAW file.
- Includes high performance Blackmagic Media Module 8TB for recording.
- High speed Wi-Fi, 10G Ethernet or mobile data for network connections.
- Includes DaVinci Resolve Studio for post production.

Apple is looking to build a community of AVP projects

Submerged

Last month, Apple debuted Submerged, the critically acclaimed immersive short film written and directed by Academy Award-winning filmmaker Edward Berger.
New episodes of Adventure and Wild Life will premiere in December, followed by new episodes of Boundless, Elevated and Red Bull: Big-Wave Surfing in 2025.

Submerged BTS. Note the film was not shot on the BMC, but it is now available to watch on AVP.
    0 Comments ·0 Shares ·105 Views
  • NVIDIA's Simon Yuen: Facing the Future of AI
    www.fxguide.com
    In this fxpodcast episode, we explore a topic that captures the imagination and attention of creatives worldwide: building your own generative AI tools and pipelines. Simon Yuen is director of graphics and AI at NVIDIA, where he leads the digital human efforts to develop new character technology and deep learning-based solutions that allow new and more efficient ways of creating high-quality digital characters. Before NVIDIA, Simon spent more than 21 years in the visual effects industry, on both the art and technology sides of the problem, at many studios including Method Studios, Digital Domain, Sony Pictures Imageworks, DreamWorks, Blizzard Entertainment, and others, building teams and technologies that push the envelope of photorealistic digital character creation.

Generative AI

This fxpodcast is not sponsored, but is based on research done for the new Field Guide to Generative AI. fxguide's Mike Seymour was commissioned by NVIDIA to unpack the impact of generative AI on the media and entertainment industries, offering practical applications, ethical considerations, and a roadmap for the future. The Field Guide is free and can be downloaded here: Field Guide to Generative AI. In M&E, generative AI has proven itself a powerful tool for boosting productivity and creative exploration. But it is not a magic button that does everything. It's a companion, not a replacement. AI lacks the empathy, cultural intuition, and nuanced understanding of a story's uniqueness that only humans bring to the table. But when generative AI is paired with VFX artists and TDs, it can accelerate pipelines and unlock new creative opportunities.

Digital Human powered by NVIDIA

The Core of Generative AI: Foundation Models & NIMs

NVIDIA Inference Microservices (NIMs) and foundation models are the building blocks of a lot of modern AI workflows, and they are at the heart of many new generative AI solutions. Foundation models are large-scale, pre-trained neural networks that can tackle broad categories of problems. Think of them as AI generalists, adaptable to specific tasks through fine-tuning with additional data. For example, you might start with a foundation model capable of understanding natural language (an LLM) and fine-tune it to craft a conversational agent that your facility can use to help onboard new employees. While building these models from scratch is resource-intensive and time-consuming, fine-tuning them for your specific application is relatively straightforward, and NVIDIA has made this process quite accessible.

NVIDIA NIMs

NIMs, or microservices, simplify the deployment of foundation models, whether in the cloud, in a data centre, or even on the desktop. NIMs streamline the process while also ensuring data security. They make it easy to create tailored generative AI solutions for a facility's or project's needs. For instance, NVIDIA's latest OpenUSD NIMs allow developers to integrate generative AI copilots into USD workflows, enhancing efficiency in 3D content creation.
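As a rough illustration of how lightweight talking to a deployed microservice can be, here is a minimal sketch that assumes an LLM NIM has been deployed locally and exposes an OpenAI-compatible chat endpoint. The host, port, and model identifier below are placeholder assumptions for illustration, not details taken from this interview.

```python
import requests

# Assumed local deployment of an LLM NIM exposing an OpenAI-style API;
# the URL and model name are placeholders, not documented values.
NIM_URL = "http://localhost:8000/v1/chat/completions"
MODEL = "meta/llama-3.1-8b-instruct"

def ask_onboarding_assistant(question: str) -> str:
    """Query a facility-hosted, fine-tuned model -- e.g. an onboarding copilot."""
    payload = {
        "model": MODEL,
        "messages": [
            {"role": "system", "content": "You answer questions about studio pipeline tools."},
            {"role": "user", "content": question},
        ],
        "temperature": 0.2,
        "max_tokens": 256,
    }
    response = requests.post(NIM_URL, json=payload, timeout=60)
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_onboarding_assistant("Where do I find the show's OpenUSD naming conventions?"))
```

Because the endpoint is self-hosted, prompts and any fine-tuning data stay inside the facility, which is the data-security point made above.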
James

Bringing Digital Humans to Life with NVIDIA ACE

One of the most interesting applications of NIMs is in crafting lifelike digital humans. NVIDIA's ACE (Avatar Cloud Engine) exemplifies this capability. With ACE, developers and TDs can design interactive digital humans and avatars that respond in real time with authentic animations, speech, and emotions. A standout example is James, a virtual assistant powered by NVIDIA ACE. He is an interactive digital human, a communications tool powered by a selected knowledge base and ACE, and animated by NVIDIA's Audio2Face. James showcases how generative AI and digital human technologies converge, providing tools for telepresence, interactive storytelling, or even live character performances. This is more than just a visual upgrade; it's a way to enhance emotional connections in digital media.

Generative AI: Empowering Creativity, Not Replacing It

As we as an industry adopt and explore these tools, it's essential to keep a balanced perspective. Generative AI isn't here to replace human creativity; we need to use it to amplify it. AI can enable teams to iterate faster, experiment, and focus on the storytelling that truly resonates with audiences. Central to this is respecting artists' rights, having provenance of training data, and maintaining data security. From fine-tuning a foundation model to integrating NIM-powered workflows, building your own generative AI pipeline involves leveraging technology to empower your project. With tools like NVIDIA's foundation models and ACE, the possibilities are immense, but the responsibility to use them thoughtfully is equally crucial.
    0 Comments ·0 Shares ·130 Views
  • Zap Andersson: Exploring the Intersection of AI and Rendering
    www.fxguide.com
    Hkan Zap Andersson is a senior technical 3D expert and a widely recognized name in the world of rendering and shader development. Zap has long been at the forefront of technical innovation in computer graphics and VFX.Known for his contributions to tools like OSL and MaterialX at Autodesk, Zap has recently ventured into the rapidly evolving domain of AI video tools, leveraging Googles powerful Notebook LM to push the boundaries of creative storytelling. His exploration resulted in UNREAL MYSTERIES, a bizarre series designed both to challenge his skills and test the capabilities of these new AI technologies.Zap joins us on the fxpodcast to delve into his creative process, share insights on the tools he used, and discuss the lessons he learned from working with cutting-edge AI systems. Below, youll find an in-depth making of breakdown that details how Zap combined his expertise with AI-powered workflows. And because no experiment is complete without a few surprises, weve included an AI blooper reel at the bottom of this story to highlight the quirks and challenges of working with this still-maturing technology.Listen to this weeks fxpodcast as we unpack the fascinating and odd world of artistry, technology, and innovation that is AI video content.Zap commented on this making of video that, this one gets INSANELY meta, the hosts get crazily introspective and its quite chilling. quite self awareIn the podcast the guys mention a series of Video tools, here is a set of links to those programs, (your mileage may vary), enjoy:MiniMax (although maybe its actual name is Hailuo, nobody truly knows):Kling:Luma Labs Dream Machine:RunwayML Gen-3: https://app.runwayml.com/dashboardUpscaling:Krea:Avatars:Heygen:Voices:NotebookLMElevenlabsMusic & Sound:Music: SumoSound Effects: ElevenlabsGoof RealZaps backgroundZap began his journey with a degree in Electronics but has been immersed in programming throughout his life. His first encounter with a computer dates back to 1979, working with an HP2000E mainframe, followed by the Swedish ABC80 computer, which he enthusiastically modified by building a custom graphics card and developing a suite of commercial games. For many years, Zap worked in the CAD industry, specializing in mechanical design software. However, his true passion has always been 3D graphics and rendering.Pursuing this interest, he developed his own ray tracer and 3D modeling software during his spare time. Zaps career took a decisive turn when he started creating advanced shaders for Mental Images, which NVIDIA later acquired. Today, he is a part of the Autodesk 3ds Max Rendering team, focusing on technologies such as OSL, shaders, MaterialX, and other cutting-edge rendering tools.Zaps expertise includes shader development, rendering algorithms, UI design, and fun experimental communication skills, making him a versatile and highly skilled professional in 3D graphics and rendering and a good friend of fxguide and the podcast.Zap on social:
    0 Comments ·0 Shares ·132 Views
  • Adobe and GenAI
    www.fxguide.com
    In this weeks fxpodcast, we sit down with Alexandra Castin, head of Adobes Firefly generative AI team, to discuss the evolution of generative AI, Adobes unique approach to ethical content creation, and the groundbreaking work behind Firefly.This podcast is not sponsored, but is based on research done for thenew Field Guide to Generative AI. fxguides Mike Seymour was commissioned by NVIDIA to unpack the impact of generative AI on the media and entertainment industries, offering practical applications, ethical considerations, and a roadmap for the future.The Field Guide is free and can be downloaded here: Field Guide to Generative AI. From GANs to Multimodal ModelsAs you will hear in the fxpodcast, Adobes journey with generative AI began in 2020, when Adobe introduced Neural Filters in Photoshop. At the time, the focus was on using GANs (Generative Adversarial Networks) to generate new pixels for creative edits. Today, Adobes scope has expanded dramatically to include Diffusion Models, Transformers, and cutting-edge architectures.For me, the core principle of generative AI hasnt changedits about the computer understanding user intent and synthesizing responses from training data, Alexandra explains. The models have evolved to not only understand and generate text but also create and enhance images, videos, audio, and even 3D assets.Adobe is uniquely positioned in this space, as its product portfolio spans nearly every creative medium. With Firefly, theyve embraced multimodal generative AI to create tools that cater to text, images, audio, video, and beyond.All these images are generated with Adobe Firefly from text alone, with no other input.FireflyFirefly is Adobes flagship generative AI platform, now integrated into industry-leading tools like Photoshop, Illustrator, and Premiere Pro. According to Alexandra, Fireflys strength lies in its training data: At its core is a set of high-quality data we have the right to train on. Thats what sets Firefly apartits both powerful and safe for commercial use.One standout feature is Photoshops Generative Fill, which Alexandra describes as a co-pilot for creatives. Users can guide Photoshop with text prompts, allowing Firefly to generate precise visual results. The technology has democratized generative AI, making it accessible and practical for VFX professionals and enthusiasts alike.Ensuring Ethical AIAdobe has been a staple of the creative community for over four decades, and with Firefly, theyve prioritized respecting artists rights and intellectual property. Alexandra points to Adobes commitment to clean training material as foundational to Fireflys strategy.Weve implemented guide rails to guarantee that Firefly wont generate recognizable characters, trademarks, or logos, she says. This safeguard ensures that users work remains free from unintentional infringementa critical consideration in the commercial space.The C2PA Initiative: Building Trust in MediaOne of Adobes most significant contributions to the generative AI landscape is its leadership in the Coalition for Content Provenance and Authenticity (C2PA). Launched in 2018 from Adobe Research, the initiative addresses the growing concern around misinformation and content authenticity.Think of it like a nutrition label for digital media, Alexandra explains. 
The goal is to provide transparency about how a piece of content was created, so users can make informed decisions about what they're consuming.

The initiative has attracted over 3,000 organizations, including camera manufacturers, media companies, and AI model creators. By embedding content credentials into outputs, the C2PA aims to establish a universal standard for verifying authenticity, a crucial step as generative content continues to explode. (A minimal sketch of reading one of these credentials appears at the end of this article.)

Looking Ahead

As the generative AI landscape evolves, Firefly represents Adobe's commitment to balancing innovation with ethical responsibility. By building tools that empower creators while protecting intellectual property, Adobe is aiming to build a future where generative AI becomes an indispensable part of creative workflows.

Join us on this week's fxpodcast as we dive deeper with Alexandra Castin into the future of generative AI, Adobe's strategic plans, and the lessons learned along the way.
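For readers who want to see what a content credential looks like in practice, here is a small sketch that shells out to the Content Authenticity Initiative's open-source c2patool to read whatever manifest is attached to a file. The tool's invocation and its JSON output shape are assumptions based on its public documentation, not anything discussed in the interview, and the file name is hypothetical.

```python
import json
import subprocess
from typing import Optional

def read_content_credentials(path: str) -> Optional[dict]:
    """Inspect a media file's C2PA manifest, if one is attached.

    Assumes the open-source `c2patool` CLI is installed and, when given a
    file path, prints the manifest store as JSON (an assumption, not a
    detail from this article).
    """
    result = subprocess.run(["c2patool", path], capture_output=True, text=True)
    if result.returncode != 0:
        return None  # no manifest found, or the tool failed
    try:
        return json.loads(result.stdout)
    except json.JSONDecodeError:
        return None

if __name__ == "__main__":
    manifest = read_content_credentials("render_final.jpg")  # hypothetical file
    if manifest is None:
        print("No content credentials attached.")
    else:
        # The manifest store lists claims about how the asset was produced,
        # including any generative AI steps declared by the tools involved.
        print(json.dumps(manifest, indent=2)[:500])
```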
    0 Comments ·0 Shares ·147 Views
  • VFXShow 289: HERE
    www.fxguide.com
    This week, the team reviews the film HERE by director Robert Zemeckis. Earlier, we spoke to visual effects supervisor Kevin Baillie for the fxpodcast, where Kevin discussed the innovative approaches used on set and the work of Metaphysic on de-aging. Starring Tom Hanks, Robin Wright, Paul Bettany, and Kelly Reilly, Here is a poignant exploration of love, loss, and the passage of time.

The filmmaking techniques behind this film are undeniably groundbreaking, but on this week's episode of The VFX Show the panel finds itself deeply divided over the narrative and plot. One of our hosts, in particular, holds a strikingly strong opinion, sparking a lively debate that sets this discussion apart from most of our other shows. Few films have polarized the panel quite like this one. Don't miss the spirited conversation on the podcast.

Please note: this podcast was recorded before the interview with Kevin Baillie (fxpodcast).

The Suburban Dads this week are:
Matt Wallin * @mattwallin www.mattwallin.com. Follow Matt on Mastodon: @[emailprotected]
Jason Diamond @jasondiamond www.thediamondbros.com
Mike Seymour @mikeseymour. www.fxguide.com. + @mikeseymour

Special thanks to Matt Wallin for the editing & production of the show with help from Jim Shen.
    0 Comments ·0 Shares ·175 Views
  • Generative AI in media and entertainment
    www.fxguide.com
    SimulonIn this new Field Guide to Generative AI, fxguides Mike Seymour, working with NVIDIA, unpacks the impact of generative AI on the media and entertainment industries, offering practical applications, ethical considerations, and a roadmap for the future.The field guide draws on interviews with experts atPlus, expertise from visual effects researchers at Wt FX & Pixar.This comprehensive guide is a valuable resource for creatives, technologists, and producers looking to harness the transformative power of AI in a respectful and appropriate fashion.Generative AI in Media and Entertainment, a New Creative Era: Field GuideClick here to download the field guide (free).Generative AI has become one of the most transformative technologies in media and entertainment, offering tools that dont merely enhance workflows but fundamentally change how creative professionals approach their craft. This class of AI, capable of creating entirely new content from images and videos to scripts and 3D assetsrepresents a paradigm shift in storytelling and production.As the field guide notes, this revolution stems from the nexus of new machine learning approaches, foundational models, and advanced NVIDIA accelerated computing, all combined with impressive advances in neural networks and data science.NVIDIAFrom enhancement AI to creation GenAIWhile traditional AI, such as Pixars machine learning denoiser in RenderMan, has been used to optimize production pipelines, generative AI takes a step further by creating original outputs. Dylan Sisson of Pixar notes that their denoiser has transformed our entire production pipeline and was first used on Toy Story 4, touching every pixel you see in our films.However, generative AIs ability to infer new results from vast data sets opens doors to new innovations, building and expanding peoples empathy and skills. Naturally it also has raised concerns, about artists rights, providence of training data and possible job losses as production pipelines incorporate this new technology. The challenge is to ethically incorporate these new technologies and the field guide aims to show companies that have been doing just that.RunwayBreakthrough applicationsGenerative models, including GANs (Generative Adversarial Networks), diffusion-based approaches, and transformers, underpin these advancements in generative AI. These technologies are not well understood by many producers and clients, yet companies that dont explore how to use them could well be at an enormous disadvantage.Generative AI tools like Runway Gen-3 are redefining how cinematic videos are created, offering functionalities such as text-to-video and image-to-video generation with advanced camera controls. From the beginning, we built Gen-3 with the idea of embedding knowledge of those words in the way the model was trained, explains Cristbal Valenzuela, CEO of Runway. This allows directors and artists to guide outputs with industry-specific terms like 50mm lens or tracking shot.Similarly, Adobe Firefly integrates generative AI across its ecosystem, allowing users to tell Photoshop what they want and having it comply through generative fill capabilities. Fireflys ethical training practices ensure that it only uses datasets that are licensed or within legal frameworks, guaranteeing safety for commercial use.New companies like Simulon are also leveraging generative AI to streamline 3D integration and visual effects workflows. 
According to Simulon co-founder Divesh Naidoo, Were solving a fragmented, multi-skill/multi-tool workflow that is currently very painful, with a steep learning curve, and streamlining it into one cohesive experience. By reducing hours of work to minutes, Simulon allows for rapid integration of CGI into handheld mobile footage, enhancing creative agility for smaller teams.BriaEthical frameworks and creative controlThe rapid adoption of generative AI has raised critical concerns around ethics, intellectual property, and creative control. The industry has made strides in addressing these issues. Adobe Firefly and Getty Images stand out for their transparent practices. Rather than ask if one has the rights to use a GenAI image, the better question is, can I use these images commercially, and what level of legal protection are you offering me if I do? asks Gettys Grant Frarhall. Getty provides full legal indemnification for its customers, ensuring ethical use of its proprietary training sets.Synthesia, which creates AI-driven video presenters, has similarly embedded an ethical AI framework into its operations, adhering to the ISO Standard 42001. Co-founder Alexandru Voica emphasizes, We use generative AI to create these avatars the diffusion model adjusts the avatars performance, the facial movements, the lip sync, and eyebrowseverything to do with the face muscles. This balance of automation and user control ensures that artists remain at the center of the creative process.Wonder StudiosTraining data and provenanceThe quality and source of training data remain pivotal. As noted in the field guide, It can sometimes be wrongly assumed that in every instance, more data is goodany data, just more of it. Actually, there is a real skill in curating training data. Companies like NVIDIA and Adobe use carefully curated datasets to mitigate bias and ensure accurate results. For instance, NVIDIAs Omniverse Replicator generates synthetic data to simulate real-world environments, offering physically accurate 3D objects with accurate physical properties for training AI systems, and it fully trained appropriately.This attention to data provenance extends to protecting artists rights. Getty Images compensates contributors whose work is included in training sets, ensuring ethical collaboration between creators and AI developers.BriaExpanding possibilitiesGenerative AI is not a one-button-press solution but a dynamic toolset that empowers artists to innovate while retaining creative control. As highlighted in the guide, Empathy cannot be replaced; knowing and understanding the zeitgeist or navigating the subtle cultural and social dynamics of our times cannot be gathered from just training data. These things come from people.However, when used responsibly, generative AI accelerates production timelines, democratizes access to high-quality tools, and inspires new artistic directions. Tools like Wonder Studio automate animation workflows while preserving user control, and platforms like Shutterstocks 3D asset generators provide adaptive, ethically trained models for creative professionals.Adobe FireflyThe future of generative AIThe industry is just beginning to explore the full potential of generative AI. Companies like NVIDIA are leading the charge with solutions like the Avatar Cloud Engine (ACE), which integrates tools for real-time digital human generation. 
At the heart of ACE is a set of orchestrated NIM Microservices that work together, explains Simon Yuen, NVIDIAs Senior Director of Digital Human Technology. These tools enable the creation of lifelike avatars and interactive characters that can transform entertainment, education, and beyond.As generative AI continues to evolve, it offers immense promise for creators while raising essential questions about ethics and rights. With careful integration and a commitment to transparency, the technology has the potential to redefine the boundaries of creativity in media and entertainment.
    0 Comments ·0 Shares ·192 Views
  • A deep dive into the filmmaking of Here with Kevin Baillie
    www.fxguide.com
    The film Here takes place in a single living room, with a static camera, but the film is anything but simple. It remains faithful to the original graphic novel by Richard McGuire on which it is based. Tom Hanks and Robin Wright star in a tale of love, loss, and life, along with Paul Bettany and Kelly Reilly.

Robert Zemeckis directing the film

Robert Zemeckis directed the film. The cinematography was by Don Burgess, and every shot in the film is a VFX shot. On the fxpodcast, VFX supervisor and second unit director Kevin Baillie discusses the complex challenges of filming, editing, and particularly de-aging the well-known cast members to play their characters throughout their adult lifespans.

A monitor showing the identity detection that went into making sure that each actor's younger real-time likeness was swapped onto them, and only them.

De-Aging

Given the quantity and emotional nature of the performances, and the vast range of years involved, it would have been impossible to use traditional CGI methods and equally too hard to do with traditional makeup. The creative team decided that AI had just advanced enough to serve as a VFX tool, and its use was crucial to getting the film greenlit. Baillie invited Metaphysic to do a screen test for the project in 2022, recreating a young Tom Hanks, reminiscent of his appearance in Big, while maintaining the emotional integrity of his contemporary performance. A team of artists used custom neural networks to test de-aging Tom Hanks to his 20s. That gave the studio and the filmmaking team confidence that the film could be made. Interestingly, as Baillie discusses in the fxpodcast, body doubles were also tested but did not work nearly as well as the original actors.

Tests of face swapping by Metaphysic. Early test of methods for de-aging Tom based on various training datasets:
https://www.fxguide.com/wp-content/uploads/2024/11/tomTest_preproduction_ageEvolutionOptions.mp4
Neural render output test clip:
https://www.fxguide.com/wp-content/uploads/2024/11/tomTest_preproduction_WIP.mp4
Final comp test clip (the result of the de-aging test for Tom that helped green-light the film):
https://www.fxguide.com/wp-content/uploads/2024/11/tomTest_preproduction_Final.mp4

While the neural network models generated remarkable photoreal results, they still required skilled compositing to match, especially on dramatic head turns. Metaphysic artists enhanced the AI output to hold up to the film's cinematic 4K standards. Metaphysic also developed new tools for actor eyeline control and other key crafting techniques. Additionally, multiple models were trained for each actor to meet the diverse needs of the film; Hanks is portrayed at five different ages, Wright at four ages, and Bettany and Reilly at two ages each. Achieving this through traditional computer graphics techniques involving 3D modeling, rendering, and facial capture would have been impossible given the scale and quality required for Here and the budget for so much on-screen VFX. The film has over 53 minutes of complete face replacement work, done primarily by Metaphysic, led by Metaphysic VFX Supervisor Jo Plaete. Metaphysic's proprietary process involves training a neural network model on a reference input, in this case footage and images of a younger Hanks, with artist refinement of the results until the model is ready for production. From there, an actor or performer can drive the model, both live on set and in a higher quality version in post.
The results are exceptional and well beyond what traditional approaches have achieved.

(Image: on-set live preview, with Tom de-aged as visualised live on set on the right versus the raw camera feed on the left.)

For principal photography, the team needed a way to ensure that the age of each actor's body motion matched the scripted age of their on-screen character. To help solve this, the team deployed a real-time face-swapping pipeline in parallel on set, with one monitor showing the raw camera feed and the other showing the actors visualised in their 20s (with about a six-frame delay). These visuals acted as a tool for the director and the actors to craft performances. As you can hear in the podcast, it also allowed much more collaboration with other departments such as hair and makeup, and costume. The final result was a mix of multiple AI neural renders and classic Nuke compositing: a progression of the actors through their years, designed to be invisible to audiences.

(Image: Robin with old-age makeup, compared with synthesised images of her at her older age, which were used to improve the makeup using methods similar to the de-aging done in the rest of the film.)

In addition to de-aging, similar approaches were used to improve the elaborate old-age prosthetics worn by Robin Wright at the end of the film. This allowed enhanced skin translucency, fine wrinkles, and other subtle detail. Such age-changing makeup is extremely difficult and is often characterised as the hardest special effects makeup to attempt; Metaphysic did an exceptional job of combining actual makeup with digital makeup to produce a photorealistic result. In addition to the visuals, Respeecher and Skywalker Sound also de-aged the actors' voices, as Baillie discusses in the fxpodcast.

Three sets
The filming was done primarily on three sets. There were two identical copies of the room to allow one to be filmed while the other was being dressed for the correct era. Additionally, exterior scenes from before the house was built were filmed on a separate third soundstage.

Graphic Panels
Graphic panels serve as a bridge across millions of years from one notionally static perspective. The graphic panels that transitioned between eras were deceptively tricky, with multiple scenes often playing on-screen simultaneously. As Baillie explains on the podcast, they had to reinvent editorial count sheets and use a special in-house comp team with After Effects as part of the editorial process.

LED Wall
An LED wall with content from Unreal Engine was used outside the primary window. As some of the background needed to be replaced, the team also used the static camera to shoot helpful motion-control-style matte passes (the disco passes).

The Disco passes
For the imagery in the background, Baillie knew that it would take a huge amount of effort to add the fine detail needed in Unreal Engine. He liked the UE output but wanted a lot more fine detail for the 4K master. Once the environment artists had made their key creative choices, one of the boutique studios and the small in-house team used an AI-powered tool called Magnific to up-res the images. Magnific was built by Javi Lopez (@javilopen) and Emilio Nicolas (@emailnicolas), two indie entrepreneurs, and it uses AI to infer additional detail.
The advanced AI upscaler and enhancer effectively reimagines much of the detail in the image, guided by a prompt and parameters. (Image: before on the left, after on the right.) Magnific allowed for an immense amount of high-frequency detail that would have been very time-consuming to add traditionally.

Here has not done exceptionally well at the box office (and, as you will hear in the next fxguide VFXShow podcast, not everyone liked the film), but there is no doubt that the craft of filmmaking and the technological advances are dramatic. Regardless of any plot criticisms, the film stands as a testament to technical excellence and innovation in the field. Notably, the production respected data provenance in its use of AI. Rather than replacing VFX artists, AI was used to complement their skills, empowering an on-set and post-production team to bring the director's vision to life. While advances in AI can be concerning, in the hands of dedicated filmmakers these tools offer new dimensions in storytelling, expanding what's creatively possible.
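Magnific itself is a closed web tool, so the snippet below is not its API; it is only a rough illustration of the general technique of prompt-guided, diffusion-based up-resing, using the open-source diffusers library. The model choice, file names, and prompt are assumptions for the example, not anything used on Here.

```python
import torch
from diffusers import StableDiffusionUpscalePipeline
from PIL import Image

# An open 4x upscaler, used purely to illustrate prompt-guided detail synthesis.
pipe = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")

low_res = Image.open("window_environment_render.png").convert("RGB")  # hypothetical UE render
prompt = "weathered brickwork, fine foliage, photographic texture"    # guides what detail gets invented

upscaled = pipe(prompt=prompt, image=low_res).images[0]
upscaled.save("window_environment_render_4x.png")
```

The key point, and the reason such tools need artist supervision, is that the added high-frequency detail is inferred rather than recovered, so the output has to be reviewed against the creative intent of the original render.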
    0 Comments ·0 Shares ·159 Views
  • Agatha All Along with Digital Domain
    www.fxguide.com
    Agatha All Along, helmed by Jac Schaeffer, continues Marvel Studios' venture into episodic television, this time delving deeper into the mystique of Agatha Harkness, a fan-favourite character portrayed by Kathryn Hahn. This highly anticipated Disney+ miniseries, serving as a direct spin-off from WandaVision (2021), is Marvel's eleventh television series within the MCU and expands the story of magic and intrigue that WandaVision introduced. Filming took place in early 2023 at Trilith Studios in Atlanta and on location in Los Angeles, marking a return for many of the original cast and crew from WandaVision. The production drew on its predecessor's visual style but expanded it with a rich, nuanced aesthetic that emphasises the eerie allure of Agatha's character. By May 2024, Marvel announced the official title, Agatha All Along, a nod to the beloved song from WandaVision that highlighted Agatha's mischievous involvement in the original series. The cast features an ensemble including Joe Locke, Debra Jo Rupp, Aubrey Plaza, Sasheer Zamata, Ali Ahn, Okwui Okpokwasili, and Patti LuPone, all of whom bring fresh energy to Agatha's world. Schaeffer's dual role as showrunner and lead director allows for a cohesive vision that builds on the MCU's expanding exploration of side characters. After Loki, Agatha All Along has been one of the more successful spin-offs, with audience numbers actually growing during the season as the story progressed. Agatha All Along stands out for its dedication to character-driven narratives, enhanced by its impressive technical VFX work and its unique blend of visuals.

    Agatha All Along picks up three years after the dramatic events of WandaVision, with Agatha Harkness breaking free from the hex that imprisoned her in Westview, New Jersey. Devoid of her formidable powers, Agatha finds an unlikely ally in a rebellious goth teen who seeks to conquer the legendary Witches' Road, a series of mystical trials said to challenge even the most powerful sorcerers. This new miniseries is a mix of dark fantasy and supernatural adventure. It reintroduces Agatha as she grapples with the challenge of surviving without her magic. Together with her young protégé, Agatha begins to build a new coven, uniting a diverse group of young witches, each with distinct backgrounds and latent abilities. Their quest to overcome the Witches' Road's formidable obstacles becomes not only a journey of survival but one of rediscovering ancient magic, which, in turn, requires some old-school VFX.

    When approaching the visual effects in Agatha All Along, the team at Digital Domain once again drew on their long history of VFX, adapting to the unique, old-school requirements set forth by production. Under the creative guidance of VFX Supervisor Michael Melchiorre and Production VFX Supervisor Kelly Port, the series' visuals present a compelling marriage between nostalgia and cutting-edge VFX. What's remarkable is the production's call for a 2D compositing approach that evokes the style of classic films. The decision to use traditional compositing not only serves to ground the effects but also gives the entire series a unique texture, a rare departure in a modern era dominated by fully rendered 3D environments. Each beam of magic, carefully crafted with tesla coil footage and practical elements in Nuke, gives the witches their distinctive looks while adding a sense of raw, visceral energy. For the broom chase, Digital Domain took inspiration from the high-speed speeder-bike scenes in Return of the Jedi.
Working from extensive previs by Matt McClurg's team, the artists skillfully blended real set captures with digital extensions to maintain the illusion of depth and motion. The compositors' meticulous work, layering up to ten plates per shot, ensured each broom-riding witch interacted correctly with the environment. The ambitious sequence demonstrates technical finesse and a dedication to immersive storytelling. In the death and ghost sequences, Digital Domain took on some of the series' most challenging moments. From Agatha's decaying body to her rebirth as a spectral entity, these scenes required a balance of CG and 2D compositing that maintained Kathryn Hahn's performance nuances while delivering a haunting aesthetic. Drawing from '80s inspirations like Ghostbusters, compositors carefully retimed elements of Hahn's costume and hair, slowing them to achieve the ethereal look mandated by the production.

As Agatha All Along unfolds, the visuals reveal not only Digital Domain's adaptability but also a nod to the history of visual effects, an homage to both their own legacy and classic cinema. By tackling the limitations of a stripped-down toolkit with ingenuity, Digital Domain enriched the story with fresh yet nostalgically layered visuals. Agatha All Along stands out for its blend of good storytelling and layered character development. Each trial on the Witches' Road reveals more about Agatha and her evolving bond with her young ally, adding new depth to her character and expanding the lore of the MCU. Fans of WandaVision will find much to love here, as Agatha's story unfolds with complex VFX and a touch of wicked humor.
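Digital Domain's actual comp scripts are of course their own, but the basic mechanics of stacking many plates with over merges are standard Nuke territory. A minimal sketch using Nuke's Python API, with the plate names invented purely for illustration:

```python
import nuke

# Hypothetical plate stack for a broom-chase shot, back to front.
plates = [
    "bg_set_capture.exr",
    "midground_extension.exr",
    "tesla_coil_element.exr",
    "witch_foreground.exr",
]

stack = nuke.nodes.Read(file=plates[0])
for path in plates[1:]:
    layer = nuke.nodes.Read(file=path)
    merge = nuke.nodes.Merge2(operation="over")
    merge.setInput(0, stack)   # B input: everything composited so far
    merge.setInput(1, layer)   # A input: the new plate layered on top
    stack = merge

out = nuke.nodes.Write(file="broom_chase_comp.####.exr")
out.setInput(0, stack)
```

A ten-plate shot is just this loop with more entries, plus the per-plate grades, retimes, and transforms that make the layers sit together.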
    0 Comments ·0 Shares ·178 Views
  • VFXShow 288: The Penguin
    www.fxguide.com
    This week, the team discusses the visual effects of HBO's limited series The Penguin, a spin-off from director Matt Reeves' The Batman. The Penguin is a miniseries developed by Lauren LeFranc for HBO, based on the DC Comics character of the same name. The series follows on from The Batman (2022) and explores Oz Cobb's rise to power in Gotham City's criminal underworld immediately after the events of that film. Colin Farrell stars as Oz, reprising his role from The Batman. He is joined by Cristin Milioti, Rhenzy Feliz, Deirdre O'Connell, Clancy Brown, Carmen Ejogo, and Michael Zegen. Join the team as they discuss the complex plot, effects, and visual language of this highly successful miniseries. The Penguin premiered on HBO on September 19, 2024, with eight episodes. The series has received critical acclaim for its performances, writing, direction, tone, and production values.

    The VFX were made by: Accenture Song, Anibrain, FixFX, FrostFX, Lekker VFX, and Pixomondo. The Production VFX Supervisor was Johnny Han, who also served as 2nd Unit Director. Johnny Han is a twice Emmy-nominated and Oscar-shortlisted artist and supervisor.

    The Supervillains this week are:
    Matt "Bane" Wallin: @mattwallin, www.mattwallin.com. Follow Matt on Mastodon: @[emailprotected]
    Jason "Two Face" Diamond: @jasondiamond, www.thediamondbros.com
    Mike "Mr Freeze" Seymour: @mikeseymour, www.fxguide.com, + @mikeseymour
    Special thanks to Matt Wallin for the editing and production of the show, with help from Jim Shen.
    0 Comments ·0 Shares ·179 Views
  • Adeles World Record LED Concert Experience
    www.fxguide.com
    Adele's recent concert residency, Adele in Munich, wasn't just a live performance; it was a groundbreaking display of technology and design. Held at the custom-built Adele Arena at Munich Messe, this ten-date residency captivated audiences with both the music and an unprecedented visual experience. We spoke to Emily Malone, Head of Live Events, and Peter Kirkup, Innovation Director, at Disguise about how it was done. Malone and Kirkup explained how their respective teams collaborated closely with Adele's creative directors to ensure a seamless blend of visuals, music, and live performance.

    Malone explained the process: "The aim was not to go out and set a world record; it was to build an incredible experience that allowed Adele's fans to experience the concert in quite a unique way. We wanted to make the visuals feel as intimate and immersive as Adele's voice." To achieve this, the team used a combination of custom-engineered hardware and Disguise's proprietary software, ensuring the visuals felt like an extension of Adele's performance rather than a distraction from it.

    The Adele Arena wasn't your typical concert venue. Purpose-built for Adele's residency, the arena included a massive outdoor stage setup designed to accommodate one of the world's largest LED media walls. The towering display, which dominated the arena's backdrop, set a new benchmark for outdoor live visuals, allowing Adele's artistry to be amplified on a scale rarely seen in live music. The Munich residency played host to more than 730,000 fans from all over the world, reportedly the highest turnout for any concert residency outside Las Vegas. "We are proud to have played an essential role in making these concerts such an immersive, personal and unforgettable experience for Adele's fans," says Malone.

    Thanks to Disguise, Adele played to the crowd with a curved LED wall spanning 244 metres, approximately the length of two American football fields. The LED installation used 4,625 square metres of ROE Carbon 5 Mark II (CB5 MKII) panels in both concave and convex configurations, and as a result it earned the new Guinness World Record title for the Largest Continuous Outdoor LED Screen. The lightweight and durable design of the CB5 MKII made the installation possible, while its 6000-nit brightness and efficient heat dissipation ensured brilliant, vibrant visuals throughout the outdoor performances.

    With over 20 years of experience powering live productions, Disguise technology has been behind an enormous variety of outdoor performances and concerts. For Adele's Munich residency, Disguise's team implemented advanced weatherproofing measures and redundant power systems to ensure reliability. Using Disguise's real-time rendering technology, the team was able to adapt and tweak visuals instantly, even during Adele's live performances, ensuring a truly immersive experience for the audience.

    Adele in Munich took place over 10 nights in a bespoke, 80,000-capacity stadium, and this major event called for an epic stage production. Having supported Adele's live shows before, Disguise helped create, sync, and display visuals on a 4,160-square-metre LED wall assembled to look like a strip of folding film. Kirkup was part of the early consultation for the project, especially regarding its feasibility and the deliverability of the original idea for the vast LED screens.
"There was a lot of discussion about pixel pitch and fidelity, especially as there was an additional smaller screen right behind the central area where Adele would stand. The question was raised whether this should be the same LED product as the vast main screen or something denser; in the end, they landed on using the same LEDs for the best contiguous audience experience," he explained.

The Munich residency was unique; there was no template for the team, but their technology scaled to the task. "The actual implementation went incredibly smoothly," explains Malone. "There was so much pre-production; every detail was thought about so much by all the collaborators on the project." As there was so little time to get the stage and LEDs built on site, it was all extensively pre-tested before the final shipping. "It would be so hard to fault-find on location. I mean, it took me 15 minutes just to walk to the other end of the LED wall, and lord forbid if you forgot your radio or that one cable and you had to walk back for anything!"

The two-hour concerts generated 530 million euros for the city of Munich across the ten shows, with each night playing to a stadium capacity of 80,000 fans. 8 x Disguise GX3 servers were used to drive the LED wall, and 18 x Disguise SDI VFC cards were required. There was a total pixel count of 37,425,856 being driven, split over 3 actors:
Actor 1: Left, 7748 x 1560
Actor 2: Centre, 2912 x 936 + scrolls and infill, 5720 x 1568 + lift, 3744 x 416
Actor 3: Right, 7748 x 1560

Disguise's Designer software was used to preview visuals before going on stage and to control them during the show, which was sequenced to timecode. Given the nature of the live event, there was a main and backup system for full 1:1 redundancy. The footage of Adele singing was shot in 4K with Grass Valley cameras. "With a live performance, there is a degree of unpredictability," says Malone. There was a tight set list of songs which did not change from night to night, all triggered by timecode, but structures were built in so that if Adele wanted to speak to the audience or do something special on any night, they could get her close-up face on screen very quickly. Additionally, there was a major requirement to be able to take over the screens at a moment's notice for safety messages should something completely unexpected happen. In reality, the screens serve many functions: they are there so the audience can see the artist they came for, but they are also there for safety, for the venue, and for the suppliers.

This was the first time Adele had played mainland Europe since 2016, and the 36-year-old London singer signed off her final Munich show warning fans, "I will not see you for a long time." Adele, who last released the album 30 in 2021, is set to conclude her shows at The Colosseum at Caesars Palace this month and is not expected to tour again soon. Both the Vegas and Munich residencies give the singer a high level of creative and logistical control compared with normal live touring, a luxury only available to entertainment's biggest stars. Such residencies also allow for investment in bespoke technical installations; the Munich LED stage would simply not be viable to take on a world tour, both due to its size and because it was crafted explicitly for the German location. Disguise is at the heart of this new era of visual outdoor experiences, where one powerful integrated system of software, hardware, and services can help create the next dimension of real-time concerts.
They have partnered with some of the biggest entertainment brands and companies in the world, such as Disney, Snapchat, Netflix, ESPN, U2 at the Sphere, the Burj Khalifa, and Beyoncé. Thanks to the massive technical team, Adele in Munich was, for Adele's fans, more than a concert: it was an immersive experience, seamlessly blending state-of-the-art visuals with world-class music.
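As a quick sanity check on those numbers, the three actor resolutions quoted above do account exactly for the stated total pixel count:

```python
# Surfaces driven by each Disguise actor, as quoted above (width x height in pixels).
actor_1 = 7748 * 1560                                # left section
actor_2 = 2912 * 936 + 5720 * 1568 + 3744 * 416      # centre + scrolls and infill + lift
actor_3 = 7748 * 1560                                # right section

total = actor_1 + actor_2 + actor_3
print(total)  # 37425856, matching the quoted 37,425,856 pixels
```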
    0 Comments ·0 Shares ·178 Views
  • Slow Horses
    www.fxguide.com
    Slow Horses is an Emmy-nominated, funny espionage drama that follows a team of British intelligence agents who serve in a dumping-ground department of MI5, nicknamed the Slow Horses (from Slough House), due to their career-ending mistakes. The team is led by the brilliant but cantankerous and notorious Jackson Lamb (Academy Award winner Gary Oldman). In Season 4, the team navigates the espionage world's smoke and mirrors to defend River Cartwright's (Jack Lowden) father from sinister forces. Season 4 of Slow Horses premiered on September 4, 2024, on Apple TV+. It is also subtitled Spook Street, after the fourth book of the same name. Union VFX handled the majority of the visual effects, and the VFX Supervisor was Tim Barter (Poor Things).

    In the new season, the key VFX sequences beyond clean-up and stunt work included the London explosion, the Paris château fire, the explosion in the canal, and the destruction of the West Acres shopping mall. Union had done some work on season 3, and they were happy to take an even more prominent role in the new season. In season 4, they had approximately 190 shots and 11 assets. For season 3, they worked on approximately 200 shots and 20 assets but were not the lead VFX house.
    https://www.fxguide.com/wp-content/uploads/2024/10/Slow-Horses--Season-4Trailer.mp4
    Union VFX is an independent, BAFTA-winning visual effects studio founded in 2008, based in Soho, with a sister company in Montréal. Union has established a strong reputation for seamless invisible effects on a wide range of projects, building strong creative relationships with interesting directors including Danny Boyle, Susanne Bier, Martin McDonagh, Marjane Satrapi, Sam Mendes, Fernando Meirelles, and Yorgos Lanthimos.

    The Union VFX team used a mixture of practical effects, digital compositing, and digital doubles/face replacement to achieve the desired VFX for the show. Interestingly, at one point a hand grenade had to be tossed into a canal after it was placed in River's hoodie. The water explosion was done fully digitally, not only for the normal VFX reasons one might imagine, such as an explosion going off near the hero actors, but also because the water in the canal isn't actually fit to be splashed on anyone; it just isn't clean water. Similarly, the shopping mall at Westacres, which was meant to have 214 retail stores, 32 restaurants, and 8 cinema screens, was not actually blown up; in fact, the location wasn't even in London, and the background was all done with digital matte paintings to look like a real Westfield Shopping Centre, hence the fictional equivalent's similar name. The season opens with a suicide bomb, carried out by Robert Winters, going off at the Westacres shopping mall in London. After Winters publishes a video confessing to the attack, a police force breaks into his flat, but three of the MI5 Dogs are killed by a booby trap.
This explosion was genuinely shot on a backlot and then integrated into the plate photography of a block of flats. The Park, which is the hub of MI5 operations, has been seen since season one, but each season it is slightly different. This led Tim Barter to analyse all the previous seasons' work to try and build a conceptual model of what The Park building would actually look like, so that season four could have continuity across various interior, exterior, and complex car park shots.

"In season four there was a requirement to do probably one night and five daytime aerial views of it from different angles," explains Tim Barter. "We got to create whole sections of the Park that have never been created before, so I was there going through the previous seasons, looking at all the peripheral live action shots that were all shot in very different actual locations. It's like, there is this section where River comes out of the underground car park, and then he gets into this little door over here, which then goes through here on the side of this. And all the time, I'm trying to retroactively create that (digital) architecture of the Park, to be faithful to the previous seasons."

After Harkness breaks into Molly's apartment and forces her to give him her security credentials, he tracks River's convoy of Dogs and sends the assassin Patrice to intercept it. After slamming a dump truck into the convoy, Patrice kills four Dogs and kidnaps River. This extensive sequence was shot in London at night in the financial district, but that part of London is still an area where a lot of people live. "So there is no opportunity to have the sound of guns or blank muzzle flashes," Tim explains. "It was all added in post." The dump truck that smashes into the SUV was also not able to be done in London. It was filmed at a separate location, and then the aftermath was recreated by the art department in the real financial district for filming. The dump truck actually ramming the car was shot with green screens and black screens and lots of camera footage. "We actually used much less than we shot, but we did use the footage to make up a series of plates so we could composite it successfully over a digital background."

(Image: River Cartwright (Jack Lowden), a British MI5 agent assigned to Slough House.)

For the scene where Jackson Lamb hits the assassin with a taxi, they started with an interior garage as a clean plate, then had a stuntman on wires tumbling over it from various angles, and then married these together. "Actually, we ended up marrying three live action plates. We had the garage, the green screen plate of the stunt actor, but then we also got clean plates of the garage interior as we were removing and replacing certain things in the garage," Tim comments. "We also had to do some instances of face replacement for that."

Another instance of face replacement was the half-brother of River, who gets killed at the beginning of the series in River's father's bathroom. Originally, this was a dummy in the bathtub, but it looked a bit too obviously fake, so an actor was cast and the team re-projected the actor's face onto the dummy in Nuke. Of course, there was also a lot of blood and gore in the final shot.

(Image: Hugo Weaving as Frank Harkness, an American mercenary and former CIA agent.)

The series was shot at 4K resolution, with the exception of some drone footage which had to be stabilised and used for visual effects work; in some instances the drone footage was 6K.
This allowed extra room to tighten up the shots, stabilise them, and match them to any practical SFX or explosions.
    0 Comments ·0 Shares ·192 Views
  • Wonder Dynamics up the game with AI Wonder Animation
    www.fxguide.com
    One popular application of early GenAI was using style transfer to create a cartoon version of a photograph of a person. Snapchat also enjoyed success with Pixar-style filters that made a person seem to be an animated character, but these could be considered, in effect, image processing. Runway recently showed Act-One, a new tool for artists to generate expressive, controllable character performances using Gen-3 Alpha; Act-One can create character animation using video and voice performances as inputs to generative models, turning expressive live-action input into animated content. Wonder Dynamics has escalated this to a new and interesting level with Wonder Animation, but outputting 3D rather than 2D content.

    Wonder Dynamics, an Autodesk company, has announced the beta launch of Wonder Studio's newest feature: Wonder Animation, which is powered by a first-of-its-kind video-to-3D scene technology that enables artists to shoot a scene with any camera in any location and turn the sequence into an animated scene with CG characters in a 3D environment. (Images: Wonder Animation; the original Wonder Studio video-to-3D character.)

    In May, Autodesk announced that Wonder Dynamics, the makers of Wonder Studio, would become part of Autodesk. Wonder Studio first broke through as a browser-based platform that allowed people to use AI to replace a person in a clip with a computer-generated character. It effortlessly allowed users to replace a live-action actor with a mocap-driven version of the digital character. The results and effectiveness of the original multi-tool machine learning / AI approach were immediately apparent. From shading and lighting to animation and ease of use, Wonder Studio was highly successful and had an impact almost immediately.

    The most innovative part of the new Wonder Animation video-to-3D scene technology is its ability to assist artists while they film and edit sequences with multiple cuts and various shots (wide, medium, close-ups). (Image: Maya export.) The technology then uses AI to reconstruct the scene in a 3D space and matches the position and movement of each camera's relationship to the characters and environment. This essentially creates a virtual representation of an artist's live-action scene, containing all camera setups and character body and face animation in one 3D scene. Note that it does not convert the video background environment into specific 3D objects, but it allows the 3D artist to place the Wonder Dynamics 3D characters into a 3D environment where before they could only be placed back into the original video background.

    This is entirely different from a style transfer or an image processing approach. The output from Wonder Animation's video-to-3D scene technology is a fully editable 3D animation, containing 3D animation, character, environment, lighting, and camera tracking data that can be loaded into the user's preferred software, such as Maya, Blender, or Unreal. Even though there have been tremendous advancements in AI, there is a current misconception that AI is a one-click solution, but that's not the case. The launch of Wonder Animation underscores the team's focus on bringing the artist one step closer to producing fully animated films while ensuring they retain creative control.
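The exact file format of a Wonder Animation export isn't specified here, so treat the following as an assumption: if the scene arrives as, say, an FBX with baked camera and character animation, pulling it into Blender for further editing is only a few lines in the Python API. The file path and camera name below are invented for illustration.

```python
import bpy

# Hypothetical Wonder Animation export: tracked cameras plus character
# animation baked into a single FBX (format assumed for this sketch).
bpy.ops.import_scene.fbx(filepath="/shots/sc010/wonder_scene_export.fbx")

# Make the imported tracked camera the active scene camera, assuming its
# name survives the import unchanged.
scene = bpy.context.scene
imported_cam = bpy.data.objects.get("shot_cam_010")
if imported_cam is not None:
    scene.camera = imported_cam
```

The point of an export like this is exactly the editability described above: once the data is in a DCC, the animation, lighting, and camera are ordinary scene elements rather than baked pixels.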
Unlike the black-box approach of most current generative AI tools on the market, the Wonder Dynamics tools are designed to allow artists to actively shape and edit their vision instead of just relying on an automated output. Wonder Animation's beta launch is now available to all Wonder Studio users. The team aims to bring artists closer to producing fully animated films. "We formed Wonder Dynamics and developed Wonder Studio (our cloud-based 3D animation and VFX solution) out of our passion for storytelling, coupled with our commitment to make VFX work accessible to more creators and filmmakers," comments co-founder Nikola Todorovic. "It's been five months since we joined Autodesk, and the time spent has only reinforced that the foundational Wonder Dynamics vision aligns perfectly with Autodesk's longstanding commitment to advancing the Media & Entertainment industry through innovation." Here is the official release video:
    0 Comments ·0 Shares ·176 Views
  • Cinesite for a more perfect (The) Union
    www.fxguide.com
    Netflix's The Union is the story of Mike (Mark Wahlberg), a down-to-earth construction worker who is thrust into the world of superspies and secret agents when his high school sweetheart, Roxanne (Halle Berry), recruits him for a high-stakes US intelligence mission. Mike undergoes vigorous training, normally lasting six months but condensed to under two weeks. He is given another identity, undergoes psychological tests, and is trained in hand-to-hand combat and sharpshooting before being sent on a complex mission with the highly skilled Roxanne. As one might expect, this soon goes south, and the team needs to save the Union, save the day, and save themselves.

    Julian Farino directed the film, Alan Stewart did the cinematography, and Max Dennison was Cinesite's VFX supervisor. The film was shot on the Sony Venice using an AXS-R7 at 4K and Panavision T Series anamorphic lenses.

    Cinesite's VFX reel:
    https://www.fxguide.com/wp-content/uploads/2024/10/Cinesite-The-Union-VFX-Breakdown-Reel.mp4

    During training, Mike learns to trust Roxanne's instructions in a dangerous running-blind exercise. The precipitous drop makes the sequence really interesting, and this was achieved with digital set removal so that the actors were not in mortal danger. Cinesite's clean compositing and colour correction allowed for the many car chase and complex driving sequences in the film, including a motorcycle chase, hazardous driving around London, and a very complex three-car chase.

    The three-car chase with the BMW, Porsche, and Ford was shot in Croatia and primarily used pod drivers, that is, drivers sitting in a pod on top of the car doing the actual driving while the actors below simulated driving. "We had to remove the pods, replace all the backgrounds, put new roofs on the cars, replace the glass, and on some wider shots do face replacement when stunt drivers were actually driving the cars without pods," Max outlines. The team did not use AI or machine learning for the face replacements; rather, all the face replacements were based on cyber scans of the actual actors. This was influenced by the fact that in the vast majority of cases the team didn't have to animate the actors' faces, as the action sequences are cut so quickly and the faces are only partially visible through the cars' windows. For the motorbike chase, the motorbikes were driven on location by stunt people whose faces were replaced with those of the lead actors. "We had scans done of Mark and Halle's faces so that we could do face replacement digitally through the visors of the helmets," explains Max Dennison.

    "It is one of those films that liked to show London and all the sights of London, so we see Covent Garden, Seven Dials, Piccadilly Circus, a bit of a fun tourist trip," comments Max. Given the restrictions on being able to film on the streets of London, the initial plan was to shoot in an LED volume. Apparently, the filmmakers explored this but preferred instead to shoot green screen, and the results stack up very well. "When Mike comes out of the Savoy hotel and drives on the wrong side of the road, all those exterior environments were replaced by us from array photography," he adds. Cinesite has a strong reputation for high-end seamless compositing and invisible visual effects work, but in The Union the script allowed for some big action VFX sequences which are both exciting and great fun.
For the opening sequence of the suitcase exchange that goes wrong, the team was required to produce consistent volumetric effects, as at the beginning of that sequence it is raining and by the end it is not. Given that the shots were not filmed in order, nor with the correct weather, the team had about 20 VFX shots to transition from mild rain to clearing skies through complex camera moves and environments around London.

In addition to the more obvious big VFX work, there was wire removal, set extension, and cleanup work required for the action sequences. Although shot in and around the correct, crowded, actual locations, there was still a need to use digital matte paintings and set extensions and to apply digital cleanup for many of the exteriors. For the dramatic fall through the glass windows, stunt actors fell using wires, and the team not only replaced the wires but also did all of the deep background and 3D environments around the fall sequence. In the end the team also built 3D glass windows, as it was much easier to navigate the wire removal when they had control of the shattering glass. This was coupled with making sure that the actors' clothes were not showing the harnesses or being pulled by the wires in ways that would give away how the shot was done.

The film shot in New Jersey (USA), London (England), Slovenia, Croatia, and Italy (street scenes). Principal photography was at Shepperton Studios, as the film was primarily London-based. A lot of the stunt work was done in Croatia, at a studio set built for the film, especially for the rooftop chase. Unlike some productions, the filmmakers sensibly used blue and green screen where appropriate, allowing the film to maximise the budget on screen and produce elaborate, high-octane chase and action sequences. In total, Cinesite did around 400 shots. Much of this was Cinesite's trademark invisible VFX work, based on clever compositing and very good eye-matching of environments, lighting, and camera focus/DOF.

Given where the film finishes, perhaps this is the start of a Union franchise? Films such as The Union are fun, engaging action films that have done very well for Netflix, often scoring large audiences even when not as serious or pretentious as the Oscar-nominated type of films that tend to gain the most publicity. And in the end, this is great work for the VFX artists and post-production crews.
    0 Comments ·0 Shares ·188 Views
  • fxpodcast #377: Virtually Rome For Those About To Die
    www.fxguide.com
    For Those About To Die is an epic historical drama series directed by Roland Emmerich. The director is known as the master of disaster; this was his first move into series television, being very well known for his sci-fi epics such as Independence Day, Godzilla, The Day After Tomorrow, White House Down, and Moonfall. (Photo: director Roland Emmerich on the LED volume. Photo by Reiner Bajo/Peacock.)

    Pete Travers was the VFX supervisor on For Those About To Die. The team used extensive LED virtual production, with James Franklin as the virtual production supervisor. We sat down with Pete Travers and James Franklin to discuss the cutting-edge virtual production techniques that played a crucial role in the series' completion. (Photo: Those About To Die, Episode 101. Photo by Reiner Bajo/Peacock.)

    The team worked closely with DNEG, as we discuss in this week's fxpodcast. We discuss how virtual production techniques enhanced the efficiency and speed of the 1,800 scenes which were done with virtual production, how this meant the production only needed 800 traditional VFX shots to bring ancient Rome to life, and how it enabled the 80,000-seat Colosseum to be filled with just a few people.

    The LED volume stages were at Cinecittà Studios in Italy, with a revolving stage and the main backlot right outside the stage door. As you will hear in the fxpodcast, there were two LED volumes. The larger stage had a rotating floor, which allowed different angles of the same physical set (inside the volume) to be filmed; as the floor rotated, so could the images on the LED walls. (Images: a frame from Episode 101 in camera, and the actual LED set for that setup. Photo by Reiner Bajo/Peacock.)

    We discuss in the podcast how the animals responded to the illusion of space that an LED stage provides, how they managed scene changes so as not to upset the horses, and how one incident had the crew running down the street outside the stage chasing runaway animals! (Images: the shot in camera, and behind the scenes of the same shot.)

    The team shot primarily on the Sony Venice 2. The director is known for big wide-angle lens shots, but trying to film an LED stage on a 14mm lens can create serious issues. (Images: the final shot from Episode 108, and crew on set in front of the LED wall of the Colosseum. Photo by Reiner Bajo/Peacock.) The team also produced fully digital 3D VFX scenes.
    0 Comments ·0 Shares ·229 Views
  • Q&A with DNEG on the environment work in Time Bandits
    www.fxguide.com
    Jelmer Boskma was the VFX Supervisor at DNEG on Time Bandits (Apple TV+). The show is a modern twist on Terry Gilliam's classic 1981 film. The series, about a ragtag group of thieves moving through time with their newest recruit, an eleven-year-old history nerd, was created by Jemaine Clement, Iain Morris, and Taika Waititi. It stars Lisa Kudrow as Penelope, Kal-El Tuck as Kevin, Tadhg Murphy as Alto, Roger Jean Nsengiyumva as Widgit, Rune Temte as Bittelig, Charlyne Yi as Judy, Rachel House as Fianna, and Kiera Thompson as Saffron. In addition to the great environment work the company did, DNEG 360, a division of DNEG in partnership with Dimension Studio, delivered virtual production services for Time Bandits.

    FXGUIDE: When did you start on the project?
    Jelmer Boskma: Post-production was already underway when I joined the project in March 2023, initially to aid with the overall creative direction for the sequences awarded to DNEG.

    FXGUIDE: How many shots did you do over the series?
    Jelmer Boskma: We delivered 1,094 shots, featured in 42 sequences throughout all 10 episodes. Our work primarily involved creating environments such as the Fortress of Darkness, Sky Citadel, Desert, and Mayan City. We also handled sequences featuring the Supreme Being's floating head, Pure Evil's fountain and diorama effects, as well as Kevin's bedroom escape and a number of smaller sequences and one-offs peppered throughout the season.

    FXGUIDE: And how much did the art department map this out, and how much were the locations down to your team to work out?
    Jelmer Boskma: We had a solid foundation from both the art department and a group of freelance artists working directly for the VFX department, providing us with detailed concept illustrations. The design language and palette of the Sky Citadel especially were resolved to a large extent. For us it was a matter of translating the essence of that key illustration into a three-dimensional space and designing several interesting establishing shots. Additional design exploration was only required on a finishing level, depicting the final form of the many structures within the citadel and the surface qualities of the materials from which the structures were made. The tone of the Fortress of Darkness environment required a little bit more exploration. A handful of concept paintings captured the scale, proportions, and menacing qualities of the architecture, but were illustrated in a slightly looser fashion. We focused on distilling the essence of each of these concepts into one coherent environment. Besides the concept paintings, we did receive reference in the form of a practical miniature model that was initially planned to be used in shot, but due to the aggressive shooting schedule it could not be finished to the level where it would have worked convincingly. Nonetheless, it served as a key piece of reference for us to help capture the intent and mood of the fortress. Other environments, like the Mayan village, the besieged Caffa fortress, and Mansa Musa's desert location, were designed fully by our team in post-production.

    FXGUIDE: The Mayan village had a lot of greens and jungle; were there many practical studio sets?
    Jelmer Boskma: We had a partial set with some foliage for the scenes taking place on ground level. The establishing shots of the city, palace, and temple, as well as the surrounding jungle and chasm, were completely CG. We built as much as we could with 3D geometry to ensure consistency in our lighting, atmospheric perspective, and dynamism in our shot design.
The final details for the buildings, as well as the background skies, were painted and projected back on top of that 3D base. To enhance realism, the trees and other foliage were rendered as 3D assets, allowing us to simulate movement in the wind.

FXGUIDE: Were the actors filmed on green/blue screen?
Jelmer Boskma: In many cases they were. For the sequences within Mansa Musa's desert camp and the Neanderthal settlement, actors were shot against DNEG 360's LED virtual production screens, for which we provided real-time rendered content early on in production. To ensure that the final shots were as polished and immersive as possible, we revisited these virtual production backdrops in Unreal Engine back at DNEG in post. This additional work involved enhancing the textural detail within the environments and adding subtle depth cues to help sell the scale of the settings. Access to both the original Unreal scenes and the camera data was invaluable, allowing us to work directly with the original files and output updated real-time renders for compositing. While it required careful extraction of actors from the background footage shot on the day, this hybrid approach of virtual production and refinement in post ultimately led to a set of pretty convincing, completely synthetic environments.

FXGUIDE: Could you outline what the team did for the Fortress of Darkness?
Jelmer Boskma: The Fortress of Darkness was a complex environment that required extensive 3D modelling and integration. We approached it as a multi-layered project, given its visibility from multiple angles throughout the series. The fortress included both wide establishing shots and detailed close-ups, particularly in the scenes during the season's finale. For the exterior, we developed a highly detailed 3D model to capture the grandeur and foreboding nature of the fortress. This included creating intricate Gothic architectural elements and adding a decay effect to reflect the corrosive, hostile atmosphere surrounding the structure. The rivers of lava, which defy gravity and flow towards the throne room, were art-directed to add a dynamic and sinister element to the environment and reinforce the power Pure Evil commands over his realm. Inside, we extended the practical set, designed by Production Designer Ra Vincent, to build out the throne room. This space features a dramatic mix of sharp obsidian and rough rock textures, which we expanded with a 3D background of Gothic ruins, steep cliffs, and towering stalactites. To ensure consistency and realism, we rendered these elements in 3D rather than relying on 2.5D matte paintings, allowing for the dynamic lighting effects like fireworks and lightning seen in episode 10.

FXGUIDE: What was the project format, 4K or 2K (HDR?), and what resolution was the project primarily shot at?
Jelmer Boskma: The project was delivered in 4K HDR (3840 x 2160 UHD), which was also the native resolution at which the plates were photographed. To manage render times effectively and streamline our workflow, we primarily worked at half resolution for the majority of the project. This allowed us to focus on achieving the desired creative look without being slowed down by full-resolution rendering.
Once the compositing was about 80% complete and creatively aligned with the vision of the filmmakers, we would switch to full-resolution rendering for the final stages. The HDR component of the final delivery was a new challenge for many of us and required a significant amount of additional scrutiny during our tech check process. HDR is incredibly unforgiving, as it reveals any and all information held within each pixel on screen, whether it's within the brightest overexposed areas or hiding inside the deepest blacks of the frame.

FXGUIDE: Which renderer do you use for environment work now?
Jelmer Boskma: For Time Bandits we were still working within our legacy pipeline, rendering primarily inside of Clarisse. We have since switched over to a Houdini-centric pipeline where most of our rendering is done through RenderMan.

FXGUIDE: How completely did you have to make the sets? For example, for the Sky Citadel did you have a clear idea of the shooting angles needed and the composition of the shots, or did you need to build the environments without full knowledge of how they would be shot?
Jelmer Boskma: I would say fairly complete, but all within reason. We designed the establishing shots as we were translating the concept illustrations into rough 3D layouts. Once we got a decent idea of the dimensions and scale of each environment, we would pitch a couple of shot ideas that we found interesting to feature the environment in. It would not have made sense to build these environments to the molecular level, as the schedule would not have allowed for that. In order to be as economical as possible, we set clear visual goals and ensured that we focused our time only on what we were actually going to see on screen. There's nuance there, of course, as we didn't want to paint ourselves into a corner, but with the demanding overall scope that Time Bandits had, and with so many full CG environment builds to be featured, DNEG's producer Viktorija Ogureckaja and I had to make sure our time was well balanced.

FXGUIDE: Were there any particular challenges to the environment work?
Jelmer Boskma: The most significant challenge was working without any real locations to anchor our environments. For environments like the Fortress of Darkness, Sky Citadel, Mayan City, and Caffa, we were dealing with almost entirely synthetic CG builds. For the latter two, we incorporated live-action foreground elements with our actors, but the core environments were fully digital. Creating a sense of believability in completely CG environments requires considerable effort. Unlike practical locations, which naturally have imperfections and variations, CG environments are inherently precise and clean, which can make them feel less grounded in reality. To counteract this, we needed to introduce significant detail, texture, and imperfections to make the environments look more photorealistic. Additionally, our goal was not just to create believable environments but also to ensure they were visually compelling. The production of these larger, establishing shots consumed a significant portion of our schedule, requiring careful attention to both the technical and aesthetic aspects of the work. The contributions made by all of the artists involved on this show were vital in achieving both these goals. Their creativity and attention to detail were crucial in transforming initial concepts into visually striking final shots.
Reflecting on the project, it's clear that the quality of these complex environments was achieved through the skill and dedication of our artists. Their efforts not only fulfilled the project's requirements but also greatly enhanced the visual depth and supported the storytelling, creating immersive settings that, I hope, have managed to captivate and engage the audience.
    0 Comments ·0 Shares ·203 Views
More Stories