fxguide | fxphd
fxphd.com is the leader in pro online training for vfx, motion graphics, and production. Turn to fxguide for vfx news and fxguidetv & audio podcasts.
  • 2 people like this
  • 22 Posts
  • 2 Photos
  • 0 Videos
  • 0 Reviews
  • News
Recent Updates
  • WWW.FXGUIDE.COM
    VFXShow 290: Wicked
    In this episode, we dive into the extraordinary visual effects of Universal Pictures' Wicked, directed by Jon M. Chu. As a prequel to The Wizard of Oz, the film not only tells the origin story of Elphaba and Glinda, two witches whose lives take drastically different paths, but also transports audiences to a vividly reimagined Oz.
Join us as we discuss the VFX that brings Oz's animals, spellbinding magic, and the stunning world of Shiz University to life. We break down the challenges of creating the film's effects for pivotal scenes, such as the gravity-defying flight sequences and the Wizard's grand, mechanical illusions. Plus, we discuss how the visual effects team balanced fantastical elements with the emotional core of the story, ensuring the magic of Oz feels both epic and personal.
In this VFXShow, we discuss and review the film, but later this week, we will also post our interview with Wicked's VFX supervisor, Pablo Helman.
Don't forget to subscribe to both the VFXShow and the fxpodcast to get both of our most popular podcasts.
This week in Oz:
Matt "Lion" Wallin * @mattwallin www.mattwallin.com. Follow Matt on Mastodon: @[emailprotected]
Jason "Scarecrow" Diamond @jasondiamond www.thediamondbros.com
Mike "Tin Man" Seymour @mikeseymour. www.fxguide.com. + @mikeseymour
Special thanks to Matt Wallin for the editing & production of the show with help from Jim Shen.
  • WWW.FXGUIDE.COM
    The issue of training data: with Grant Farhall, Getty Images
    As Chief Product Officer, Grant Farhall is responsible for Getty Images' overall product strategy and vision. We sat down with Grant to discuss the issue of training data, rights, and Getty Images' strong approach.
Training data
Artificial Intelligence, specifically the subset of generative AI, has captured the imagination and attention of all aspects of media and entertainment. Recent rapid advances seem to humanize AI in a way that has caught the imagination of so many people. It has been born from the nexus of new machine learning approaches, foundational models, and, in particular, advanced GPU-accelerated computing, all combined with impressive advances in neural networks and data science.
One aspect of generative AI that is often too quickly passed over is the nature and quality of training data. It can sometimes be wrongly assumed that in every instance, more data is good: any data, just more of it. Actually, there is a real skill in curating training data.
Owning your own
Generative AI is not limited to just large corporations or research labs. It is possible to build on a foundation model and customize it for your application without having your own massive AI factory or data centre.
It is also possible to create a generative AI model that works only on your own training data. Getty Images does exactly this with its iStock, Getty Creative Images, and API stock libraries. These models are trained on only the high-quality images approved for use, using NVIDIA's Edify NIM built on Picasso.
"NVIDIA developed the underlying architecture. Getty's model is not a fine-tuned version of a foundational model. It is only trained on our content, so it is a foundational model in and of itself." Grant Farhall, Getty Images
Getty produces a mix of women when prompted with "Woman CEO, closeup."
Bias
People sometimes speak of biases in training data, and this is a real issue, but data scientists also know that carefully curating training data is an important skill. This is not an issue of manipulating data but rather providing the right balance in the training data to produce the most accurate results. Part of the curation process is getting enough data of the types needed, often with metadata that helps the deep learning algorithms, and in particular understanding what data already exists in the world and which qualities of that data can be used to make the most effective generative AI tool. At first glance, one might assume you just want the greatest amount of ground truth or perfect examples possible, but that is not how things actually work in practice.
It is also key that the output responses to prompts provide a fair and equitable mix, especially when dealing with people. Stereotypes can be reinforced without attention to output bias.
Provenance
It is important to know if the data used to build the generative AI model was licensed and approved for this use. Many early academic research efforts scraped the Internet for data since their work was non-commercial and experimental. We have since come a long way in understanding, respecting, and protecting the rights of artists and people in general, and we have to protect their work from being used without permission. As you can hear in this episode of the podcast, companies such as Getty Images pride themselves on having clean and ethically sourced generative AI models that are free from compromise and artist exploitation.
In fact, they offer not only compensation for artists whose work is used as training data but also guarantees and, in some cases, indemnities against any possible future issues over artists' rights.
"The question that is often asked is, 'Can I use these images from your AI generator in a commercial way, in a commercial setting?' Most services will say yes," says Grant Farhall of Getty Images. "The better question is, can I use these images commercially, and what level of legal protection are you offering me if I do?" As Getty knows the provenance of every image used to train their model, their corporate customers enjoy fully uncapped legal indemnification.
Furthermore, misuse is impossible if the content is not in the training model. Farhall points out, "There are no pictures of Taylor Swift, Travis Kelce, athletes, musicians, logos, brands, or any similar stuff. None of that's included in the training set, so it can't be inappropriately generated."
AI Generator image
Rights & Copyright
For centuries, artists have consciously or subconsciously drawn inspiration from one another to influence their work. However, with the rise of generative AI, it is crucial to respect the rights associated with the use of creative materials.
A common issue and concern is copyright, and this is an important area, but it is one open to further clarification and interpretation as governments around the world respond to this new technology. As it stands, only a person can own copyright; it is not possible for a non-human to do so. It is unclear how open the law is, worldwide, to training on material without explicit permission, as generative AI models do not store a copy of the original.
However, it is illegal in most contexts to pass off any material in a way that misrepresents its origin, such as implying or stating that the work was created by someone who did not create it. It is also illegal to use the likeness of someone to sell or promote something without their permission, regardless of how that image was created. The laws in each country or territory need to be clarified, but, as a rule of thumb, generative AI should be restricted by an extension of existing laws such as defamation, exploitation, and privacy rights. These laws can come into play if the AI-generated content is harmful or infringing on someone's rights.
In addition, there are ongoing discussions about the need for new laws or regulations specifically addressing the unique issues raised by AI, such as the question of who can be held responsible for violations using AI-generated content. It is important to note that just because a generative piece of art or music is stated as being approved for commercial use, that does not imply that the training data used to build the model was licensed and that all contributing artists were respected appropriately.
Generative AI
This fxpodcast is not sponsored, but is based on research done for the new Field Guide to Generative AI. fxguide's Mike Seymour was commissioned by NVIDIA to unpack the impact of generative AI on the media and entertainment industries, offering practical applications, ethical considerations, and a roadmap for the future.
The Field Guide is free and can be downloaded here: Field Guide to Generative AI. In M&E, generative AI has proven itself a powerful tool for boosting productivity and creative exploration. But it is not a magic button that does everything. It's a companion, not a replacement. AI lacks the empathy, cultural intuition, and nuanced understanding of a story's uniqueness that only humans bring to the table.
But when generative AI is paired with VFX artists and TDs, it can accelerate pipelines and unlock new creative opportunities.
  • WWW.FXGUIDE.COM
    Hands-on with the URSA Cine 12K LF + URSA Cine Immersive
    Ben Allan ACS CSI test drives and reviews the URSA Cine 12K LF, with Oscar winner Bruce Beresford and also with our own fxcrew.
Blackmagic Design refers to their much-anticipated high-end camera as their first "cine camera" design. This camera is also the basis for the new Blackmagic URSA Cine Immersive, which the company announced today is now available to pre-order from Blackmagic Design.
URSA Cine Immersive camera
The new Blackmagic URSA Cine Immersive camera will be the world's first commercial camera system designed to capture Apple Immersive Video for Apple Vision Pro (AVP), with deliveries starting in early 2025. DaVinci Resolve Studio will also be updated to support editing Apple Immersive Video early next year, offering professional filmmakers a comprehensive workflow for producing Apple Immersive Video for Apple Vision Pro. Rumours have incorrectly claimed that Apple is moving away from the AVP; this is clearly not the case. The AVP's Immersive Video format is a remarkable 180-degree media format that leverages ultra-high-resolution immersive video and Spatial Audio to place viewers in the center of the action.
Blackmagic URSA Cine Immersive will feature a fixed, custom lens system pre-installed
The Blackmagic URSA Cine Immersive will feature a fixed, custom lens system pre-installed on the body, which is explicitly designed to capture Apple Immersive Video for AVP. The sensor can deliver 8160 x 7200 resolution per eye with pixel-level synchronisation and an impressive 16 stops of dynamic range. Cinematographers will be able to shoot 90fps 3D immersive cinema content to a single file. The custom lens system is designed for URSA Cine's large format image sensor, with extremely accurate positional data that's read and stored at time of manufacturing. This immersive lens projection data, which is calibrated and stored on device, then travels through post-production in the Blackmagic RAW file itself.
Hands-on with Ben Allan, ACS CSI
Ben tested one of the first URSA Cine 12K LF cameras.
URSA Cine 12K LF: Cinema or Cine?
BMD has used the terms "cinema" and "digital film camera" for most of the cameras they have made. This is a reflection of the fact that when they first started to produce cameras, they already had the prestige post-production software DaVinci Resolve in their stable, so a big part of the equation in starting to produce their own cameras was to fill the gap of an affordable, small camera which could produce images suitable for colour grading with the powerful tools already available in Resolve.
Their original Cinema Camera was introduced in 2012 with an unconventional design and, crucially, recorded very high-quality files in either ProRes or CinemaDNG RAW. It is probably hard to put this in the proper perspective now, but at the time, it sparked a little revolution in the industry, where high-end recording formats were tightly tied to the most expensive cameras. Only a few years earlier, RED had started a similar revolution at a time when Super 35mm-sized single sensors that could be used with existing cinema lenses were the province of the most expensive digital cameras from Sony and Panavision.
At the same time, RAW recording for moving pictures was essentially a pipe-dream. By showing that these things could be delivered to the market in a working camera system at somewhere near a tenth of the cost, RED catapulted the whole industry forward and forced the existing players to fast-track the new technologies into the families of high-end cameras we're all familiar with today, such as Sony's F-series, Panavision's RED-based DXLs, ARRI's ALEXAs and Canon's Cinema Line.
The other revolution which had also occurred recently was the advent of HD video recording in DSLRs with the introduction of the Canon 5D-II. While this suddenly gave people a low-cost way of recording cinematic images with shallow depth of field coming from a single large sensor, the 5D-II and the next few generations of video-capable DSLRs were limited by highly compressed 8-bit recording. The effects of this would sometimes only become apparent when the images were colour-graded, and DCT blocking from the compression and banding from the 8-bit recording became difficult or even impossible to remove.
BMD's choice to offer both 10-bit ProRes and 12-bit CinemaDNG RAW recording removed the quality bottleneck and allowed users with limited budgets, or needing a light and compact camera, to record in formats that met or exceeded the specifications of a 2K film scan, which was still the standard for cinema production at the time.
By releasing a camera that could match the file format quality of the high-end cameras, with low or no compression and high bit depth, at a tiny percentage of the cost of even the original RED, BMD showed that these features needn't be kept for the top-shelf cameras alone, sparking the other manufacturers to allow these features to trickle down into their more affordable cameras as well.
Since then, Blackmagic has evolved and expanded its range of cameras year after year and is now easily one of the most significant players in the professional motion image camera market.
From the Pocket Cinema Cameras at the entry level to the various incarnations of the URSA Mini Pro platform, all of these cameras delivered varying degrees of film-like dynamic range combined with recording formats that provided the basis for intensive colour grading and VFX work. Since the DSLR revolution, there has been an explosion in the options for rigging cameras for cine-style shooting, and all of these BMD cameras could be, and were, extensively rigged in this way.
For over a decade, BMD has been releasing cameras that record in cinema-friendly formats, optimised for high-end post-production requirements and routinely rigged for cinema-style shooting, so why call this new camera their first cine camera? I think this is their way of explaining succinctly that this is a camera designed from the ground up for film-production-style shooting.
The Cine Benchmark
When we think of the modern motion picture film camera, the benchmark, both in a practical sense and in the popular consciousness, is the Panavision Panaflex. With its big white film magazine sitting on top, it is the very essence of what people both inside and outside the industry feel a movie camera should look like.
But a huge part of the success of the Panaflex since its introduction in the 1970s is that the camera itself was designed and evolved as part of a cohesive ecosystem that was modular, flexible and, most importantly, reliable. In creating this, Panavision set expectations for crews and producers of what a professional camera system needed to be.
This philosophy has flowed through in varying ways to all of the high-end digital camera systems used today. Take the ARRI ALEXA 35, for example: a modular design that can be quickly and easily optimised for a wide range of shooting styles and requirements, with all the connections required for professional work, including multiple SDI outputs, power for accessories and wireless control.
In this context, it starts to become very clear what BMD have done with the URSA Cine platform; they have designed a system that is driven by this cinema camera philosophy rather than, say, their DSLR-styled Pocket cameras or the TV-inspired URSA Mini Pro range. Different design philosophies for different purposes.
The URSA Cine 12K LF is the first camera to be released from the URSA Cine line, ahead of the URSA Cine 17K with its 65mm film-sized image sensor and the URSA Cine Immersive stereoscopic camera being developed with Apple and optimised for capturing films in the 180-degree Immersive format for the Apple Vision Pro. While these other two cameras are much more niche, special-purpose tools, the Cine LF is very much a mainstream production camera system.
An Operator's Camera
When it comes to actually using the Cine LF, it becomes very clear what a mainstream system it is. It is a very operator-friendly camera that is well thought out. Although it is a little bigger and heavier than the URSA Mini Pro cameras, it is still significantly smaller and lighter than a full-sized ALEXA, which is itself much smaller and lighter than something like a fully loaded Panaflex.
The Cine LF is packaged in one of two kits, both well kitted up and pretty much ready to shoot, one with and one without the EVF. I suspect the EVF kit will be by far the more popular, as the viewfinder is as good as any I've ever used. It is sharp and clear, the contrast is exceptionally good, the colour rendition is incredibly accurate, and all in a very compact unit. The EVF connects to the camera via a single, locking USB-C cable which carries power, picture and control. Not only is this convenient, it allows the EVF to be thoroughly controlled from the camera's touchscreen menu. This is dramatically easier and quicker than the URSA Mini Pro's EVF menu system. In addition to the EVF function buttons, there is even a record trigger on the EVF itself. In certain situations, this could be an extremely useful feature, particularly when the camera is wedged into a tight spot.
The EVF is mounted using a system that attaches quickly to the top handle and allows the viewfinder to be positioned with a high degree of freedom. The kit also includes a viewfinder extension mount which is very quick and easy to attach and remove and can be used with or without an eyepiece leveller. With all of these elements, it is easy to position the EVF wherever the operator needs it and then firmly lock it in place. The way all these pieces fit together is solid, smooth and seamless. In this respect, it is instantly reminiscent of the Panaflex philosophy: it doesn't force you to use the camera in a particular way, it just allows you to make the choices, and the system supports them.
The kit also includes a hard case with custom foam. This is also in keeping with the traditions of high-end professional camera systems from people like ARRI. I have a URSA Mini Pro case by SKB that allows the camera to be packed with the EVF attached, and I like that.
However, the decision to have the camera packed with the EVF and its mounting system removed makes the whole case much neater and more compact than would otherwise be possible. In fact, the Cine LF EVF kit is substantially smaller than my case for the URSA Mini Pro, despite the bigger camera. In addition to being more consistent with film camera standards, the key to making this work is how quick and easy it is to attach the EVF once the camera is out of the case. The top handle and baseplate remain on the camera when packed.
The baseplate with both kits is also an excellent piece of gear. Like the URSA Mini baseplate, it offers a lot of freedom in where it is mounted to the underside of the camera body, but the Cine baseplate demonstrates how much this system is designed for film-style shooting. While the URSA Mini baseplate is a broadcast-style VCT system that works well for getting a fully built camera quickly on and off the tripod, it doesn't offer much in the way of rebalancing when the camera configuration changes substantially. The URSA Cine baseplate uses the ARRI dovetail system, which is now almost ubiquitous for high-end production cameras. Although the kit doesn't come with the dovetail plate, it connects easily to both the ARRI ones and third-party plates, and the locking mechanism allows it to be partly unlocked for positioning, with a safety catch to fully unlock for putting the camera on and off the plate.
The baseplate also has a thick and comfortable shoulder pad built in, and mounting for both 15mm LWS and 19mm Studio rods.
Together, all of these features of the EVF and the baseplate mean that it would be quick and easy to reconfigure the Cine LF from working with a big lens like the Angenieux 24-290mm with a 6x6 matte box and the viewfinder extension to, in moments, having the camera with a lightweight prime lens and clamp-on matte box, ready for a handheld shot. This is the sort of flexibility crews expect from a high-end cine-style camera system, and the Cine LF delivers it comfortably.
The kit also comes with both PL and locking EF lens mounts, which can be changed with a 3mm hex key. These two options will cover a lot of users' needs, but there is also an LPL mount for those who want to use lenses with ARRI's new standard mount, such as the Signature Primes and Zooms, and also a Hasselblad mount for using their famous large format lenses.
Monitoring
Monitoring options are one area where the Cine LF is in a class of its own. In addition to the EVF, there are two built-in 5-inch HDR touchscreen monitors, which are both large and very clear, with 1500 nits of brightness, very good contrast, and FHD resolution matching the EVF. On the operator side is a fold-out display, and when it is folded in, there is a small status display showing all the camera's key settings. This is similar to the one on the URSA Mini Pro cameras but with a colour screen. Unlike the URSA Mini fold-out screens, this one can rotate right around so that the monitor faces out while folded back into the camera body. I can imagine this being very convenient when the operator uses the EVF with the extension mount, and the focus puller could be working directly off the 5-inch display folded back in. The operator-side monitor can even rotate around so that the subject can see themselves, potentially useful for giving an actor a quick look at the framing, or for total overkill selfies!
On the right-hand side, the second 5-inch monitor is ruggedly mounted to the side of the camera body.
Like the left-side flip-out screen, it is also a touchscreen, and the whole menu system can be accessed from both screens. Either screen can be configured for an assistant, operator or director, with a wide array of options for as little or as much information as required. The right-side screen also has a row of physical buttons below it to control the key features and switch between modes.
The first shoot I used the Cine LF on was with Bruce Beresford (director of Best Picture Oscar winner Driving Miss Daisy), shooting scenery for his new film Overture. Bruce loved that simply standing next to the camera allowed him to clearly see what was being filmed without waiting for additional equipment to be added to the camera. I can imagine many directors becoming quite used to this feature, allowing them to get away from the video village and be near the action whenever needed.
The camera body also has two independent 12G SDI outputs. In the menu system, there are separate controls for both SDI outputs, both LCDs and the EVF, so you can have any combination of LUT, overlays, frame lines, focus and exposure tools, etc., on each one.
For example, it would be easy to have the LUT, overlays & frame lines on in the EVF for the operator, LUT and focus tools on the right-side monitor for the focus puller, false colour & histogram on the left-side monitor for the Director of Photography to check, LUT & frame lines on one SDI out for the director and a clean Log feed on the other for the DIT, or any other combination. This flexibility allows the camera to function efficiently in a wide range of crew structures and shooting styles. Because of this, it would be pretty feasible to effectively drop the camera into most existing mainstream production systems with minimal adaptation around the camera.
Ironically, the main thing that might obscure how much of a mainstream tool the Cine LF is could be the 12K sensor. In the Cine LF, as in the URSA Mini Pro 12K, the combination of the RGBW colour filter array and the BRAW recording format means that the RAW recording resolution is not tied to the area of the sensor being used, the way it is for virtually every other RAW-capable camera.
Resolution & Recording Formats
This might sound counter-intuitive because of how RAW has been sold to us from the start. The concept that RAW is simply taking the raw, i.e. unprocessed, digitised data of each pixel from the image sensor has always been a vast oversimplification that has served a useful purpose in allowing people to understand the usefulness of RAW. The only production camera I'm aware of that records in this way is the Achtel 9x7. Even then, the files need to be converted to a more conventional format for post-production. The vast amount of data involved in doing this is truly mind-boggling.
What the RAW video formats we're all familiar with do is more of an approximation of this: they use efficiencies from deferring the de-Bayer process to reduce the amount of data before compression is applied, and use some of that space saving to record high-bit-depth data for each photosite, allowing the file to retain more of the tonal subtleties from the sensor while applying minimal processing to the recording. The effect of all this is that it gives so much flexibility in post that it generally functions in practice as if you had all of the unprocessed raw data from the sensor.
RED set the standard for this with their REDCODE RAW format, and most manufacturers have followed this in some form or another, such as ARRI's uncompressed ARRIRAW. With all of these formats, recording a lower resolution than the full sensor res means cropping in on the sensor. The RED ONE, for example, recorded 4K across the Super 35mm sensor, but to record 2K meant cropping down to approximately Super 16mm.
Ben Allan DOP on set recently with the fxcrew
Blackmagic RAW, or BRAW, achieves a similar result by different means and was designed with the RGBW sensor array in mind. Unlike other RAW systems where the recording is tied to the Bayer pattern of photosites, BRAW does a partial de-mosaic before the recording but still allows for all the normal RAW controls in post, such as white balance, ISO, etc.
One of the big advantages of this is that the recording resolution can be decoupled from the individual photosites, meaning that BMD's BRAW-capable cameras can record lower resolutions while still using the full dimensions of the sensor.
While the Cine LF has the advertised 12K of photosites across the sensor, it doesn't have to record in 12K to get all the other advantages of the large sensor. In 4K, you still get the VistaVision depth of field you would expect from something like an ARRI ALEXA LF, but also the other advantages of a larger sensor that are often overlooked. Many lenses that cover the full frame of the 24mm x 36mm sensor are optimised for that image circle, so they will maximise performance on that frame, and qualities like sharpness and chromatic aberration will not be as good when significantly cropped in. There are also advantages to oversampling an image, including smoothness while retaining image detail and less likelihood of developing moiré patterns. On that subject, the Cine LF also has a built-in OLPF in front of the image sensor. This Optical Low Pass Filter removes detail finer than the resolution limit of the sensor, resulting in even less risk of moiré and digital aliasing.
As a 4K or 8K camera, the Cine LF excels. 4K RAW is the minimum resolution that the camera can record internally, but with the BRAW compression set to a conservative 5:1, the resulting data rate is a very manageable 81 MB/s. The equivalent resolution in ProRes HQ (although in 10-bit, 4:2:2) is 117 MB/s.
For many productions, 4K from this camera will be more than adequate. The pictures are smooth and film-like and very malleable in Resolve. The workflow is easy, and you can very comfortably set the camera to 4K and pretend it is just a beautifully oversampled 4K camera. I'm conscious that this sort of sensor oversampling is how the original ARRI ALEXA built its reputation, with 2.8K of photosites coming down to stunning 2K recordings.
Some productions will shoot 4K for the most part but then switch to 8K or 12K for VFX work, somewhat like the way big VFX films used to shoot Super 35mm, Academy or Anamorphic for main unit and switch to VistaVision for VFX shots (think Star Wars, Jurassic Park & Titanic). The beauty of this is that, unlike switching to VistaVision, there are no issues with physically swapping cameras or with matching lenses or even angle of view; everything remains the same, but with a quick change in the menu system, you have a 4K, 8K or 12K camera.
If you're expecting an eye-poppingly sharp 12K from this camera, you may want to adjust the settings in Resolve, because the camera is optimised for aesthetically pleasing images rather than in-your-face sharpness.
While BMD may have taken a bit of a PR hit with the Super-35 12K because people were expecting a big-impact 12K zing, I'm glad that they have stayed the course and kept the focus on beautiful images. Resolution, image detail and sharpness are related but different issues, and they are ones that every manufacturer has to make decisions about in every camera's design. That balancing act has landed in a real sweet spot with the Cine LF, and the effects are most notable on faces. The images are simultaneously detailed and gentle.
The other significant recording format is the 9K mode, which does a Super-35 crop. This is a great option that allows for the use of any of the vast array of beautiful modern and vintage lenses designed for the Super-35 frame. Things like the classic Zeiss Super Speeds or Panavision Primos spring to mind.
In each of the recording resolutions, you have the same five options for aspect ratio.
Open Gate is a 3:2 or 1.5:1 image that uses the full dimensions of the sensor. While it isn't a common delivery format, there are a number of reasons to use this recording option. The one delivery format that does use something very close to this is IMAX, and I believe this camera will prove itself to be a superb choice for IMAX capture. But aside from that, it is very useful to shoot in open gate to capture additional image above and below a widescreen frame. This concept originated with films and TV shows framing widescreen within Super 35 but without masking the frame in the camera, which is literally an open film gate. This creates shoot-off, which allows for either reframing or stabilisation.
The 16:9 mode uses the entire width of the sensor and crops the height to get the correct ratio. It's also worth noting that the 16:9 4K mode is based on the DCI cinema width of 4096, not UHD 4K at 3840 across, and is 2304 pixels high to get the 16:9 ratio rather than the UHD 2160. This is another reflection of the design philosophy of making a cine camera rather than a TV camera.
17:9 is a full DCI cinema frame at 1.89:1, which is also not a standard delivery format but the container standard for digital cinema. This would need to be cropped down to either 2.4:1 Scope or 1.85:1 for cinema release, or 16:9 for TV, and any of these frame lines can be loaded for monitoring, as with the Open Gate mode.
2.4:1 is the standard Scope ratio for cinema and doesn't have any shoot-off for that format. In 8K & 4K, this format delivers the highest off-speed frame rates at 224 fps, compared to 144 in 8K or 4K Open Gate. A fairly trivial detail, but 2.4:1 in 4K is actually the only standard delivery format that the camera offers directly without any resizing or cropping in post.
The final aspect ratio, 6:5 or 1.2:1, is explicitly designed for use with anamorphic lenses. With a traditional 2x anamorphic, the 6:5 ratio closely matches the anamorphic frame on film and produces a 2.4:1 image when de-squeezed. In 12K, 8K or 4K, this would require anamorphic lenses built to cover the large format frame, but in 9K 6:5 the crop perfectly mimics a 35mm anamorphic negative area, making it possible to use any traditional 35mm anamorphics in the same way as they would work on film.
Anamorphic de-squeeze can be applied to any or all of the monitoring outputs and displays, including the EVF, in any of the recording formats and with any of the common anamorphic ratios: 1.3x, 1.5x, 1.6x, 1.66x, 1.8x and 2x.
Combined with the different recording aspect ratios, this could cater for a wide range of different workflows, creative options and even special venue formats. For example, 2x anamorphic on a 2.4:1 frame would produce a 4.8:1 image that would have been a superb option for the special venue production I shot and produced for Sydney's iconic Taronga Zoo a few years ago, and would have negated the need for the three-camera panoramic array we had to build at the time.
The other thing worth noting about the twenty different recording resolutions is that only one matches a standard delivery format pixel for pixel. I doubt that this is any reflection on that particular format; it is more a coincidence of the overall logic of the format options. This logic also goes back to the idea of this as a cine camera. Like a professional film camera shooting on film negative, the idea is not to create a finished image in camera but to record a very high-quality digital neg, which is expected to have processing in post-production before creating a deliverable image.
While nothing is stopping you from using this camera for fast-turnaround work with minimal post-production, many of the choices that have been made in the design of the camera are not focused on that ability. This again comes back to the concept of a cine camera optimised to function in the ways that film crews and productions like or need to work.
The Media Module
The combination of 12K RAW and high frame rates created another challenge in the form of very high data rates. Although BRAW is a very efficient codec, BMD needed a recording solution that really wasn't met by any of the off-the-shelf options.
Their solution is the Media Module, which is specially designed, removable on-board storage. While it will also be possible to replace the Media Module with one that contains CFexpress slots, the Media Module M2 comes with the camera and has 8TB of extremely fast storage. Even in Open Gate 12K at the lowest compression settings, this still allows nearly 2 hours of recording. There are very limited scenarios where it would be necessary to shoot more than that amount of footage at that quality level in a single day. In those situations, it is, of course, possible to have multiple Media Modules and swap them out as you would a memory card.
For most productions, though, it will be easy to comfortably get through each day's shooting on the single module. There is a Media Module Dock which allows 3 Modules to be connected simultaneously to a computer, but for many users, the simpler solution will be the 10G Ethernet connection on the back of the camera. Either way, downloading the day's footage will happen about as fast as the receiving drive can handle, as the Media Module and the Ethernet connection will outrun most drive arrays.
The Pictures
The resulting pictures are, of course, what it's all about, and the pictures from the Cine LF are nothing short of stunning. It has the same silkiness that the pictures from the original 12K have, but with the large format look. The larger photosites and the fact that lenses aren't working as hard seem to also contribute to the fact that the pictures look sharp without harshness.
That lack of harshness also contributes to the film-like look of the images. There are so many techniques to degrade digital to make it more like film, but this is digital looking like film at its best.
Detailed, clean, gentle on skin tones and with a beautiful balance between latitude and contrast.
While there are several cameras which this one competes with, there is really nothing on the market that is comparable in terms of the combination of functionality, workflow and look.
The AVP Version
The Blackmagic URSA Cine camera platform is the basis of multiple models with different features for the high-end cinema industry. All models are built with a robust magnesium alloy chassis and lightweight carbon fiber polycarbonate composite skin to help filmmakers move quickly on set. Blackmagic URSA Cine Immersive is available to pre-order now direct from Blackmagic Design for US$29,995. Delivery will start in late Q1 2025.
Shipping Q1 2025
Customers get 12G-SDI out, 10G Ethernet, USB-C, XLR audio, and more. An 8-pin Lemo power connector at the back of the camera works with 24V and 12V power supplies, making it easy to use the camera with existing power supplies, batteries, and accessories. Blackmagic URSA Cine Immersive comes with a massive 250W power supply and B-mount battery plate, so customers can use a wide range of high-voltage batteries from manufacturers such as IDX, Blueshape, Core SWX, BEBOB, and more.
Blackmagic URSA Cine Immersive comes with 8TB of high-performance network storage built in, which records directly to the included Blackmagic Media Module and can be synced to Blackmagic Cloud and DaVinci Resolve media bins in real time. This means customers can capture over 2 hours of Blackmagic RAW in 8K stereoscopic 3D immersive, and editors can work on shots from remote locations worldwide as the shoot is happening. The new Blackmagic RAW Immersive file format is designed to make it simple to work with immersive video within a post-production workflow, and includes support for Blackmagic global media sync.
Blackmagic RAW files store camera metadata, lens data, white balance, digital slate information and custom LUTs to ensure consistency of image on set and through post-production. Blackmagic URSA Cine Immersive is the first commercial digital film camera with ultra-fast, high-capacity Cloud Store technology built in. The high-speed storage lets customers record at the highest resolutions and frame rates for hours and access their files directly over high-speed 10G Ethernet. The camera also supports creating a small H.264 proxy file, in addition to the camera original media, when recording. This means the small proxy file can be uploaded to Blackmagic Cloud in seconds, so media is available back at the studio in real time.
Blackmagic URSA Cine Immersive Features
  • Dual custom lenses for shooting Apple Immersive Video for Apple Vision Pro.
  • Dual 8160 x 7200 (58.7 Megapixel) sensors for stereoscopic 3D immersive image capture.
  • Massive 16 stops of dynamic range.
  • Lightweight, robust camera body with industry standard connections.
  • Generation 5 Color Science with new film curve.
  • Each sensor supports 90 fps at 8K, captured to a single Blackmagic RAW file.
  • Includes high-performance Blackmagic Media Module 8TB for recording.
  • High-speed Wi-Fi, 10G Ethernet or mobile data for network connections.
  • Includes DaVinci Resolve Studio for post-production.
Apple is looking to build a community of AVP projects
Submerged
Last month, Apple debuted Submerged, the critically acclaimed immersive short film written and directed by Academy Award-winning filmmaker Edward Berger.
New episodes of Adventure and Wild Life will premiere in December, followed by new episodes of Boundless, Elevated and Red Bull: Big-Wave Surfing in 2025.
Submerged BTS. Note: the film was not shot on the BMC, but is now available to watch on the AVP.
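To put the review's storage and data-rate figures in perspective, here is a rough back-of-envelope sketch in Python. The 81 MB/s, 117 MB/s, 8TB and "nearly 2 hours" numbers come from the text above; the decimal-unit (1 TB = 10^12 bytes), continuous-recording and zero-overhead assumptions are ours, so treat the results as ballpark estimates rather than Blackmagic specifications.

```python
# Back-of-envelope recording-time maths using only figures quoted in the review:
# an 8 TB Media Module, ~81 MB/s for 4K BRAW 5:1, ~117 MB/s for 4K ProRes HQ,
# and "nearly 2 hours" of Open Gate 12K at the lowest compression setting.
# Assumes decimal units (1 TB = 1e12 bytes), continuous recording, no overhead.

CAPACITY_TB = 8.0
BYTES_PER_TB = 1e12

def hours_of_recording(data_rate_mb_s: float, capacity_tb: float = CAPACITY_TB) -> float:
    """How long the module lasts at a sustained data rate given in MB/s (decimal)."""
    seconds = (capacity_tb * BYTES_PER_TB) / (data_rate_mb_s * 1e6)
    return seconds / 3600.0

def implied_rate_mb_s(hours: float, capacity_tb: float = CAPACITY_TB) -> float:
    """Sustained data rate implied by filling the module in a given number of hours."""
    return (capacity_tb * BYTES_PER_TB) / (hours * 3600.0) / 1e6

print(f"4K BRAW 5:1 at 81 MB/s  : ~{hours_of_recording(81):.0f} hours on one module")
print(f"4K ProRes HQ at 117 MB/s: ~{hours_of_recording(117):.0f} hours on one module")
print(f"'Nearly 2 hours' of 12K Open Gate implies ~{implied_rate_mb_s(2):.0f} MB/s sustained")
```

Run as written, this suggests roughly 27 hours of 4K BRAW 5:1 on a single module, and a sustained rate on the order of 1.1 GB/s for the 12K Open Gate case, which is consistent with the review's point that the Media Module and 10G Ethernet will outrun most receiving drive arrays.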
  • WWW.FXGUIDE.COM
    NVIDIA's Simon Yuen: Facing the Future of AI
    In this fxpodcast episode, we explore a topic that captures the imagination and attention of creatives worldwide: building your own generative AI tools and pipelines. Simon Yuen is director of graphics and AI at NVIDIA, where he leads the digital human efforts to develop new character technology and deep learning-based solutions that allow new and more efficient ways of creating high-quality digital characters. Before NVIDIA, Simon spent more than 21 years in the visual effects industry, on both the art and technology sides of the problem, at many studios, including Method Studios, Digital Domain, Sony Pictures Imageworks, DreamWorks, Blizzard Entertainment, and others, building teams and technologies that push the envelope of photorealistic digital character creation.
Generative AI
This fxpodcast is not sponsored, but is based on research done for the new Field Guide to Generative AI. fxguide's Mike Seymour was commissioned by NVIDIA to unpack the impact of generative AI on the media and entertainment industries, offering practical applications, ethical considerations, and a roadmap for the future.
The Field Guide is free and can be downloaded here: Field Guide to Generative AI. In M&E, generative AI has proven itself a powerful tool for boosting productivity and creative exploration. But it is not a magic button that does everything. It's a companion, not a replacement. AI lacks the empathy, cultural intuition, and nuanced understanding of a story's uniqueness that only humans bring to the table. But when generative AI is paired with VFX artists and TDs, it can accelerate pipelines and unlock new creative opportunities.
Digital Human powered by NVIDIA
The Core of Generative AI: Foundation Models & NIMs
NVIDIA Inference Microservices (NIMs) and foundation models are the building blocks of a lot of modern AI workflows, and they are at the heart of many new generative AI solutions.
Foundation models are large-scale, pre-trained neural networks that can tackle broad categories of problems. Think of them as AI generalists, adaptable to specific tasks through fine-tuning with additional data. For example, you might start with a foundation model capable of understanding natural language (an LLM) and fine-tune it to craft a conversational agent that your facility can use to help onboard new employees.
While building these models from scratch is resource-intensive and time-consuming, fine-tuning them for your specific application is relatively straightforward, and NVIDIA has made this process quite accessible.
NVIDIA NIMs
NIMs, or microservices, simplify the deployment of foundation models, whether in the cloud, in a data center, or even on the desktop. NIMs streamline the process while also ensuring data security. They make it easy to create tailored generative AI solutions for a facility's or project's needs. For instance, NVIDIA's latest OpenUSD NIMs allow developers to integrate generative AI copilots into USD workflows, enhancing efficiency in 3D content creation.
James
Bringing Digital Humans to Life with NVIDIA ACE
One of the most interesting applications of NIMs is in crafting lifelike digital humans. NVIDIA's ACE (Avatar Cloud Engine) exemplifies this capability. With ACE, developers and TDs can design interactive digital humans and avatars that respond in real time with authentic animations, speech, and emotions.
A standout example is James, a virtual assistant powered by NVIDIA ACE.
He is an interactive digital human, a communications tool powered by a selected knowledge base and ACE, and animated by NVIDIA's Audio2Face. James showcases how generative AI and digital human technologies converge, providing tools for telepresence, interactive storytelling, or even live character performances. This is more than just a visual upgrade: it's a way to enhance emotional connections in digital media.
Generative AI: Empowering Creativity, Not Replacing It
As we as an industry adopt and explore these tools, it's essential to keep a balanced perspective. Generative AI isn't here to replace human creativity; we need to use it to amplify it. AI can enable teams to iterate faster, experiment, and focus on the storytelling that truly resonates with audiences. Central to this is respecting artists' rights, having provenance of training data, and maintaining data security.
From fine-tuning a foundation model to integrating NIM-powered workflows, building your own generative AI workflow involves leveraging technology to empower your project. With tools like NVIDIA's foundation models and ACE, the possibilities are immense, but the responsibility to use them thoughtfully is equally crucial.
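As a concrete illustration of the deploy-and-query idea described above, here is a minimal, hedged sketch of talking to a locally deployed LLM NIM through its OpenAI-compatible API using the openai Python package. The base URL, port, model name and the onboarding-assistant prompt are illustrative assumptions rather than details from the podcast or NVIDIA's documentation; check your own NIM deployment for the correct values.

```python
# A minimal sketch of calling a locally deployed LLM NIM through its
# OpenAI-compatible endpoint. Base URL, port and model id are assumptions
# for illustration only; substitute the values from your own deployment.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # hypothetical local NIM endpoint
    api_key="not-used-locally",           # a local NIM typically ignores the key
)

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",   # example model id; yours may differ
    messages=[
        {"role": "system", "content": "You are an onboarding assistant for a VFX facility."},
        {"role": "user", "content": "Where do I find the render farm submission guidelines?"},
    ],
    temperature=0.2,
)
print(response.choices[0].message.content)
```

The same request shape applies whether the microservice is running on a workstation, in the facility's data center or in the cloud, which is much of the appeal of the NIM approach described in the podcast.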
  • WWW.FXGUIDE.COM
    Zap Andersson: Exploring the Intersection of AI and Rendering
    Håkan "Zap" Andersson is a senior technical 3D expert and a widely recognized name in the world of rendering and shader development. Zap has long been at the forefront of technical innovation in computer graphics and VFX.
Known for his contributions to tools like OSL and MaterialX at Autodesk, Zap has recently ventured into the rapidly evolving domain of AI video tools, leveraging Google's powerful NotebookLM to push the boundaries of creative storytelling. His exploration resulted in UNREAL MYSTERIES, a bizarre series designed both to challenge his skills and test the capabilities of these new AI technologies.
Zap joins us on the fxpodcast to delve into his creative process, share insights on the tools he used, and discuss the lessons he learned from working with cutting-edge AI systems. Below, you'll find an in-depth making-of breakdown that details how Zap combined his expertise with AI-powered workflows. And because no experiment is complete without a few surprises, we've included an AI blooper reel at the bottom of this story to highlight the quirks and challenges of working with this still-maturing technology.
Listen to this week's fxpodcast as we unpack the fascinating and odd world of artistry, technology, and innovation that is AI video content.
Zap commented on this making-of video that "this one gets INSANELY meta, the hosts get crazily introspective and it's quite chilling... quite self aware."
In the podcast, the guys mention a series of video tools; here is a set of links to those programs (your mileage may vary), enjoy:
  • MiniMax (although maybe its actual name is Hailuo, nobody truly knows)
  • Kling
  • Luma Labs Dream Machine
  • RunwayML Gen-3: https://app.runwayml.com/dashboard
  • Upscaling: Krea
  • Avatars: Heygen
  • Voices: NotebookLM, Elevenlabs
  • Music: Sumo
  • Sound Effects: Elevenlabs
Goof Reel
Zap's background
Zap began his journey with a degree in Electronics but has been immersed in programming throughout his life. His first encounter with a computer dates back to 1979, working with an HP2000E mainframe, followed by the Swedish ABC80 computer, which he enthusiastically modified by building a custom graphics card and developing a suite of commercial games. For many years, Zap worked in the CAD industry, specializing in mechanical design software. However, his true passion has always been 3D graphics and rendering.
Pursuing this interest, he developed his own ray tracer and 3D modeling software during his spare time. Zap's career took a decisive turn when he started creating advanced shaders for Mental Images, which NVIDIA later acquired. Today, he is a part of the Autodesk 3ds Max Rendering team, focusing on technologies such as OSL, shaders, MaterialX, and other cutting-edge rendering tools.
Zap's expertise includes shader development, rendering algorithms, UI design, and fun experimental communication skills, making him a versatile and highly skilled professional in 3D graphics and rendering and a good friend of fxguide and the podcast.
Zap on social:
  • WWW.FXGUIDE.COM
    Adobe and GenAI
    In this week's fxpodcast, we sit down with Alexandra Castin, head of Adobe's Firefly generative AI team, to discuss the evolution of generative AI, Adobe's unique approach to ethical content creation, and the groundbreaking work behind Firefly.
This podcast is not sponsored, but is based on research done for the new Field Guide to Generative AI. fxguide's Mike Seymour was commissioned by NVIDIA to unpack the impact of generative AI on the media and entertainment industries, offering practical applications, ethical considerations, and a roadmap for the future. The Field Guide is free and can be downloaded here: Field Guide to Generative AI.
From GANs to Multimodal Models
As you will hear in the fxpodcast, Adobe's journey with generative AI began in 2020, when Adobe introduced Neural Filters in Photoshop. At the time, the focus was on using GANs (Generative Adversarial Networks) to generate new pixels for creative edits. Today, Adobe's scope has expanded dramatically to include Diffusion Models, Transformers, and cutting-edge architectures.
"For me, the core principle of generative AI hasn't changed: it's about the computer understanding user intent and synthesizing responses from training data," Alexandra explains. The models have evolved to not only understand and generate text but also create and enhance images, videos, audio, and even 3D assets.
Adobe is uniquely positioned in this space, as its product portfolio spans nearly every creative medium. With Firefly, they've embraced multimodal generative AI to create tools that cater to text, images, audio, video, and beyond.
All these images are generated with Adobe Firefly from text alone, with no other input.
Firefly
Firefly is Adobe's flagship generative AI platform, now integrated into industry-leading tools like Photoshop, Illustrator, and Premiere Pro. According to Alexandra, Firefly's strength lies in its training data: "At its core is a set of high-quality data we have the right to train on. That's what sets Firefly apart: it's both powerful and safe for commercial use."
One standout feature is Photoshop's Generative Fill, which Alexandra describes as a "co-pilot" for creatives. Users can guide Photoshop with text prompts, allowing Firefly to generate precise visual results. The technology has democratized generative AI, making it accessible and practical for VFX professionals and enthusiasts alike.
Ensuring Ethical AI
Adobe has been a staple of the creative community for over four decades, and with Firefly, they've prioritized respecting artists' rights and intellectual property. Alexandra points to Adobe's commitment to clean training material as foundational to Firefly's strategy.
"We've implemented guide rails to guarantee that Firefly won't generate recognizable characters, trademarks, or logos," she says. This safeguard ensures that users' work remains free from unintentional infringement, a critical consideration in the commercial space.
The C2PA Initiative: Building Trust in Media
One of Adobe's most significant contributions to the generative AI landscape is its leadership in the Coalition for Content Provenance and Authenticity (C2PA). Launched in 2018 from Adobe Research, the initiative addresses the growing concern around misinformation and content authenticity.
"Think of it like a nutrition label for digital media," Alexandra explains.
"The goal is to provide transparency about how a piece of content was created, so users can make informed decisions about what they're consuming."
The initiative has attracted over 3,000 organizations, including camera manufacturers, media companies, and AI model creators. By embedding content credentials into outputs, the C2PA aims to establish a universal standard for verifying authenticity, a crucial step as generative content continues to explode.
Looking Ahead
As the generative AI landscape evolves, Firefly represents Adobe's commitment to balancing innovation with ethical responsibility. By building tools that empower creators while protecting intellectual property, Adobe is aiming to build a future where generative AI becomes an indispensable part of creative workflows.
Join us on this week's fxpodcast as we dive deeper with Alexandra Castin into the future of generative AI, Adobe's strategic plans, and the lessons learned along the way.
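For readers who want to see what those embedded content credentials look like in practice, here is a minimal sketch using the open-source c2patool CLI maintained under the Content Authenticity Initiative, driven from Python. The tool must be installed separately, the file name below is hypothetical, and the exact report format depends on the c2patool version, so treat this as an illustration rather than an official Adobe workflow.

```python
# A minimal sketch of inspecting C2PA content credentials on an exported file
# using the open-source c2patool CLI from the Content Authenticity Initiative.
# c2patool must be installed separately; the file name below is hypothetical.
import subprocess

def show_content_credentials(path: str) -> None:
    """Print the C2PA manifest report for a file, if it carries one."""
    result = subprocess.run(
        ["c2patool", path],      # with no flags, c2patool reports the manifest store
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        # Typically means no credentials were found or the file could not be read.
        print(f"No content credentials reported for {path}: {result.stderr.strip()}")
    else:
        print(result.stdout)

show_content_credentials("firefly_export.jpg")  # hypothetical exported asset
```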
  • WWW.FXGUIDE.COM
    VFXShow 289: HERE
    This week, the team reviews the film HERE by director Robert Zemeckis. Earlier, we spoke to visual effects supervisor Kevin Baillie for the fxpodcast. In that earlier fxpodcast, Kevin discussed the innovative approaches used on set and the work of Metaphysic on de-aging. Starring Tom Hanks, Robin Wright, Paul Bettany, and Kelly Reilly, Here is a poignant exploration of love, loss, and the passage of time.
The filmmaking techniques behind this film are undeniably groundbreaking, but on this week's episode of The VFX Show, the panel finds itself deeply divided over the narrative and plot. One of our hosts, in particular, holds a strikingly strong opinion, sparking a lively debate that sets this discussion apart from most of our other shows. Few films have polarized the panel quite like this one. Don't miss the spirited conversation on the podcast.
Please note: This podcast was recorded before the interview with Kevin Baillie (fxpodcast).
The Suburban Dads this week are:
Matt Wallin * @mattwallin www.mattwallin.com. Follow Matt on Mastodon: @[emailprotected]
Jason Diamond @jasondiamond www.thediamondbros.com
Mike Seymour @mikeseymour. www.fxguide.com. + @mikeseymour
Special thanks to Matt Wallin for the editing & production of the show with help from Jim Shen.
  • WWW.FXGUIDE.COM
    Generative AI in media and entertainment
    Simulon
In this new Field Guide to Generative AI, fxguide's Mike Seymour, working with NVIDIA, unpacks the impact of generative AI on the media and entertainment industries, offering practical applications, ethical considerations, and a roadmap for the future. The field guide draws on interviews with industry experts, plus expertise from visual effects researchers at Wētā FX & Pixar. This comprehensive guide is a valuable resource for creatives, technologists, and producers looking to harness the transformative power of AI in a respectful and appropriate fashion.
Generative AI in Media and Entertainment, a New Creative Era: Field Guide
Click here to download the field guide (free).
Generative AI has become one of the most transformative technologies in media and entertainment, offering tools that don't merely enhance workflows but fundamentally change how creative professionals approach their craft. This class of AI, capable of creating entirely new content, from images and videos to scripts and 3D assets, represents a paradigm shift in storytelling and production.
As the field guide notes, this revolution stems from the nexus of new machine learning approaches, foundational models, and advanced NVIDIA accelerated computing, all combined with impressive advances in neural networks and data science.
NVIDIA
From enhancement AI to creation GenAI
While traditional AI, such as Pixar's machine learning denoiser in RenderMan, has been used to optimize production pipelines, generative AI takes a step further by creating original outputs. Dylan Sisson of Pixar notes that their denoiser has "transformed our entire production pipeline" and was first used on Toy Story 4, "touching every pixel you see in our films."
However, generative AI's ability to infer new results from vast data sets opens doors to new innovations, building and expanding people's empathy and skills. Naturally, it has also raised concerns about artists' rights, provenance of training data, and possible job losses as production pipelines incorporate this new technology. The challenge is to ethically incorporate these new technologies, and the field guide aims to show companies that have been doing just that.
Runway
Breakthrough applications
Generative models, including GANs (Generative Adversarial Networks), diffusion-based approaches, and transformers, underpin these advancements in generative AI. These technologies are not well understood by many producers and clients, yet companies that don't explore how to use them could well be at an enormous disadvantage.
Generative AI tools like Runway Gen-3 are redefining how cinematic videos are created, offering functionalities such as text-to-video and image-to-video generation with advanced camera controls. "From the beginning, we built Gen-3 with the idea of embedding knowledge of those words in the way the model was trained," explains Cristóbal Valenzuela, CEO of Runway. This allows directors and artists to guide outputs with industry-specific terms like "50mm lens" or "tracking shot."
Similarly, Adobe Firefly integrates generative AI across its ecosystem, allowing users to tell Photoshop what they want and have it comply through generative fill capabilities. Firefly's ethical training practices ensure that it only uses datasets that are licensed or within legal frameworks, guaranteeing safety for commercial use.
New companies like Simulon are also leveraging generative AI to streamline 3D integration and visual effects workflows.
According to Simulon co-founder Divesh Naidoo, "We're solving a fragmented, multi-skill/multi-tool workflow that is currently very painful, with a steep learning curve, and streamlining it into one cohesive experience." By reducing hours of work to minutes, Simulon allows for rapid integration of CGI into handheld mobile footage, enhancing creative agility for smaller teams.

Bria

Ethical frameworks and creative control

The rapid adoption of generative AI has raised critical concerns around ethics, intellectual property, and creative control, and the industry has made strides in addressing these issues. Adobe Firefly and Getty Images stand out for their transparent practices. Rather than ask if one has the rights to use a GenAI image, the better question is, "Can I use these images commercially, and what level of legal protection are you offering me if I do?" asks Getty's Grant Farhall. Getty provides full legal indemnification for its customers, ensuring ethical use of its proprietary training sets.

Synthesia, which creates AI-driven video presenters, has similarly embedded an ethical AI framework into its operations, adhering to the ISO 42001 standard. Co-founder Alexandru Voica emphasizes, "We use generative AI to create these avatars ... the diffusion model adjusts the avatar's performance, the facial movements, the lip sync, and eyebrows, everything to do with the face muscles." This balance of automation and user control ensures that artists remain at the center of the creative process.

Wonder Studios

Training data and provenance

The quality and source of training data remain pivotal. As noted in the field guide, "It can sometimes be wrongly assumed that in every instance, more data is good, any data, just more of it. Actually, there is a real skill in curating training data." Companies like NVIDIA and Adobe use carefully curated datasets to mitigate bias and ensure accurate results. For instance, NVIDIA's Omniverse Replicator generates synthetic data to simulate real-world environments, offering physically accurate 3D objects with correct physical properties for training AI systems, all trained appropriately. This attention to data provenance extends to protecting artists' rights. Getty Images compensates contributors whose work is included in training sets, ensuring ethical collaboration between creators and AI developers.

Bria

Expanding possibilities

Generative AI is not a one-button-press solution but a dynamic toolset that empowers artists to innovate while retaining creative control. As highlighted in the guide, "Empathy cannot be replaced; knowing and understanding the zeitgeist or navigating the subtle cultural and social dynamics of our times cannot be gathered from just training data. These things come from people."

However, when used responsibly, generative AI accelerates production timelines, democratizes access to high-quality tools, and inspires new artistic directions. Tools like Wonder Studio automate animation workflows while preserving user control, and platforms like Shutterstock's 3D asset generators provide adaptive, ethically trained models for creative professionals.

Adobe Firefly

The future of generative AI

The industry is just beginning to explore the full potential of generative AI. Companies like NVIDIA are leading the charge with solutions like the Avatar Cloud Engine (ACE), which integrates tools for real-time digital human generation.
"At the heart of ACE is a set of orchestrated NIM microservices that work together," explains Simon Yuen, NVIDIA's Senior Director of Digital Human Technology. These tools enable the creation of lifelike avatars and interactive characters that can transform entertainment, education, and beyond.

As generative AI continues to evolve, it offers immense promise for creators while raising essential questions about ethics and rights. With careful integration and a commitment to transparency, the technology has the potential to redefine the boundaries of creativity in media and entertainment.
  • WWW.FXGUIDE.COM
    A deep dive into the filmmaking of Here with Kevin Baillie
The film Here takes place in a single living room, with a static camera, but the film is anything but simple. It remains faithful to the original graphic novel by Richard McGuire on which it is based. Tom Hanks and Robin Wright star in a tale of love, loss, and life, alongside Paul Bettany and Kelly Reilly.

Robert Zemeckis directing the film

Robert Zemeckis directed the film, the cinematography was by Don Burgess, and every shot in the film is a VFX shot. On the fxpodcast, VFX supervisor and second unit director Kevin Baillie discusses the complex challenges of filming, editing, and particularly de-aging the well-known cast members to play their characters throughout their adult lifespans.

A monitor showing the identity detection that went into making sure that each actor's younger real-time likeness was swapped onto them, and only them.

De-aging

Given the quantity and emotional nature of the performances, and the vast range of years involved, it would have been impossible to use traditional CGI methods and equally too hard to rely on traditional makeup. The creative team decided that AI had just advanced enough to serve as a VFX tool, and its use was crucial to getting the film greenlit. Baillie invited Metaphysic to do a screen test for the project in 2022, recreating a young Tom Hanks, reminiscent of his appearance in Big, while maintaining the emotional integrity of his contemporary performance. A team of artists used custom neural networks to test de-aging Tom Hanks to his 20s. "That gave the studio and our filmmaking team confidence that the film could be made." Interestingly, as Baillie discusses in the fxpodcast, body doubles were also tested but did not work nearly as well as the original actors.

Tests of face swapping by Metaphysic. Early test of methods for de-aging Tom based on various training datasets:

https://www.fxguide.com/wp-content/uploads/2024/11/tomTest_preproduction_ageEvolutionOptions.mp4

Neural render output test clip:

https://www.fxguide.com/wp-content/uploads/2024/11/tomTest_preproduction_WIP.mp4

Final comp test clip (the result of the test for de-aging Tom that helped green-light the film):

https://www.fxguide.com/wp-content/uploads/2024/11/tomTest_preproduction_Final.mp4

While the neural network models generated remarkably photoreal results, they still required skilled compositing to match, especially on dramatic head turns. Metaphysic artists enhanced the AI output to hold up to the film's cinematic 4K standards. Metaphysic also developed new tools for actor eyeline control and other key crafting techniques. Additionally, multiple models were trained for each actor to meet the diverse needs of the film; Hanks is portrayed at five different ages, Wright at four ages, and Bettany and Reilly at two ages each. Achieving this through traditional computer graphics techniques involving 3D modeling, rendering, and facial capture would have been impossible given the scale and quality required for Here and the budget for so much on-screen VFX. The film has over 53 minutes of complete face replacement work, done primarily by Metaphysic and led by Metaphysic VFX Supervisor Jo Plaete. Metaphysic's proprietary process involves training a neural network model on a reference input, in this case footage and images of a younger Hanks, with artist refinement of the results until the model is ready for production. From there, an actor or performer can drive the model, both live on set and in a higher-quality version in post.
The results are exceptional and well beyond what traditional approaches have achieved.

On-set live preview: Tom de-aged as visualized live on set (right image) vs the raw camera feed (left image)

For principal photography, the team needed a way to ensure that the age of the actors' body motion matched the scripted age of their on-screen characters. To help solve this, the team deployed a real-time face-swapping pipeline in parallel on set, with one monitor showing the raw camera feed and the other the actors visualized in their 20s (with about a six-frame delay). These visuals acted as a tool for the director and the actors to craft performances. As you can hear in the podcast, it also allowed a lot more collaboration with other departments, such as hair and makeup, and costume.

The final result was a mix of multiple AI neural renders and classic Nuke compositing: a progression of the actors through their years, designed to be invisible to audiences.

Robin with old-age makeup, compared with synthesized images of her at her older age, which were used to improve the makeup using similar methods to the de-aging done in the rest of the film

In addition to de-aging, similar approaches were used to improve the elaborate old-age prosthetics worn by Robin Wright at the end of the film. This allowed for enhanced skin translucency, fine wrinkles, and similar detail. De-aging makeup is extremely difficult and often characterised as the hardest special effects makeup to attempt. Metaphysic has done an exceptional job combining actual makeup with digital makeup to produce photorealistic results. In addition to the visuals, Respeecher and Skywalker Sound also de-aged the actors' voices, as Baillie discusses in the fxpodcast.

Three sets

The filming was done primarily on three sets. There were two identical copies of the room, to allow one to be filmed while the other was being dressed for the correct era. Additionally, exterior scenes from before the house was built were filmed on a separate third soundstage.

Graphic panels

Graphic panels serve as a bridge across millions of years from one notionally static perspective. The panels that transitioned between eras were deceptively tricky, with multiple scenes often playing on screen simultaneously. As Baillie explains on the podcast, the team had to reinvent editorial count sheets and use a special in-house comp team working in After Effects as part of the editorial process.

LED wall

An LED wall with content from Unreal Engine was used outside the primary window. As some backgrounds needed to be replaced, the team also used the static camera to shoot helpful motion-control-style matte passes (the "disco passes").

The disco passes

For the imagery in the background, Baillie knew it would take a huge amount of effort to add the fine detail needed in Unreal Engine. He liked the UE output, "but we wanted a lot of fine detail for the 4K master." Once the environment artists had made their key creative choices, one of the boutique studios and the small in-house team used an AI-powered tool called Magnific to up-res the images. Magnific was built by Javi Lopez (@javilopen) and Emilio Nicolas (@emailnicolas), two indie entrepreneurs, and it uses AI to infer additional detail.
The advanced AI upscaler and enhancer effectively reimagines much of the detail in the image, guided by a prompt and parameters.

Before (left), after (right)

Magnific allowed for an immense amount of high-frequency detail that would have been very time-consuming to add traditionally.

Here has not done exceptionally well at the box office (and, as you will hear in the next fxguide VFXShow podcast, not everyone liked the film), but there is no doubt that the craft of filmmaking and the technological advances are dramatic. Regardless of any plot criticisms, the film stands as a testament to technical excellence and innovation in the field. Notably, the production respected data provenance in its use of AI. Rather than replacing VFX artists, AI was used to complement their skills, empowering an on-set and post-production team to bring the director's vision to life. While advances in AI can be concerning, in the hands of dedicated filmmakers these tools offer new dimensions in storytelling, expanding what's creatively possible.
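To make the on-set preview idea concrete, here is a minimal sketch of a dual-monitor preview loop with a fixed frame delay. It is purely illustrative and is not Metaphysic's actual pipeline: the swap_face() function is a hypothetical stand-in for a neural face-swap inference call, and OpenCV is assumed for capture and display.

```python
from collections import deque

import cv2  # assumes OpenCV is installed and a camera is available

FRAME_DELAY = 6                       # approximate latency quoted for the on-set preview
capture = cv2.VideoCapture(0)         # stand-in for the live camera feed
buffer = deque(maxlen=FRAME_DELAY)    # holds frames while (simulated) inference catches up


def swap_face(frame):
    """Hypothetical stand-in for a neural face-swap inference call."""
    return frame  # a real system would return the de-aged render here


while True:
    ok, frame = capture.read()
    if not ok:
        break
    buffer.append(frame)
    cv2.imshow("raw feed", frame)                          # monitor 1: camera feed
    if len(buffer) == FRAME_DELAY:
        delayed = buffer[0]                                # oldest buffered frame
        cv2.imshow("de-aged preview", swap_face(delayed))  # monitor 2: swapped feed
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

capture.release()
cv2.destroyAllWindows()
```

In a real deployment the delay comes from model inference rather than an artificial buffer, but the two-monitor layout and the roughly six-frame lag described above are the same idea.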
  • WWW.FXGUIDE.COM
    Agatha All Along with Digital Domain
Agatha All Along, helmed by Jac Schaeffer, continues Marvel Studios' venture into episodic television, this time delving deeper into the mystique of Agatha Harkness, a fan-favourite character portrayed by Kathryn Hahn. This highly anticipated Disney+ miniseries, serving as a direct spin-off from WandaVision (2021), is Marvel's eleventh television series within the MCU and expands the story of magic and intrigue that WandaVision introduced.

Filming took place in early 2023 at Trilith Studios in Atlanta and on location in Los Angeles, marking a return for many of the original cast and crew from WandaVision. The production drew on its predecessor's visual style but expanded it with a rich, nuanced aesthetic that emphasises the eerie allure of Agatha's character. By May 2024, Marvel announced the official title, Agatha All Along, a nod to the beloved song from WandaVision that highlighted Agatha's mischievous involvement in the original series. The cast features an ensemble including Joe Locke, Debra Jo Rupp, Aubrey Plaza, Sasheer Zamata, Ali Ahn, Okwui Okpokwasili, and Patti LuPone, all of whom bring fresh energy to Agatha's world. Schaeffer's dual role as showrunner and lead director allows for a cohesive vision that builds on the MCU's expanding exploration of side characters. After Loki, Agatha All Along has been one of the more successful spin-offs, with audience numbers actually growing during the season as the story progressed. The series stands out for its dedication to character-driven narratives, enhanced by impressive technical VFX work and a unique blend of visuals.

Agatha All Along picks up three years after the dramatic events of WandaVision, with Agatha Harkness breaking free from the hex that imprisoned her in Westview, New Jersey. Devoid of her formidable powers, Agatha finds an unlikely ally in a rebellious goth teen who seeks to conquer the legendary Witches' Road, a series of mystical trials said to challenge even the most powerful sorcerers. This new miniseries is a mix of dark fantasy and supernatural adventure. It reintroduces Agatha as she grapples with the challenge of surviving without her magic. Together with her young protégé, Agatha begins to build a new coven, uniting a diverse group of young witches, each with distinct backgrounds and latent abilities. Their quest to overcome the Witches' Road's formidable obstacles becomes not only a journey of survival but one of rediscovering ancient magic, which, in turn, requires some old-school VFX.

When approaching the visual effects in Agatha All Along, the team at Digital Domain once again drew on their long history of VFX, adapting to the unique, old-school requirements set forth by production. Under the creative guidance of VFX Supervisor Michael Melchiorre and Production VFX Supervisor Kelly Port, the series' visuals present a compelling marriage between nostalgia and cutting-edge VFX. What's remarkable is the production's call for a 2D compositing approach that evokes the style of classic films. The decision to use traditional compositing not only serves to ground the effects but also gives the entire series a unique texture, a rare departure in a modern era dominated by fully rendered 3D environments. Each beam of magic, carefully crafted with tesla coil footage and practical elements in Nuke, gives the witches their distinctive looks while adding a sense of raw, visceral energy. For the broom chase, Digital Domain took inspiration from the high-speed speeder-bike scenes in Return of the Jedi.
Working from extensive previs by Matt McClurg's team, the artists skillfully blended real set captures with digital extensions to maintain the illusion of depth and motion. The compositors' meticulous work, layering up to ten plates per shot, ensured each broom-riding witch interacted correctly with the environment. The ambitious sequence demonstrates technical finesse and a dedication to immersive storytelling.

In the death and ghost sequences, Digital Domain took on some of the series' most challenging moments. From Agatha's decaying body to her rebirth as a spectral entity, these scenes required a balance of CG and 2D compositing that maintained Kathryn Hahn's performance nuances while delivering a haunting aesthetic. Drawing from '80s inspirations like Ghostbusters, compositors carefully retimed elements of Hahn's costume and hair, slowing them to achieve the ethereal look mandated by the production.

As Agatha All Along unfolds, the visuals reveal not only Digital Domain's adaptability but also a nod to the history of visual effects, an homage to both their own legacy and classic cinema. By tackling the limitations of a stripped-down toolkit with ingenuity, Digital Domain enriched the story with fresh yet nostalgically layered visuals. Agatha All Along stands out for its blend of good storytelling and layered character development. Each trial on the Witches' Road reveals more about Agatha and her evolving bond with her young ally, adding new depth to her character and expanding the lore of the MCU. Fans of WandaVision will find much to love here, as Agatha's story unfolds with complex VFX and a touch of wicked humor.
  • WWW.FXGUIDE.COM
    VFXShow 288: The Penguin
This week, the team discusses the visual effects of HBO's limited series The Penguin, a spin-off from director Matt Reeves' The Batman.

The Penguin is a miniseries developed by Lauren LeFranc for HBO, based on the DC Comics character of the same name. The series is a spin-off from The Batman (2022) and explores Oz Cobb's rise to power in Gotham City's criminal underworld immediately after the events of that film. Colin Farrell stars as Oz, reprising his role from The Batman. He is joined by Cristin Milioti, Rhenzy Feliz, Deirdre O'Connell, Clancy Brown, Carmen Ejogo, and Michael Zegen. Join the team as they discuss the complex plot, effects, and visual language of this highly successful miniseries.

The Penguin premiered on HBO on September 19, 2024, with eight episodes. The series has received critical acclaim for its performances, writing, direction, tone, and production values.

The VFX were made by Accenture Song, Anibrain, FixFX, FrostFX, Lekker VFX, and Pixomondo. The Production VFX Supervisor was Johnny Han, who also served as 2nd unit director. Johnny Han is a twice Emmy-nominated and Oscar-shortlisted artist and supervisor.

The supervillains this week are:
Matt "Bane" Wallin * @mattwallin www.mattwallin.com. Follow Matt on Mastodon: @[emailprotected]
Jason "Two Face" Diamond @jasondiamond www.thediamondbros.com
Mike "Mr Freeze" Seymour @mikeseymour. www.fxguide.com. + @mikeseymour

Special thanks to Matt Wallin for the editing & production of the show with help from Jim Shen.
  • WWW.FXGUIDE.COM
Adele's World Record LED Concert Experience
Adele's recent concert residency, Adele in Munich, wasn't just a live performance; it was a groundbreaking display of technology and design. Held at the custom-built Adele Arena at Munich Messe, this ten-date residency captivated audiences with both the music and an unprecedented visual experience. We spoke to Emily Malone, Head of Live Events, and Peter Kirkup, Innovation Director, at Disguise about how it was done. Malone and Kirkup explained how their respective teams collaborated closely with Adele's creative directors to ensure a seamless blend of visuals, music, and live performance.

Malone explained the process: "The aim was not to go out and set a world record; it was to build an incredible experience that allowed Adele's fans to experience the concert in quite a unique way. We wanted to make the visuals feel as intimate and immersive as Adele's voice." To achieve this, the team used a combination of custom-engineered hardware and Disguise's proprietary software, ensuring the visuals felt like an extension of Adele's performance rather than a distraction from it.

The Adele Arena wasn't your typical concert venue. Purpose-built for Adele's residency, the arena included a massive outdoor stage setup designed to accommodate one of the world's largest LED media walls. The towering display, which dominated the arena's backdrop, set a new benchmark for outdoor live visuals, allowing Adele's artistry to be amplified on a scale rarely seen in live music.

Adele's Munich residency played host to more than 730,000 fans from all over the world, reportedly the highest turnout for any concert residency outside Las Vegas. "We are proud to have played an essential role in making these concerts such an immersive, personal and unforgettable experience for Adele's fans," says Emily Malone, Head of Live Events at Disguise.

Thanks to Disguise, Adele played to the crowd with a curved LED wall spanning 244 meters, approximately the length of two American football fields. The LED installation was covered with 4,625 square meters of ROE Carbon 5 Mark II (CB5 MKII) in both concave and convex configurations. As a result, it earned the new Guinness World Record title for the largest continuous outdoor LED screen. The lightweight and durable design of the CB5 MKII made the installation possible, while its 6,000-nit brightness and efficient heat dissipation ensured brilliant, vibrant visuals throughout the outdoor performance.

With over 20 years of experience powering live productions, Disguise technology has driven an enormous variety of outdoor performances and concerts. For Adele's Munich residency, Disguise's team implemented advanced weatherproofing measures and redundant power systems to ensure reliability. Using Disguise's real-time rendering technology, the team was able to adapt and tweak visuals instantly, even during Adele's live performances, ensuring a truly immersive experience for the audience.

Adele in Munich took place over 10 nights in a bespoke, 80,000-capacity stadium. This major event called for an epic stage production. Having supported Adele's live shows before, Disguise helped create, sync, and display visuals on a 4,160-square-metre LED wall assembled to look like a strip of folding film. Kirkup was part of the early consultation for the project, especially regarding its feasibility and the deliverability of the original idea for the vast LED screens.
"There was a lot of discussion about pixel pitch and fidelity, especially as there was an additional smaller screen right behind the central area where Adele would stand. The question was raised whether this should be the same LED product as the vast main screen or something more dense; in the end, they landed on using the same LEDs for the best contiguous audience experience," he explained.

The Munich residency was unique; there was no template for the team, but their technology scaled to the task. "The actual implementation went incredibly smoothly," explains Malone. "There was so much pre-production; every detail was thought about so much by all the collaborators on the project. As there was so little time to get the stage and LEDs built on site, it was all extensively pre-tested before the final shipping. It would be so hard to fault-find on location. I mean, it took me 15 minutes just to walk to the other end of the LED wall, and lord forbid if you forgot your radio or that one cable and you had to walk back for anything!"

Across the ten shows, the two-hour concerts generated 530 million euros for the city of Munich from concert attendance, with each night playing to a stadium capacity of 80,000 fans. Eight Disguise GX3 servers were used to drive the LED wall, and 18 Disguise SDI VFC cards were required. A total pixel count of 37,425,856 was being driven, split over three actors:

Actor 1: Left, 7748 x 1560
Actor 2: Centre, 2912 x 936 + scrolls and infill 5720 x 1568 + lift 3744 x 416
Actor 3: Right, 7748 x 1560

Disguise's Designer software was used to preview visuals before going on stage and to control them during the show, which was sequenced to timecode. Given the nature of the live event, there was a main and backup system for full 1:1 redundancy. The footage of Adele singing was shot in 4K with Grass Valley cameras. "With a live performance, there is a degree of unpredictability," says Malone. There was a tight set list of songs that did not change from night to night, all triggered by timecode, but structures were built in so that if Adele wanted to speak to the audience or do something special on any night, they could get her close-up face on screen very quickly. Additionally, there is a major requirement to be able to take over the screens at a moment's notice for safety messages should something completely unexpected happen. In reality, the screens serve many functions: they are there so the audience can see the artist they came for, but they also serve safety, the venue, and the suppliers.

This was the first time Adele has played mainland Europe since 2016, and the 36-year-old London singer signed off her final Munich show warning fans, "I will not see you for a long time." Adele, who last released the album 30 in 2021, is set to conclude her shows at The Colosseum at Caesars Palace this month and is not expected to tour again soon. Both the Vegas and Munich residencies give the singer a high level of creative and logistical control compared with normal live touring, a luxury option only available to entertainment's biggest stars. Such residencies also allow for investment in bespoke technical installations. The Munich LED stage would simply not be viable to take on a world tour, both due to its size and because it was crafted explicitly for the German location.

Disguise is at the heart of this new era of visual outdoor experiences, where one powerful integrated system of software, hardware, and services can help create the next dimension of real-time concert.
They have partnered with the biggest entertainment brands and companies in the world, including Disney, Snapchat, Netflix, ESPN, U2 at the Sphere, the Burj Khalifa, and Beyoncé. Thanks to the massive technical team, for Adele's fans Adele in Munich was more than a concert: it was an immersive experience, seamlessly blending state-of-the-art visuals with world-class music.
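As a quick sanity check on the pixel budget quoted above, the per-actor panel resolutions do add up to the 37,425,856 figure. A few lines of Python confirm the arithmetic:

```python
# Sanity check of the pixel budget quoted for the three Disguise actors.
surfaces = {
    "Actor 1 (left)": [(7748, 1560)],
    "Actor 2 (centre + scrolls/infill + lift)": [(2912, 936), (5720, 1568), (3744, 416)],
    "Actor 3 (right)": [(7748, 1560)],
}

total = 0
for name, panels in surfaces.items():
    pixels = sum(w * h for w, h in panels)
    total += pixels
    print(f"{name}: {pixels:,} px")

print(f"Total: {total:,} px")  # prints 37,425,856, matching the quoted figure
```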
  • WWW.FXGUIDE.COM
    Slow Horses
Slow Horses is an Emmy-nominated, funny espionage drama that follows a team of British intelligence agents who serve in a dumping-ground department of MI5 due to their career-ending mistakes, nicknamed the Slow Horses (from Slough House). The team is led by the brilliant but cantankerous and notorious Jackson Lamb (Academy Award winner Gary Oldman). In season 4, the team navigates the espionage world's smoke and mirrors to defend River Cartwright's (Jack Lowden) father from sinister forces.

Academy Award winner Gary Oldman as Jackson Lamb

Season 4 of Slow Horses premiered on September 4, 2024, on Apple TV+. It is also subtitled Spook Street, after the fourth book of the same name. Union VFX handled the majority of the visual effects, and the VFX Supervisor was Tim Barter (Poor Things).

In the new season, the key VFX sequences beyond clean-up and stunt work included the London explosion, the Paris château fire, the explosion in the canal, and the destruction of the Westacres shopping mall. Union had done some work on season 3, and they were happy to take an even more prominent role in the new season. In season 4, they had approximately 190 shots and 11 assets. For season 3, they worked on approximately 200 shots and 20 assets but were not the lead VFX house.

https://www.fxguide.com/wp-content/uploads/2024/10/Slow-Horses--Season-4Trailer.mp4

Union VFX is an independent, BAFTA-winning visual effects studio founded in 2008, based in Soho, with a sister company in Montréal. Union has established a strong reputation for seamless invisible effects on a wide range of projects, building strong creative relationships with really interesting directors including Danny Boyle, Susanne Bier, Martin McDonagh, Marjane Satrapi, Sam Mendes, Fernando Meirelles, and Yorgos Lanthimos.

The Union VFX team used a mixture of practical effects, digital compositing, and digital doubles/face replacement to achieve the desired VFX for the show. Interestingly, at one point a hand grenade had to be tossed into a canal after it was placed in River's hoodie. The water explosion was done fully digitally, not only for the normal VFX reasons one might imagine, such as an explosion going off near the hero actors, but also because the water in the canal isn't actually fit to be splashed on anyone; it just isn't clean water. Similarly, the shopping mall at Westacres, which was meant to have 214 retail stores, 32 restaurants, and 8 cinema screens, was not actually blown up; in fact, the location wasn't even in London, and the background was all done with digital matte paintings to look like a real Westfield shopping centre, hence the fictional equivalent's similar name.

The season opens with a suicide bomb, carried out by Robert Winters, going off at the Westacres shopping mall in London. After Winters publishes a video confessing to the attack, a police force breaks into his flat, but three of the MI5 Dogs are killed by a booby trap.
This explosion was genuinely shot on a backlot and then integrated into the plate photography of a block of flats.

The Park, the hub of the MI5 operations, has been seen since season one, but each season it is slightly different. This led Tim Barter to analyse all the previous seasons' work to build a conceptual model of what The Park building would actually look like, so that season four could have continuity across various interior, exterior, and complex car park shots.

"In season four there was a requirement to do probably one night and five daytime aerial views of it from different angles," explains Tim Barter. "We got to create whole sections of the Park that have never been created before, so I was there going through the previous seasons, looking at all the peripheral live-action shots that were all shot in very different actual locations. It's like, there is this section where River comes out of the underground car park, and then he gets into this little door over here, which then goes through here on the side of this. And all the time, I'm trying to retroactively create that (digital) architecture of the Park, to be faithful to the previous seasons."

After Harkness breaks into Molly's apartment and forces her to give him her security credentials, he tracks River's convoy of Dogs and sends the assassin Patrice to intercept it. After slamming a dump truck into the convoy, Patrice kills four Dogs and kidnaps River. This extensive sequence was shot in London at night in the financial district, but that part of London is still an area where a lot of people live. "So there is no opportunity to have the sound of guns or blank muzzle flashes," Tim explains. "It was all added in post." The dump truck that smashes into the SUV was also not able to be done in London. It was filmed at a separate location, and then the aftermath was recreated by the art department in the real financial district for filming. "The dump truck actually ramming the car was shot with green screens and black screens and lots of camera footage. We actually used much less than we shot, but we did use the footage to make up a series of plates so we could composite it successfully over a digital background."

River Cartwright (Jack Lowden), a British MI5 agent assigned to Slough House.

For the scene where Jackson Lamb hits the assassin with a taxi, they started with an interior garage as a clean plate, then had a stunt performer on wires tumbling over it from various angles, and then married these together. "Actually, we ended up marrying three live-action plates. We had the garage, the green screen plate of the stunt actor, but then we also got clean plates of the garage interior as we were removing and replacing certain things in the garage," Tim comments. "We also had to do some instances of face replacement for that."

Another instance of face replacement was River's half-brother, who gets killed at the beginning of the series in River's father's bathroom. Originally, this was a dummy in the bathtub, but it looked a bit too obvious that it was fake, so an actor was cast and the team re-projected the actor's face onto the dummy in Nuke. Of course, there was also a lot of blood and gore in the final shot.

Hugo Weaving as Frank Harkness, an American mercenary and former CIA agent.

The series was shot at 4K resolution, with the exception of some drone footage that had to be stabilised and used for visual effects work; in some instances the drone footage was 6K.
This allowed extra room to tighten up the shots, stabilise them, and match them to any practical SFX or explosions.
  • WWW.FXGUIDE.COM
    Wonder Dynamics up the game with AI Wonder Animation
One popular application of early GenAI was using style transfer to create a cartoon version of a photograph of a person. Snapchat also enjoyed success with Pixar-style filters that made a person appear to be an animated character, but these could be considered, effectively, image processing.

Runway has recently shown Act-One, a new tool for artists to generate expressive, controllable character performances using Gen-3 Alpha. Act-One can create cartoon animations using video and voice performances as inputs to generative models, turning live-action input into expressive animated content. Wonder Dynamics has escalated this to a new and interesting level with Wonder Animation, but outputting 3D rather than 2D content.

Wonder Dynamics, an Autodesk company, has announced the beta launch of Wonder Studio's newest feature: Wonder Animation, which is powered by a first-of-its-kind video-to-3D scene technology that enables artists to shoot a scene with any camera in any location and turn the sequence into an animated scene with CG characters in a 3D environment.

Wonder Animation

The original Wonder Studio video-to-3D character

In May, Autodesk announced that Wonder Dynamics, the makers of Wonder Studio, would become part of Autodesk. Wonder Studio first broke through as a browser-based platform that allowed people to use AI to replace a person in a clip with a computer-generated character. It effortlessly allowed users to replace a live-action actor with a mocap-driven version of the digital character. The results and effectiveness of the original multi-tool machine learning/AI approach were immediately apparent. From shading and lighting to animation and ease of use, Wonder Studio was highly successful and had an impact almost immediately.

The most innovative part of the new Wonder Animation video-to-3D scene technology is its ability to assist artists while they film and edit sequences with multiple cuts and various shots (wide, medium, close-ups).

Maya export

The technology then uses AI to reconstruct the scene in a 3D space and matches the position and movement of each camera's relationship to the characters and environment. This essentially creates a virtual representation of an artist's live-action scene, containing all camera setups and character body and face animation in one 3D scene. Note that it does not convert the video background environment into specific 3D objects, but it allows the 3D artist to place the Wonder Dynamics 3D characters into a 3D environment where, before, they could only be placed back into the original video background.

This is entirely different from a style-transfer or image-processing approach. The output from Wonder Animation's video-to-3D scene technology is a fully editable 3D animation, containing 3D animation, character, environment, lighting, and camera tracking data that can be loaded into the user's preferred software, such as Maya, Blender, or Unreal.

Even though there have been tremendous advancements in AI, there is a current misconception that AI is a one-click solution, but that's not the case. The launch of Wonder Animation underscores the team's focus on bringing the artist one step closer to producing fully animated films while ensuring they retain creative control.
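Because the output is standard scene data rather than rendered pixels, bringing it into a DCC is an ordinary import step. As a minimal sketch, assuming the exported scene arrives as an FBX file (an assumption for illustration; the file path is a placeholder and the actual export formats may differ), Blender's bundled Python API can pull it in and list the tracked cameras and character armatures:

```python
# Run inside Blender's scripting workspace.
import bpy

# Import the (assumed) FBX export of the Wonder Animation scene.
bpy.ops.import_scene.fbx(filepath="/projects/shot010/wonder_animation_export.fbx")

# List what came across: cameras (one per tracked setup) and armatures (character animation).
for obj in bpy.context.scene.objects:
    if obj.type in {"CAMERA", "ARMATURE"}:
        print(obj.type, obj.name)
```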
Unlike the black-box approach of most current generative AI tools on the market, the Wonder Dynamics tools are designed to allow artists to actively shape and edit their vision instead of just relying on an automated output.

Wonder Animation's beta launch is now available to all Wonder Studio users. The team aims to bring artists closer to producing fully animated films. "We formed Wonder Dynamics and developed Wonder Studio (our cloud-based 3D animation and VFX solution) out of our passion for storytelling coupled with our commitment to make VFX work accessible to more creators and filmmakers," comments co-founder Nikola Todorovic. "It's been five months since we joined Autodesk, and the time spent has only reinforced that the foundational Wonder Dynamics vision aligns perfectly with Autodesk's longstanding commitment to advancing the Media & Entertainment industry through innovation."

Here is the official release video:
  • WWW.FXGUIDE.COM
    Cinesite for a more perfect (The) Union
Netflix's The Union is the story of Mike (Mark Wahlberg), a down-to-earth construction worker who is thrust into the world of superspies and secret agents when his high school sweetheart, Roxanne (Halle Berry), recruits him for a high-stakes US intelligence mission. Mike undergoes rigorous training, normally lasting six months but condensed to under two weeks. He is given another identity, undergoes psychological tests, and is trained in hand-to-hand combat and sharpshooting before being sent on a complex mission with the highly skilled Roxanne. As one might expect, this soon goes south, and the team needs to save the Union, save the day, and save themselves.

Julian Farino directed the film, Alan Stewart did the cinematography, and Max Dennison was Cinesite's VFX supervisor. The film was shot on the Sony Venice using an AXS-R7 at 4K, with Panavision T Series anamorphic lenses.

Cinesite's VFX reel:

https://www.fxguide.com/wp-content/uploads/2024/10/Cinesite-The-Union-VFX-Breakdown-Reel.mp4

During training, Mike learns to trust Roxanne's instructions in a dangerous "running blind" exercise. The precipitous drop makes the sequence really interesting, and this was achieved with digital set removal so that the actors were not in mortal danger. Cinesite's clean compositing and colour correction allowed for the many car chase and complex driving sequences in the film, including a motorcycle chase, hazardous driving around London, and a very complex three-car chase.

The three-car chase with the BMW, Porsche, and Ford was shot in Croatia and primarily used pod drivers: drivers sitting in a pod on top of the car doing the actual driving while the actors below simulated driving. "We had to remove the pods, replace all the backgrounds, put new roofs on the cars, replace the glass, and on some wider shots do face replacement when stunt drivers were actually driving the cars without pods," Max outlines. The team did not use AI or machine learning for the face replacements; rather, all the face replacements were based on cyber scans of the actual actors. This was influenced by the fact that in the vast majority of cases the team didn't have to animate the actors' faces, as the action sequences are cut so quickly and the faces are only partially visible through the cars' windows. For the motorbike chase, the motorbikes were driven on location by stunt people whose faces were replaced with those of the lead actors. "We had scans done of Mark and Halle's faces so that we could do face replacement digitally through the visors of the helmets," explains Max Dennison.

"It is one of those films that liked to show London and all the sights of London, so we see Covent Garden, Seven Dials, Piccadilly Circus, a bit of a fun tourist trip," comments Max. Given the restrictions on filming on the streets of London, the initial plan was to shoot in an LED volume. Apparently, the filmmakers explored this but preferred instead to shoot green screen, and the results stack up very well. "When Mike comes out of the Savoy hotel and drives on the wrong side of the road, all those exterior environments were replaced by us from array photography," he adds.

Cinesite has a strong reputation for high-end seamless compositing and invisible visual effects work, but in The Union the script allowed for some big action VFX sequences, which are both exciting and great fun.
For the opening sequence of the suitcase exchange that goes wrong, the team was required to produce consistent volumetric effects, as at the beginning of that sequence it is raining and by the end it is not. Given that the shots were not filmed in order, nor with the correct weather, the team had about 20 VFX shots to transition from mild rain to clearing skies through complex camera moves and environments around London.

In addition to the more obvious big VFX work, there was wire removal, set extension, and cleanup work required for the action sequences. Although shot in and around the correct, crowded actual locations, there was still a need to use digital matte paintings and set extensions and to apply digital cleanup for many of the exteriors. For the dramatic fall through the glass windows, stunt actors fell using wires, and the team not only replaced the wires but also did all of the deep background and 3D environments around the fall sequence. In the end, the team also built 3D glass windows, as it was much easier to navigate the wire removal when they had control of the shattering glass. This was coupled with making sure that the clothes were not showing the harnesses or being pulled by the wires in ways that would give away how the shot was done.

The film shot in New Jersey (USA), London (England), Slovenia, Croatia, and Italy (street scenes). Principal photography was at Shepperton Studios, as the film was primarily London-based. A lot of the stunt work was done in Croatia, at a studio set built for the film, especially for the rooftop chase. Unlike some productions, the filmmakers sensibly used blue and green screen where appropriate, allowing the film to maximise the budget on screen and produce elaborate, high-octane chase and action sequences. In total, Cinesite did around 400 shots. Much of this was Cinesite's trademark invisible VFX work, based on clever compositing and very good eye-matching of environments, lighting, and camera focus/DOF.

Given where the film finishes, perhaps this is the start of a Union franchise? Films such as The Union are fun, engaging action films that have done very well for Netflix, often scoring large audiences even when not as serious or pretentious as the Oscar-nominated type of films that tend to gain the most publicity. And in the end, this is great work for the VFX artists and post-production crews.
  • WWW.FXGUIDE.COM
    fxpodcast #377: Virtually Rome For Those About To Die
For Those About To Die is an epic historical drama series directed by Roland Emmerich, a director known as the "master of disaster". This was his first move into series television, being very well known for his sci-fi epics such as Independence Day, Godzilla, The Day After Tomorrow, White House Down, and Moonfall.

The director, Roland Emmerich, on the LED volume (Photo by: Reiner Bajo/Peacock)

Pete Travers was the VFX supervisor on For Those About To Die. The team used extensive LED virtual production, with James Franklin as the virtual production supervisor. We sat down with Pete Travers and James Franklin to discuss the cutting-edge virtual production techniques that played a crucial role in the series' completion.

Those About To Die, Episode 101 (Photo by: Reiner Bajo/Peacock)

The team worked closely with DNEG, as we discuss in this week's fxpodcast. We discuss how virtual production techniques enhanced the efficiency and speed of the 1,800 scenes that were done with virtual production, how this meant the production only needed 800 traditional VFX shots to bring ancient Rome to life, and how it enabled the 80,000-seat Colosseum to be filled with just a few people.

The LED volume stages were at Cinecittà Studios in Italy, with a revolving stage and the main backlot right outside the stage door. As you will hear in the fxpodcast, there were two LED volumes. The larger stage had a rotating floor, which allowed different angles of the same physical set (inside the volume) to be filmed; as the floor rotated, so could the images on the LED walls.

From Episode 101 (in camera)

The actual LED set for that setup (Photo by: Reiner Bajo/Peacock)

We discuss in the podcast how the animals responded to the illusion of space that an LED stage provides, how the team managed scene changes so as not to upset the horses, and how one incident had the crew running down the street outside the stage chasing runaway animals!

The shot in camera

Behind the scenes of the same shot

The team shot primarily on the Sony Venice 2. The director is known for big wide-angle lens shots, but trying to film an LED stage on a 14mm lens can create serious issues.

From Episode 108 (final shot)

Crew on set, in front of the LED wall of the Colosseum. (Photo by: Reiner Bajo/Peacock)

The team also produced fully digital 3D VFX scenes.
  • WWW.FXGUIDE.COM
    Q&A with DNEG on the environment work in Time Bandits
Jelmer Boskma was the VFX Supervisor at DNEG on Time Bandits (Apple TV+). The show is a modern twist on Terry Gilliam's classic 1981 film. The series, about a ragtag group of thieves moving through time with their newest recruit, an eleven-year-old history nerd, was created by Jemaine Clement, Iain Morris, and Taika Waititi. It stars Lisa Kudrow as Penelope, Kal-El Tuck as Kevin, Tadhg Murphy as Alto, Roger Jean Nsengiyumva as Widgit, Rune Temte as Bittelig, Charlyne Yi as Judy, Rachel House as Fianna, and Kiera Thompson as Saffron.

In addition to the great environment work the company did, DNEG 360, a division of DNEG in partnership with Dimension Studio, delivered virtual production services for Time Bandits.

FXGUIDE: When did you start on the project?

Jelmer Boskma: Post-production was already underway when I joined the project in March 2023, initially to aid with the overall creative direction for the sequences awarded to DNEG.

FXGUIDE: How many shots did you do over the series?

Jelmer Boskma: We delivered 1,094 shots, featured in 42 sequences throughout all 10 episodes. Our work primarily involved creating environments such as the Fortress of Darkness, Sky Citadel, Desert, and Mayan City. We also handled sequences featuring the Supreme Being's floating head, Pure Evil's fountain and diorama effects, as well as Kevin's bedroom escape and a number of smaller sequences and one-offs peppered throughout the season.

FXGUIDE: And how much did the art department map this out, and how much were the locations down to your team to work out?

Jelmer Boskma: We had a solid foundation from both the art department and a group of freelance artists working directly for the VFX department, providing us with detailed concept illustrations. The design language and palette of the Sky Citadel especially were resolved to a large extent. For us it was a matter of translating the essence of that key illustration into a three-dimensional space and designing several interesting establishing shots. Additional design exploration was only required on a finishing level, depicting the final form of the many structures within the citadel and the surface qualities of the materials from which the structures were made.

The tone of the Fortress of Darkness environment required a little bit more exploration. A handful of concept paintings captured the scale, proportions, and menacing qualities of the architecture, but were illustrated in a slightly looser fashion. We focused on distilling the essence of each of these concepts into one coherent environment. Besides the concept paintings, we did receive reference in the form of a practical miniature model that was initially planned to be used in shot, but due to the aggressive shooting schedule could not be finished to the level where it would have worked convincingly. Nonetheless, it served as a key piece of reference for us to help capture the intent and mood of the fortress. Other environments, like the Mayan village, the besieged Caffa fortress, and Mansa Musa's desert location, were designed fully by our team in post-production.

FXGUIDE: The Mayan village had a lot of greens and jungle; were there many practical studio sets?

Jelmer Boskma: We had a partial set with some foliage for the scenes taking place on ground level. The establishing shots of the city, palace, and temple, as well as the surrounding jungle and chasm, were completely CG. We built as much as we could with 3D geometry to ensure consistency in our lighting, atmospheric perspective, and dynamism in our shot design.
The final details for the buildings, as well as the background skies, were painted and projected back on top of that 3D base. To enhance realism, the trees and other foliage were rendered as 3D assets, allowing us to simulate movement in the wind.

FXGUIDE: Were the actors filmed on green/blue screen?

Jelmer Boskma: In many cases they were. For the sequences within Mansa Musa's desert camp and the Neanderthal settlement, actors were shot against DNEG 360's LED virtual production screens, for which we provided real-time rendered content early on in production. To ensure that the final shots were as polished and immersive as possible, we revisited these virtual production backdrops in Unreal Engine back at DNEG in post. This additional work involved enhancing the textural detail within the environments and adding subtle depth cues to help sell the scale of the settings. Access to both the original Unreal scenes and the camera data was invaluable, allowing us to work directly with the original files and output updated real-time renders for compositing. While it required careful extraction of actors from the background footage shot on the day, this hybrid approach of virtual production and refinement in post ultimately led to a set of pretty convincing, completely synthetic environments.

FXGUIDE: Could you outline what the team did for the Fortress of Darkness?

Jelmer Boskma: The Fortress of Darkness was a complex environment that required extensive 3D modelling and integration. We approached it as a multi-layered project, given its visibility from multiple angles throughout the series. The fortress included both wide establishing shots and detailed close-ups, particularly in the scenes during the season's finale.

For the exterior, we developed a highly detailed 3D model to capture the grandeur and foreboding nature of the fortress. This included creating intricate Gothic architectural elements and adding a decay effect to reflect the corrosive, hostile atmosphere surrounding the structure. The rivers of lava, which defy gravity and flow towards the throne room, were art directed to add a dynamic and sinister element to the environment and reinforce the power Pure Evil commands over his realm.

Inside, we extended the practical set, designed by Production Designer Ra Vincent, to build out the throne room. This space features a dramatic mix of sharp obsidian and rough rock textures, which we expanded with a 3D background of Gothic ruins, steep cliffs, and towering stalactites. To ensure consistency and realism, we rendered these elements in 3D rather than relying on 2.5D matte paintings, allowing for the dynamic lighting effects like fireworks and lightning seen in episode 10.

FXGUIDE: What was the project format, 4K or 2K (HDR)? And what resolution was the project primarily shot at?

Jelmer Boskma: The project was delivered in 4K HDR (3840 x 2160 UHD), which was also the native resolution at which the plates were photographed. To manage render times effectively and streamline our workflow, we primarily worked at half resolution for the majority of the project. This allowed us to focus on achieving the desired creative look without being slowed down by full-resolution rendering.
Once the compositing was about 80% complete and creatively aligned with the vision of the filmmakers, we would switch to full-resolution rendering for the final stages.

The HDR component of the final delivery was a new challenge for many of us and required a significant amount of additional scrutiny during our tech check process. HDR is incredibly unforgiving, as it reveals any and all information held within each pixel on screen, whether it's within the brightest overexposed areas or hiding inside the deepest blacks of the frame.

FXGUIDE: Which renderer do you use for environment work now?

Jelmer Boskma: For Time Bandits we were still working within our legacy pipeline, rendering primarily inside of Clarisse. We have since switched over to a Houdini-centric pipeline where most of our rendering is done through RenderMan.

FXGUIDE: How completely did you have to make the sets? For example, for the Sky Citadel, did you have a clear idea of the shooting angles needed and the composition of the shots, or did you need to build the environments without full knowledge of how they would be shot?

Jelmer Boskma: I would say fairly complete, but all within reason. We designed the establishing shots as we were translating the concept illustrations into rough 3D layouts. Once we got a decent idea of the dimensions and scale of each environment, we would pitch a couple of shot ideas that we found interesting to feature the environment in. It would not have made sense to build these environments to the molecular level, as the schedule would not have allowed for that. In order to be as economical as possible, we set clear visual goals and ensured that we focussed our time only on what we were actually going to see on screen. There's nuance there, of course, as we didn't want to paint ourselves into a corner, but with the demanding overall scope that Time Bandits had, and with so many full CG environment builds to be featured, myself and DNEG's producer Viktorija Ogureckaja had to make sure our time was well balanced.

FXGUIDE: Were there any particular challenges to the environment work?

Jelmer Boskma: The most significant challenge was working without any real locations to anchor our environments. For environments like the Fortress of Darkness, Sky Citadel, Mayan City, and Caffa, we were dealing with almost entirely synthetic CG builds. For the latter two, we incorporated live-action foreground elements with our actors, but the core environments were fully digital.

Creating a sense of believability in completely CG environments requires considerable effort. Unlike practical locations, which naturally have imperfections and variations, CG environments are inherently precise and clean, which can make them feel less grounded in reality. To counteract this, we needed to introduce significant detail, texture, and imperfections to make the environments look more photorealistic.

Additionally, our goal was not just to create believable environments but also to ensure they were visually compelling. The production of these larger establishing shots consumed a significant portion of our schedule, requiring careful attention to both the technical and aesthetic aspects of the work. The contributions made by all of the artists involved on this show were vital in achieving both these goals. Their creativity and attention to detail were crucial in transforming initial concepts into visually striking final shots.
Reflecting on the project, it's clear that the quality of these complex environments was achieved through the skill and dedication of our artists. Their efforts not only fulfilled the project's requirements but also greatly enhanced the visual depth and supported the storytelling, creating immersive settings that, I hope, have managed to captivate and engage the audience.
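The HDR tech-check pass Boskma mentions can be approximated with a simple scan of a floating-point frame for the values an HDR grade will expose: stray NaNs, negative pixels, and extreme highlights. This is only a minimal sketch, not DNEG's actual QC tooling; it assumes the frame is already loaded as a NumPy array (EXR loading is left out), and the threshold is an arbitrary placeholder.

```python
import numpy as np


def hdr_tech_check(frame: np.ndarray, highlight_threshold: float = 100.0) -> dict:
    """Scan a float image (H x W x C) for values an HDR grade will make visible."""
    return {
        "nan_pixels": int(np.isnan(frame).sum()),
        "inf_pixels": int(np.isinf(frame).sum()),
        "negative_pixels": int((frame < 0.0).sum()),
        "min_value": float(np.nanmin(frame)),
        "max_value": float(np.nanmax(frame)),
        "pixels_above_threshold": int((frame > highlight_threshold).sum()),
    }


# Example with a synthetic frame standing in for a rendered UHD plate.
frame = np.random.rand(2160, 3840, 3).astype(np.float32)
frame[0, 0, 0] = np.nan  # the kind of stray value HDR delivery makes obvious
print(hdr_tech_check(frame))
```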
  • WWW.FXGUIDE.COM
    fxpodcast #378: Ray Tracing FTW, Chaos Project Arena short film we chat with the posse
Chaos has released Ray Tracing FTW, a short film that showcases its Project Arena virtual production toolset. Just before the film was released, we spoke with the director, writers, and producers, Chris Nichols, Daniel Thron, and Erick Schiele.

As you will hear in this hilarious and yet really informative fxpodcast, the team worked with some of the most well-known names in the industry and used virtual production tech that allowed them to do up to 30 setups during a standard 10-hour shoot day.

In the film, the V-Ray environment of an Old West town was designed by Erick Schiele and built by The Scope with the help of KitBash3D and TurboSquid assets. The production used this environment for everything from all-CG establishing shots and tunnel sequences to the background for a physical train car set, which read convincingly thanks to full ray tracing. The Director of Photography was Richard Crudo (Justified, American Pie), who captured nearly every shot in-camera, barring a massive VFX-driven train crash. The production's speed and flexibility were shown by the final 3D hacienda, which the team bought online and got on-screen in under 15 minutes.

Special thanks to Chris Nichols, director of special projects at the Chaos Innovation Lab and VFX supervisor/producer of Ray Tracing FTW.
  • WWW.FXGUIDE.COM
    VFXShow 287: Deadpool & Wolverine
This week, the team discusses the VFX of the smash hit Deadpool & Wolverine, which is the 34th film in the Marvel Cinematic Universe (MCU) and the sequel to Deadpool (2016) and Deadpool 2 (2018). And not everyone's reaction is what you might expect!

Shawn Levy directed the film from a screenplay he wrote with Ryan Reynolds, Rhett Reese, Paul Wernick, and Zeb Wells. Reynolds and Hugh Jackman star as Wade Wilson / Deadpool and Logan / Wolverine, alongside a host of cameos and fan-friendly references.

Swen Gillberg was the production VFX Supervisor, and Lisa Marra was the production VFX Producer. The VFX companies that brought this comic/action world to life included Framestore, ILM, Wētā FX, Base FX, Barnstorm VFX, Raynault VFX, and Rising Sun Pictures.

Deadpool & Wolverine premiered on July 22, 2024. It has grossed over $1.3 billion worldwide so far, becoming the 22nd-highest-grossing film of all time, the highest-grossing R-rated film of all time, and the second-highest-grossing film of 2024.

The Deadpool Multi-verse crew this week are:
WallinPool * @mattwallin www.mattwallin.com. Follow Matt on Mastodon: @[emailprotected]
DiamondPool @jasondiamond www.thediamondbros.com
SeeMorePool @mikeseymour. www.fxguide.com. + @mikeseymour

Special thanks to Matt Wallin for the editing & production of the show with help from Jim Shen.
  • WWW.FXGUIDE.COM
    Google's NotebookLM: everyone has a podcast about AI, even AI itself
NotebookLM is a personal AI research assistant powered by Google's LLM, Gemini 1.5 Pro. If you upload a document or PDF to NotebookLM, it can not only summarise the information, as Llama 3 or ChatGPT does, but also produce a podcast-like discussion between two people about the content. It is incredible. (See our tests below.)

I uploaded a 40-page, highly technical academic paper, and within a few minutes there was an 11-minute discussion about it, with jokes, slang, and apparent interest in the topic. The voices sound natural and conversational. If you have a 20-minute commute, you could upload a couple of complex SIGGRAPH papers and listen to your own personal podcast about them on your way to work. It is a jaw-dropping next step in conversational agents and LLMs.

Will it replace podcasting? Well, as brilliant as it is, it does not generate insightful opinions about the value or applicability of the technology; it does not discuss whether something will be successful or will require further development before it is adopted. It only discusses the PDF from the perspective of the paper itself, augmented by the LLM's general understanding of the topic. It is also not guaranteed to be accurate, it has no real understanding of the content, and it is not significantly more clever than ChatGPT or any other LLM.

It seems to work much like RAG + LLM. Retrieval-augmented generation (RAG) is an AI framework for improving the quality of LLM-generated responses by grounding the model in external sources of knowledge. Embedding vectors are produced that focus the LLM on the relevant material, so the system can use a general LLM while giving localised, specialised responses (a minimal sketch of this pattern appears at the end of this piece). In the case of NotebookLM, the grounded output is then filtered and presented as a conversation between two inferred commentators.

Test Drive (What NotebookLM said about this article)

Summary: The source discusses a new AI tool called NotebookLM, which uses a large language model (LLM) to summarise and discuss scientific research papers in a conversational format. It compares this tool to other AI frameworks like RAG (Retrieval-Augmented Generation) and explores potential impacts on the VFX industry. While recognising the potential for disruption, the source argues that these technologies may create new opportunities by enabling technical artists to better understand complex subjects and lead to the creation of novel visual experiences. The author emphasises the need for VFX professionals to adapt and leverage these advancements to ensure their continued relevance and value.

Audio Test Drive (This is NotebookLM discussing the article)

Here is a NotebookLM conversation audio version. Note that it made a mistake in the first minute regarding SIGGRAPH, but this software is labelled as Experimental.
https://www.fxguide.com/wp-content/uploads/2024/09/NotebookMLFXG.m4a

Test Drive (What NotebookLM said about my 40-page academic article)
https://www.fxguide.com/wp-content/uploads/2024/09/ISR.m4a

Impact for VFX?

The two voices sound remarkably natural, insanely so. Given the current trajectory of AI, we can only be a few beats away from uploading audio and having voice-cloned versions, so that these base responses could sound like you, your partner, or your favourite podcaster. The technology is presented by Google as a collaborative AI virtual research assistant. After all, the rate of essential advances coming out in this field alone makes keeping up to date feel impossible, so a little AI help sounds sensible, if not necessary. So why does this matter for VFX?
Is this the dumbing down of knowledge into Knowledge McNuggets, or is it a way to bridge complex topics so anyone can gain introductory expertise on even the most complex subject?

Apart from the obvious use of making complex subjects more accessible to technical artists, how does this impact VFX? I would argue that this, like the latest advances in Runway's video-to-video or Sora's GenAI, provides massive disruption, but it also invites our industry's creativity for technical problem-solving. GenAI videos are not engaging dramas or brilliant comedies. Video inference is hard to direct and complex to piece into a narrative. And NotebookLM will be hard-pushed to be as engaging as any two good people on a podcast. But these are insanely clever new technologies, so they invite people like VFX artists to make the leap from technology demos to sticky, engaging real-world use cases. My whole career, I have seen tech at conferences and then discussed it with friends later: I can't wait to see how ILM, Framestore, or Wētā FX will use that tech and make something brilliant to watch.

As an industry, we are suffering massive reductions in production volume that are hurting many VFX communities. I don't think this is due to AI, but in parallel to those structural issues, we need to find ways to make this tech useful. At the moment, it is stunningly surprising and often cool, but how do we use it to create entirely new viewer experiences that people want?

It is not an easy problem to solve, but viewed as input technology and not the final solution, many of these new technologies could create jobs. I don't believe AI will generate millions of new Oscar-level films, but I also don't believe it will be the death of our industry. Five years ago, it was predicted we'd all be in self-driving cars by now. It has not happened. Four years ago, radiologists were all supposed to be out of a job by now, and so it goes.

If we assume NotebookLM is both an incredibly spectacular jump in technology and not going to replace humans, what could you use it for? What powerful user experiences could you build with it? Theme park and location-based entertainment? AVP sport agents/avatars? A new form of gaming? A dog-friendly training tool?

AI is producing incredible affordances in visual and creative domains, so why can't the visual effects industry be the basis of a new Visual AI industry that takes this tech and really makes it useful for people?
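To make the RAG pattern mentioned earlier a little more concrete, here is a minimal, hypothetical sketch of retrieval-grounded prompting. It is not NotebookLM's implementation (Google has not published that pipeline); the embedding model, the toy document chunks, and the prompt wording are all assumptions for illustration, using the open sentence-transformers and NumPy packages.

```python
# Minimal sketch of retrieval-augmented generation (RAG), for illustration only.
# Assumes the sentence-transformers and numpy packages; NotebookLM's real pipeline
# (Gemini 1.5 Pro plus Google's own retrieval) is not public, so this is the generic pattern.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer('all-MiniLM-L6-v2')  # small open embedding model

# A toy "uploaded document", pre-split into chunks (a real system would parse the PDF).
chunks = [
    "Section 1: The paper proposes a neural method for relighting faces.",
    "Section 2: Training data consists of light-stage captures of 70 subjects.",
    "Section 3: Results are evaluated against ground-truth HDR renders.",
]
chunk_vectors = embedder.encode(chunks)  # one embedding vector per chunk

def retrieve(question, top_k=2):
    """Return the chunks whose embeddings are most similar to the question."""
    q = embedder.encode([question])[0]
    sims = chunk_vectors @ q / (
        np.linalg.norm(chunk_vectors, axis=1) * np.linalg.norm(q)
    )
    best = np.argsort(sims)[::-1][:top_k]
    return [chunks[i] for i in best]

def build_prompt(question):
    """Ground the LLM by pasting the retrieved chunks into the prompt."""
    context = "\n".join(retrieve(question))
    return (
        "Answer using only the source material below.\n"
        f"Source material:\n{context}\n\n"
        f"Question: {question}\n"
    )

# The resulting prompt would then be sent to whichever LLM you use;
# NotebookLM additionally turns the grounded answer into a two-voice podcast script.
print(build_prompt("What data was the method trained on?"))
```

The grounding step is what keeps the generated discussion anchored to the uploaded paper rather than to whatever the base model happens to remember, which is also why the output inherits the paper's perspective rather than offering independent critique.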