
Wonder Dynamics Helps Boxel Studio Embrace Machine Learning and AI
www.awn.com
Safe to say, machine learning and AI are here to stay, and it is this realization that led Boxel Studio to adopt and integrate Wonder Dynamics' Wonder Studio, a cloud-based 3D animation and visual effects software toolset that combines artificial intelligence with established tools, into their VFX pipeline. Boxel isn't alone in its belief in Wonder Studio's value. Just last year, Autodesk purchased Wonder Dynamics, integrating their cutting-edge technology into Flow, the tech leader's M&E cloud on Autodesk's Design and Make platform.

Boxel's partnership with Wonder Dynamics allowed the studio to implement a markerless motion capture system that made it possible to create 134 creature animation shots in six weeks for the final season of The CW's Superman & Lois. Boxel Studio was able to push things even further by developing custom Python-based retargeting tools to seamlessly integrate the machine learning motion data into their production rigs in Maya, Unreal Engine and Cascadeur.

"Our goal was always not about digital doubles but creating characters that don't exist in the real world, whether you're doing a film with an alien or a robot," states Nikola Todorovic, Founder, Wonder Dynamics. "We were very cautious about not building something to serve facial replacements or deepfakes. Tye Sheridan and I started this because every time we wrote something, it was sci-fi and animation, which would be a $100 million film. We started with this technology early on because we wanted to figure out how we could tell stories that are visually bigger but at the much lower end of the budget. Then we realized that, 'This is much bigger than us. Let's turn it into a platform.' That's how Wonder Studio was born."

Todorovic and Sheridan worked in stealth mode for four years, in part because they were early in the AI bonanza, and in part because their approach was unique. "This took us a while because we stumbled upon AI much earlier than when it became the hype of the industry," Todorovic explains.
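Boxel hasn't published its retargeting code, but the kind of Python-based joint retargeting mentioned above can be illustrated with a minimal sketch. Everything here is hypothetical — the joint names, the rig control names, and the rest-pose offset table — and a real tool would compose rotation offsets with quaternion math rather than naive per-axis Euler addition.

```python
# Hypothetical sketch of retargeting ML mocap data onto a production rig.
# Joint names and the offset table are illustrative, not Boxel's actual setup.

# Map joints as named by the mocap solver to rig control names.
SOURCE_TO_RIG = {
    "pelvis": "hips_ctrl",
    "spine_01": "spine_ctrl",
    "upperarm_l": "l_arm_ctrl",
    "upperarm_r": "r_arm_ctrl",
}

# Per-control Euler rotation offsets (degrees) to account for differing rest poses.
REST_POSE_OFFSETS = {
    "hips_ctrl": (0.0, 0.0, 0.0),
    "spine_ctrl": (0.0, 90.0, 0.0),
}

def retarget_frame(frame):
    """Map one frame of {source_joint: (rx, ry, rz)} onto rig controls."""
    out = {}
    for src_joint, rotation in frame.items():
        rig_ctrl = SOURCE_TO_RIG.get(src_joint)
        if rig_ctrl is None:
            continue  # joint not used by this rig
        ox, oy, oz = REST_POSE_OFFSETS.get(rig_ctrl, (0.0, 0.0, 0.0))
        rx, ry, rz = rotation
        out[rig_ctrl] = (rx + ox, ry + oy, rz + oz)
    return out

def retarget_clip(frames):
    """Retarget a whole clip, given as a list of per-frame joint dictionaries."""
    return [retarget_frame(f) for f in frames]
```

In production, the resulting values would then be keyed onto rig controls inside Maya, Unreal Engine or Cascadeur rather than returned as plain dictionaries.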
"AI was so sci-fi at the time, and it was hard to explain to people what we're talking about," he continues. "We also approached it a little bit differently in that we don't generate art. The data you get out of Wonder Studio, whether it's your camera track, your lighting info, or your animation data, can be put in a DCC that you already use, like Maya, Unreal Engine or Blender. It's meant to speed up your traditional visual effects pipeline without many requirements on the set. You don't need mocap suits or to shoot clean plates."

One of the prevailing fears about machine learning is that imagery will become generic. "We wanted to keep that movie magic, so for us, the lighting on your character is only going to be as good as it's lit on set," remarks Todorovic. "What our system does is analyze the lighting, color and noise from the plate. We're not generating that based on something else; that is the same for characters. It's the artist who is uploading the character. When we first launched, we worked on a project with the Russo brothers and hired professional artists. We asked them, 'What data and passes do you need to be able to plug this into your existing pipeline?' It was important for us to get that feedback from that side, and that's one of the reasons we don't have prompting on our platform. We're big believers that the performance should be driven by an actor, because I can't describe a performance in words as much as I want. Every actor is going to perform differently, and that's the beauty of the art. Every cinematographer is going to read a script and do a shot differently than another cinematographer. A director will pick some choices differently."

Noting that everyone is confusing tools meant for social media with tools meant for high-end filmmaking, Todorovic says, "Most of the AI video tools out there are meant for social media, and the goals are very different. Someone making something for social media is not necessarily going to spend that much time on certain choices."
"However, in filmmaking, you're going to spend a lot more time on every single detail to make sure it's tonally consistent throughout the film," he adds.

Part of Wonder Studio's recent evolution involves dealing with occlusion. According to Todorovic, "We knew that occlusion is a big issue, where the actor will go behind some object or behind another actor. We're actually releasing something called motion prediction, which uses AI to essentially guess what the characters or actors are doing when they're not visible. I don't believe we're building one tool that's going to take over the entire pipeline. I don't believe anybody's going to do that. My belief is that, just like now, we will have multiple tools in a pipeline. For us it is about how I can speed up that process but still communicate with all the other tools where I need to specialize. I like the Boxel Studio team because they're problem-solvers and are not scared to try different things inside of production."

Both companies firmly believe machine learning can be used constructively. "We should use the tools that we have on our hands to accelerate our creative process," notes Andres Reyes Botello, VFX Producer, Boxel Studio. "That is different from, 'Is it okay to simply grab whatever image you generated on the Internet without clarifying or guaranteeing that you have the chain of rights of that?'"

One example where new machine learning-based tools can significantly streamline laborious technical work is rotoscoping. "Even today, you have to hire an arsenal of people to go frame by frame with vectors and points to cut out the actor," remarks Botello. "In my opinion, that's grunt work. If the computer can do it automatically, you're actually liberating humans to do more creative work rather than repetitive, boring work. AI and machine learning are going to open a huge door for better stories and much more creativity. Everybody can become a storyteller, not only a technician."
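The rotoscoping automation Botello describes generally works by having a segmentation model output, for every pixel, a probability that it belongs to the actor, which is then converted into an alpha matte. A minimal sketch of that last step follows; the probability map, threshold, and soft-edge band are illustrative assumptions, not Wonder Studio's actual method.

```python
# Illustrative sketch: turning an ML segmentation output into a roto matte.
# The threshold and soft-edge band are hypothetical tuning values.

def probabilities_to_matte(prob_map, threshold=0.5, soft_band=0.1):
    """Convert per-pixel person probabilities (0..1) into an alpha matte.

    Pixels well above the threshold become fully opaque (1.0), pixels well
    below become transparent (0.0), and a narrow band around the threshold
    is ramped linearly to keep soft edges instead of a hard binary cut.
    """
    lo = threshold - soft_band
    hi = threshold + soft_band
    matte = []
    for row in prob_map:
        out_row = []
        for p in row:
            if p <= lo:
                out_row.append(0.0)
            elif p >= hi:
                out_row.append(1.0)
            else:
                out_row.append((p - lo) / (hi - lo))  # linear ramp across the band
        matte.append(out_row)
    return matte
```

The point of the sketch is the division of labor: the model does the frame-by-frame "vectors and points" work, while the artist keeps creative control over the threshold and edge softness.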
"Rather than be fearful about these technologies, we need to figure out how they help us to craft engaging stories," Botello adds.

Coming out of the recent writers' and actors' strikes, the realities of reduced production time and budget motivated the use of Wonder Studio on Superman & Lois. "If you were to present this type of solution to a big-budget film, they would have been more hesitant because it was something completely new," states Freddy Chávez Olmos, VFX Supervisor & Creative Director of AI & Innovation, Boxel Studio. "But we were lucky that the showrunners took our word, and we delivered the show. In Mexican society, we always figure things out even when we don't have the resources. It was cool seeing an animator doing his motion capture with a garbage bag, simulating Superman being carried up by Doomsday. We found the right people to give the right tools."

AI and machine learning also allowed for a closer creative partnership with stunts. "We were blessed with having Stunt Coordinator Rob Hayter and his amazing team," states Botello. "They had a real actor on set doing the motion of Doomsday. Prior to Wonder Dynamics, we had to grab that reference footage, and the animators would have to go by hand and try to replicate that, as well as record themselves on camera. It would take a good animator maybe three weeks to output eight seconds of animation. But now, with Wonder Dynamics, the animators can process the footage of the actual performer on set and, on the same day, get the mocap data inside their DCC, like Maya. Now the animator can say, 'What a cool move this performer made. What if I make it better by adding this extra motion?' Instead of spending three weeks to get something really cool, we are able to give two or three options to the showrunners."
"I do think that animators who embrace this will have more creative options to deliver to their clients," he says.

Machine learning requires a lot of GPU power, which makes cloud computing an important component for Wonder Studio. "Hopefully, Wonder Dynamics starts making money, because right now that computing power is expensive," remarks Botello. "The way that Wonder Studio works is you upload your video, and machine learning interprets that footage and recognizes the human motion, which is then retargeted onto a CG character rig. That is heavy in computing power and requires a lot of GPUs. The solution that Wonder Dynamics offers is affordable. We pay a particular amount and can process a lot of footage. It's more cost-effective than buying a couple of motion capture suits and buying cameras for triangulating that data. It is getting to the point in 2025 that rendering in the cloud makes more sense than having on-premises hardware that becomes obsolete."

Central to Boxel's use of Wonder Dynamics is their openness to new techniques and technologies. "I'm in this position because I was tired of the industry always being the same," Olmos shares. "It doesn't matter if you work on a big superhero movie or on a small-scale TV show; your job becomes repetitive. But being on this side of the equation, where you're doing more R&D, innovation and AI, that's something that excites me, and I can see other people on the team also getting excited, because they get to be creative."

Machine learning continues to alter the dynamics of the visual effects pipeline. "When you are compositing a shot, you need to rely on other resources, like match move and digital doubles, that you get at the end of the chain," states Olmos. "But now, compositors are embracing this technology because they can do so much more themselves. It lets them become more of a generalist in a way, which I love, because you get to learn other departments. Before, you were constrained."
"You were just a compositor or modeler," he notes.

Of course, successful integration of any new toolset requires understanding its limitations as well. "There are always limitations to new technologies," Olmos says. "You always want to have creative control. And in order to have creative control, you have to learn all the traditional tools needed to solve problems, or the workarounds needed to use the new generative AI or machine learning tools. Wonder Studio was a perfect solution, and its secret sauce is not creating the final rendered image but providing all of the components that go into that image. We can extract the motion capture as one of the elements and then give that to animators. Right now, we're more interested in the types of tools that give you control and allow you to iterate."

Olmos concludes, "You have to come up with a solution that doesn't exist. A lot of companies are catching up, but at the moment, their efforts are still explorative. We are one of the few who are taking a risk by exploring and figuring out things, because regardless of what you think about AI or machine learning, it's going to give you a competitive advantage."

Trevor Hogg is a freelance video editor and writer best known for composing in-depth filmmaker and movie profiles for VFX Voice, Animation Magazine, and British Cinematographer.