• Venture funding in Europe in 2024 fell to $45 billion, says Atomico
    techcrunch.com
    Funding for European tech appears to have stabilised in 2024 after dropping precipitously in 2023, but the signs continue to point to more tough times ahead, according to the latest State of European Tech report.
    The annual survey produced by European VC firm Atomico notes that startups in the region are on track to raise $45 billion this year. While far from the 50% drop of 2023, the figure is still down by $2 billion compared to a year ago. (Note: Atomico originally projected $45 billion for 2023; it has since revised 2023 up to $47 billion.)
    Atomico has been producing these reports annually for the last decade, so this latest edition makes a lot of noise about how much things have grown. It's undeniable that the tech ecosystem in Europe has blown up: Atomico says that there are now 35,000 tech companies in the region that could be classified as early stage, along with 3,400 late-stage companies and 358 valued at over $1 billion. Compare that to 2015, when there were a mere 7,800 early-stage startups, 450 late-stage startups and just 72 tech companies valued at over $1 billion. Yet there is a lot of sobering reading, too, about some of the challenges of the moment, and signs of how geopolitical and economic unrest, despite the shiny stories about the boom in AI, continues to weigh down the market.
    Here are some of the breakout stats:
    Exits have fallen off a cliff. This is one of the starker tables in the report, underscoring some of the liquidity pressure that ultimately trickles down to earlier-stage tech companies. Put simply, M&As and IPOs are close to non-existent right now in European tech. 2024, at the time of the report's publication in mid-November, saw just $3 billion in IPO value and $10 billion in M&A, according to S&P Capital figures. Both are big drops on the overall trend, which had otherwise seen steady rises in both, consistently surpassing the $50 billion-per-year threshold. (Granted, sometimes all it takes is one big deal to make a year. In 2023, for example, Arm's $65 billion IPO accounted for a full 92% of total IPO value, and clearly it didn't have the knock-on effect many had hoped for in kick-starting more activity.) Transaction volumes, Atomico notes, are at their lowest point in a decade.
    Debt on the rise. As you might expect, debt financing is filling in the funding gap, especially for startups raising growth rounds. So far this year, debt financing made up a full 14% of all VC investment, totalling some $4.7 billion. That's a big jump on last year, according to Dealroom's figures: in 2023, debt made up just $2.6 billion of financing, accounting for 5.5% of all VC investment. (A quick back-of-the-envelope comparison of those figures follows at the end of this item.)
    Average round sizes bounce back. Last year, the average size of every stage of funding from Series A to D declined in Europe, with only seed-stage rounds continuing to increase. However, amid an overall decline in the number of funding rounds in the region, those startups that do manage to close deals are, on average, raising more. Series A is now $10.6 million (2023: $9.3 million), Series B $25.4 million (2023: $21.3 million), and Series C $55 million (2023: $43 million). The U.S. continues to outpace Europe on round sizes overall.
    But don't expect rounds to be raised in quick succession.
    Atomico noted that the number of startups raising within a 24-month timeframe declined on average by 20%, and it has taken longer for a company to convert from Series A to Series B on what it calls compressed time frames of 15 months or less, with just 16% raising a Series B in that period in 2024. As you can see in the table below, the number of rounds this year is down on the year before.
    AI continues to lead the pack. As with 2023, artificial intelligence continued to dominate conversations. Atomico spells this out with a graphic showing the burst of AI mentions in earnings calls. And that has carried through as a strong theme among private companies. Between companies like Wayve, Helsing, Mistral, Poolside, DeepL and many others, AI startups have led the pack when it comes to the biggest venture deals this year in Europe, raising $11 billion in all. Yet even so, Atomico points out, Europe has a long way to go to close the gap with the U.S. in terms of AI funding. Thanks to outsized rounds for companies like OpenAI, all told the U.S. is shaping up to have invested $47 billion in AI companies this year. That's right: $2 billion more than all startup investment in Europe combined. The U.K. (thanks to Wayve) is currently the biggest market for AI funding in the region, it said.
    Valuations improving. After startup valuations bottomed out in 2023, Atomico writes, they are now heading back up, a lagged result of the slow return of activity in the public markets. Some of that is likely also due to the outsized rounds raised by certain companies in certain fields like AI. More generally, the rule appears to be that founders are more open to dilution on larger rounds at earlier stages, and that plays out as higher valuations. Then startups raising at later stages are picking up the pieces of that earlier exuberance and are raising down rounds, Atomico said. European startups continue to see valuations that are lower on average than those of their American counterparts, between 29% and 52% lower, Atomico notes. (In the graphic below, charting Series C, the average valuation for a U.S. startup is $218 million, compared to $155 million for a startup in Europe.)
    But sentiment is not. If confidence is a strong indicator of the health of a market, there might be some work ahead for the motivators out there. Atomico has been polling founders and investors annually, asking how they feel about the state of the market compared to a year ago, and 2024 appears to be a high watermark for low confidence. In a frank assessment of how founders and investors are viewing the market at the moment, a record proportion, respectively 40% and 26%, said they felt less confident than 12 months ago.
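    As promised above, a quick back-of-the-envelope on the Dealroom debt figures cited in this article. The numbers are exactly the ones quoted; the script itself is only an illustrative check, not anything from the report.

```python
# Dealroom figures as quoted above: debt financing in European VC.
debt_2023_bn, share_2023 = 2.6, 0.055   # $2.6B, 5.5% of all VC investment
debt_2024_bn, share_2024 = 4.7, 0.14    # $4.7B, 14% of all VC investment

# Year-over-year growth in debt volume, and in debt's share of all VC.
print(f"Debt volume growth: {debt_2024_bn / debt_2023_bn - 1:.0%}")  # ~81%
print(f"Share-of-VC growth: {share_2024 / share_2023:.1f}x")         # ~2.5x
```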
  • Topology for Animated Characters
    thegnomonworkshop.com
    Efficient Techniques using Maya & ZBrush with João Victor
    Learn how to build a complete character for animation with clean topology. Being a modeler in the animation industry puts you in a flexible position but also demands a lot of you, both artistically and technically. In this 2-hour workshop, João Victor, who has contributed to movies including Pixar's Inside Out 2, demonstrates the technical side of character modeling for animation, including the all-important topological solutions needed to ensure clean models that deform correctly when animated.
    This comprehensive workshop begins with the basics and works through to more advanced techniques using Maya and ZBrush. For João, topological tasks are like challenging games; artists need to create unique solutions for different characters to create meshes that are as efficient as possible. The work done by modelers is critical to the success of characters as they move through the animation pipeline to be used by various departments. João not only discusses the important topology concepts that professional modeling teams consider when preparing meshes for other departments in a studio environment but also provides professional tips and tricks to take into your personal and studio projects alike.
    WATCH NOW
  • Paramount Pictures Shares Final Gladiator II Trailer
    www.awn.com
    We're loving the battle rhino in the visually stunning, often brutal ancient Roman world of Sir Ridley Scott's Oscar-winning Gladiator sequel, coming to theaters November 22.
  • Paul Lambert Returns to Arrakis for Dune: Part Two VFX
    www.awn.com
    After striking Oscar gold in the spice fields of Arrakis for the visual effects in Dune, Paul Lambert returns to helm the VFX on the sequel, Dune: Part Two, which further expands the storytelling and the epic vision of filmmaker Denis Villeneuve. The story picks up with House Harkonnen taking over the prized planetary possession and decimating House Atreides, which has sought refuge with, and is gathering rebellion support from, the Indigenous desert dwellers known as the Fremen. In all, 2,156 shots were produced by DNEG, Wylie Co., and Territory Studio, with concept art provided by Rodeo FX and previs by MPC.
    "Denis Villeneuve sees the movie, and of course there are times he'll embrace a slightly different approach, but he knows what he wants," states the production VFX supervisor. "That's what makes it such a pleasure and joy to work with him. Because you know when you do a particular shot that the background isn't going to become something completely different. We have a certain trust with each other as to how we approach the visual effects. I know that if there has been a concept, we're going to stick to that, and that allows me to set up the shots in a way which is good for the overall composite. The idea of shooting sand screens came from me knowing what that background, or a proxy version of it, was going to be."
    Unlike Ridley Scott, who is known to use as many as 12 cameras at one time, Villeneuve and cinematographer Greig Fraser favor framing and lighting for a single camera. "Honestly, from a visual effects standpoint it's a godsend!" laughs Lambert. "The moment you start to add additional cameras, especially four, you know that there are going to be certain cameras that will always be compromised because you can't plan for four cameras."
    Great attention was paid to getting the desired backlight for each shot. According to Lambert, "I introduced Greig to the world of LiDAR scanning apps on an iPhone. When we were going on another recce you would look around and ask, 'Where's Greig?' And he'd be over somewhere scanning the earth or the actual rock structure. That geometry was brought into Unreal Engine."
    Shadows were critical for the Harkonnen harvester attack that takes place during daylight, in the middle of the desert, so special effects supervisor Gerd Nefzer and his team had industrial tractors hold large black screens in designated areas. "It's all about sun and shade," notes Lambert. "We needed to think of a visual way to understand what that shadow would do. We had an iPad with custom software from DNEG where we had a huge spice crawler in there. But we were also able to cast a shadow from a spice crawler. We could look at where the camera was going to be, see the structure and see where the shadow was on the ground. We definitely don't want to be running [the characters in the film] into this particular area because we don't have a real shadow for that. One of the main rules I had with Denis was, 'To try to keep things believable, I never want to change the lighting on a character.' If we shoot in the daylight and I try to make Paul look as if he's in shadow, it will look wrong. There's nothing I can do to make that look correct. Another rule in the desert is we never step through previous footprints, which meant whenever we destroyed an area with footprints we would move over. If we couldn't move over to shoot, we would then rake the sand. Trying to simulate that is a big old problem."
    Given the time of year in the United Arab Emirates, it was extremely difficult locating a backlit dune with the correct wind direction for the iconic moment where Paul Atreides rides a sandworm. "We found one, and that was the dune we would run along and then replicate to create the cascade," explains Lambert. "But also, when you see some closeups of Paul running on top of the dune, we had to replicate the peak of the dune because what we needed to do is create a physical collapsible dune. What we came up with were these three huge steel tubes attached to industrial tractors. These were embedded into the dune we had created. A stuntie attached to a wire, with a camera on a crane behind him, would run; we would then call out for those tubes to be pulled out, the dune would collapse, then the stuntie would fall down and the camera would follow. Because of the light direction, which we needed to match the photography, we could only do it at one particular time of day. Obviously, there was a massive reset. We could do it once a day in the morning before we went off and did other things. It took us four attempts. With that element of the stuntie falling down and the camera behind kicking up sand, what I had to do was extend out the rest of the dune collapsing up ahead and the worm coming out so that you felt as if you're way the heck higher. That's one of those things where if you get all the particulars correct it then works as a shot. Obviously, there is a lot of digital. But you have a basis that is always something real."
    The decision to deploy infrared cameras to emulate the black sun exterior environment of Giedi Prime brought with it some unpredictable results. "We did a multitude of tests before we started the main shoot," recalls Lambert. "We were going to shoot it outside between the two stadiums and on white sand, so it was a high contrast area where you had shade and this whiter than white. It was in this grey look. We tested everything. I even tested my gaffer tape for tracking markers. Cut to the day of the shoot. The three fighters appeared, and they looked great: muscular, ready to fight. And then we saw one of them through the infrared camera. They had covered his tattoos in makeup. However, in good old infrared, you get to see that. And he had tattoos all over his body! I asked Denis, 'Does this fit the aesthetic?' He said, 'No.' I had to have Wylie Co. remove that, which was a substantial job. But it's the day of the shoot. What are you going to do? We had to shoot."
    Lambert also needed to accommodate where the studio walls and buildings were casting shadows behind Feyd-Rautha and the other fighters. "I made a decision," he notes. "Rather than try to remove that - because if you go from something bright to something dark it's always going to look bad - I decided to keep those shadows and played them as if they were stadium shadows from the big towers. If you were to look at it as a whole, it wouldn't make sense, but in the fight sequence it works well."
    The fireworks on Giedi Prime scenes also evolved considerably during the shoot. "When Feyd and Lady Margot Fenring are walking down the hall and she seduces him, we built a set with these huge structures internally," explains Lambert. "The idea was we had fireworks outside that would have a certain pattern that would then play on the interactive light inside, and we would extend those. But the actual fireworks, where you see the burst - that went through a big development process.
    They looked completely different than what you see in the film. We had this idea of seeing holes in the atmosphere, and each time one opened it would open a blackness from within the white sky. But Denis was not keen on it until my producer found this little video of ink inside water, which Denis loved, and that's basically what it became!"
    Hardly veering from the original concept was the Orni Bee. "We built sections of the back of the Orni Bee for when Glossu Rabban is hanging off it and fighting the Fremen. He was holding onto a particular rig, which we built close to the ground but then played as if it was way higher. We also built the Orni Bee for when they're all taking off from Arrakeen to fight Paul out in the desert. It's a great big physical build. We didn't lift this one into the air because it was way heavier than the original ornithopter. When it's flying, we built some silks around it on the actual horizon, which we then augmented. Having a practical asset helps to inspire Denis and the actors, and in the end helps visual effects because I always have something to actually work from."
    Paul Atreides's vision of a cataclysm became a major visual effects scene. "We shot practical actors falling on the ground, who then got body parts replaced by CG to make them way thinner, while all the other characters you see in the background are all CG," states Lambert. "It became a big visual effects shot. At one point the look of that particular sequence was going to be way the heck far out there because when we were shooting the plates Greig chose to do some close focus work, and everything was just shapes. It would have been hard to actually get the work inside of those plates. We pulled back a little bit to what the actual visual was going to be."
    Along with the visions there were holograms. "Territory Studio did the Harkonnen tabletop in the city where you have Feyd and the Baron overseeing their bombing tactics on the Fremen," shares Lambert. "It was beautifully designed. Basically, there were some initial designs from Patrice, but then Territory Studio took those to the next level in designing how to visualize a war in progress, like seeing where the spaceships and trajectories were. Denis had a lot of back-and-forth and creative discussions directly with Territory Studio trying to get his story point across. It was a good relationship."
    Three puppeteers were responsible for the baby sandworm. "When the baby worm was underground, that was a special effects rig - a ball and chain being pulled," reveals Lambert. "Those movements you see in the sand are practical. Then the actual puppeteers would puppeteer the baby worm when it wraps itself around the actress. She carries it and puts it under the water. All the puppeteers are in the water moving the puppet. What we did in CG was, whenever you see the worm above the water, we would compress the scale. But the main part of that is a puppet. We deemed that was the best approach. There were a multitude of techniques used depending on [what was required for the shot]. It's a philosophy that Denis and I have had throughout our other movies. What is the best way to actually make something believable? To make sure that you don't know that I've done anything to it!"
    Trevor Hogg is a freelance video editor and writer best known for composing in-depth filmmaker and movie profiles for VFX Voice, Animation Magazine, and British Cinematographer.
  • Deadpool & Wolverine: Digi-Doubles Breakdown by Framestore
    www.artofvfx.com
    Breakdown & Showreels
    Deadpool & Wolverine: Digi-Doubles Breakdown by Framestore
    By Vincent Frei - 18/11/2024
    How do you turn actors, dogs and a flying head into digital superheroes? Framestore's latest making-of reveals their techniques for crafting the digital doubles of Deadpool & Wolverine!
    WANT TO KNOW MORE?
    Framestore: Dedicated page about Deadpool & Wolverine on the Framestore website.
    Matthew Twyford: Here's my interview with Matthew Twyford, VFX Supervisor at Framestore.
    Swen Gillberg & Lisa Marra: Here's my interview with Production VFX Supervisor Swen Gillberg and Production VFX Producer Lisa Marra.
    © Vincent Frei - The Art of VFX - 2024
  • Venom: The Last Dance: John Moffatt and Aharon Bourland, Production VFX Supervisors
    www.artofvfx.com
    Back in 2012, John Moffatt shared insights into DNEG's visual effects for Snow White and the Huntsman. Since then, he has overseen the visual effects on a wide range of shows, including Life, The 15:17 to Paris, Wonder Woman 1984, and Secret Invasion.
    Since starting her visual effects career at Tippett Studio in 2003, Aharon Bourland has been involved in the creation of visual effects for various films such as After Earth, Avengers: Infinity War, Ghostbusters: Afterlife, and The Matrix Resurrections.
    How was the collaboration with Director Kelly Marcel?
    John Moffatt // Kelly and I got on really well from the start of the show. Kelly is super talented and clear about her direction. She is also really collaborative and happy to listen to ideas and suggestions for how something may work. Through all of the stages of the movie's production we discussed ideas, reviewed concept material, and iterated on shots and assets. I really love working with Kelly.
    Aharon Bourland // Working with Kelly Marcel was such a creative and open experience. She really encouraged us to bring depth to the characters. With her background in writing, her focus was always on the story and how our characters could really add to it. One of my favorite examples is how she encouraged me to keep building up the character of Lasher. She started out as a small role, but by the end, she became one of the most memorable parts of the film!
    John, what fresh ideas did you bring to the Venom world?
    John Moffatt // I'm not sure that there are many fresh ideas; I think most good ideas are simple, and it's about how we as a group implement those ideas. One of the things that we had to do on this movie was see Venom in the daylight, something that had not been done in the previous two movies. I was very keen to test that early on in the production, and we did that with a good degree of success. Venom hasn't really got any diffuse component; he is pretty much all spec and reflection, so it was an exercise that required a fair bit of iteration before we landed on something that we all liked. In terms of ideas, and keeping things simple, I try to have clarity with the Director in terms of what they want and then look to give them as much time as possible to evolve the work as we move through the production.
    Aharon, John Moffatt joins you as VFX supervisor on this one. How did you integrate his contributions with the established visual style from your past experience?
    Aharon Bourland // John and I have a long history of working together, and our skills really complement each other. He comes from a compositing background and has an incredible eye for color and composition, while I lean more toward animation and FX. We each brought our strengths to the table to take Venom to the next level for this film.
    How did you organize the work between you two and with your VFX Producer?
    John Moffatt // I brought on Aharon Bourland as I think she is a fantastic talent and I love working with her. David Lee was our DNEG Supe, and we also worked with Paul Franklin, as we have known each other for a long time. At ILM, Simone Coco headed up the team. Greg Baxter was our Production-side Producer.
    Aharon Bourland // We divided up the VFX work by vendor. I was responsible for ILM, Digital Domain (DD), and Instinctual, while John handled DNEG and Territory Studio. After I came on, I also took care of the Post Vis with The Third Floor.
    Greg Baxter was our Producer across all the vendors, and Mickael Bec Velazquez worked alongside him as his Associate Producer. It was a smooth system that helped keep everything running efficiently!
    What is your role on set, and how do you work with other departments?
    John Moffatt // I enjoy being on set working with the team. My role is to try and be part of the team and shoot the best possible material for the movie, with a specific eye towards how the material we are shooting will work for VFX.
    Aharon Bourland // Before the actual shoot, we collaborate with all the departments to figure out what elements will be shot and what we'll need to cover. On set, our main job is working with the directors - Kelly Marcel for the main unit and Brian Smrz for second unit - to help everyone understand how the VFX will be integrated into the footage we're shooting. It can get pretty complex, especially when we're dealing with animated characters that don't exist in the real world. I often find myself acting out scenes on the day so people can better visualize what's going to happen. Sometimes, I'll do quick drawings over stills to help make things clearer. I also work closely with the camera team to make sure we capture enough clean plates and alternate angles, so we have everything we need to support the work as it evolves in post-production.
    How did you choose the various vendors and split the work amongst them?
    John Moffatt // We decided to use DNEG because they had a history with the other two movies; I also worked there for almost two decades and had a good relationship. We chose to work with ILM because they are fantastic, and we also had a great relationship with some of the key folk on the team they put forward for the job. We worked with Rodeo FX because I had worked with them before and they had been great partners, and we also worked with DD. We decided to put the entire third act with one vendor and the rest with another, but things changed shape as the work developed during production. Basically, ILM did the River and the Third Act, DNEG did most of the rest of it, Rodeo did the Beach Flashback, and DD did the Knull work.
    Aharon Bourland // We chose to work with DNEG because they had a solid history with the previous two movies, and John and I had worked there for many years, so we had great relationships to build on. ILM was an easy choice because they're fantastic, and we also had strong connections with some of the key people on their team for this project. We worked with Rodeo because John had partnered with them before, and they were great to work with, and we also collaborated with DD.
    How has the visual representation of Venom evolved across the three films in the saga? What new techniques were used in the latest chapter?
    John Moffatt // We wanted to give Venom the ability to give as emotional a performance as possible, so we developed new face shapes for him. We tweaked the shape of Wraith Venom's head to make it less flat across the top. But really, the main evolution was in the daylight look dev.
    Aharon Bourland // The look of Venom really evolved naturally over the three films. The first movie was kind of like a prototype - we were figuring out the basic forms, like the slug, wraith, and full symbiote. Since the whole movie took place at night, we mostly focused on how his skin reflected light. In the second movie, we focused more on refining Venom's animation and performance.
    A lot of the breakthroughs were actually with Carnage and how we used procedural animation to have him grow and even envelop a church. For the third and final chapter, we added a lot more detail to Venom's shading, especially so he would look better in daylight. We also had to upgrade the face animation system so Venom could express a wider range of emotions, which was really important for showing his relationship with Eddie. Plus, Venom got to take on some new forms in this movie - he became a fish, a frog, and even a horse! But even with these new looks, we made sure each one still felt like Venom. And we took everything we learned from the procedural animation and geometry generation from the big merge fight in the first film and Carnage's transformations in the second film to create the massive final form, the Venomphage, which was a mix of Venom and five Xenophages.
    How did you ensure that each symbiote in this film had a unique visual identity, from their colors to the way they interact with the environment?
    Aharon Bourland // Each symbiote started with research into their comic book versions, and from there, we evolved those into hero versions that fit the story. It was really important that each symbiote had a unique set of abilities that complemented the others and helped move the story forward. For example, Lava's red and yellow fire whip worked well with the tendril cage that Animal/Tendril used to restrain the Xenophage. Dr. Payne's symbiote was also tied thematically to her character. Her life was changed by a lightning bolt, and when she bonded with the symbiote, it gave her lightning-like abilities, bringing her story full circle.
    What challenges did you face in portraying the complex relationship between Venom and Eddie Brock, visually speaking, in this third film?
    John Moffatt // I had a good relationship with Tom (Hardy) and we discussed how he wanted to do things. Kelly and Tom are really close, and so we let Tom's performance drive how we animated Venom. Tom is really good at keeping his eyeline alive, and that constant vibrancy that he brings gave the animators great material to work with.
    How did the design and animation of the symbiotes' interactions with their hosts evolve in this chapter compared to previous films?
    Aharon Bourland // The rules from the first two movies mostly stayed the same, but we did loosen one rule a bit: the symbiotes could bond with their hosts more easily this time around. The big advancement was in the design of the symbiote army that helped Venom in the final battle. Each symbiote had its own unique abilities, which really influenced how they behaved and how the fight was choreographed. For example, Jim (the copper-brown symbiote) was a bruiser with powerful punching fists, while Lasher (the green-red symbiote) was a quick, slashing attacker. As a team, each symbiote played a key role in the fight. We also introduced something new by having two of the symbiotes combine to create a hybrid with new powers. We did this with Animal and Tendril, and it really added another layer to the action!
    Venom's abilities have grown and evolved over the trilogy. What new effects or abilities did you introduce for Venom in this film that were particularly challenging or exciting to create?
    Aharon Bourland // Having Venom bond with multiple different animals was the big addition to this film.
    In creating the Xenophage, what were the key design elements that made it a terrifying new creature?
    How did you approach its movement and texture?
    John Moffatt // We had a design that Kelly loved when I joined the show. Karl Lindberg had done a concept, and DNEG had done some evolution of it and created a really solid concept version of the asset. We commissioned a movement study very early in prep, and Chris Lentz and Chas Jarrett at DNEG created a sequence that people at the Studio and on the Production got really excited about. Once we had a clear direction in terms of how it was going to move and behave, we moved the asset back into build and created a movie-quality asset based on the concept version that was used for the movement study. The movement study was actually all in black and white, so we also developed the lookdev of the creature as we refined the asset, but Kelly had a very clear idea of how she wanted it to look.
    The Xenophage has a very distinct and menacing presence. How did you integrate practical effects with CGI to make it feel as realistic and frightening as possible?
    John Moffatt // As I mentioned at the outset, most good ideas are simple. So we looked at all of the moments in the script when she was going to be in the scene and worked with SFX to create interactive effects that would enhance the creature's on-screen presence. Often we will shoot with and without specific practical effects in the event that things change in post, and plates that were shot with a specific purpose in prep and shoot end up being used differently in post.
    Aharon Bourland // Whenever we could, we used practical SFX for the Xenophage's interactions with the environment. This included real explosions, ratchet pulls on set pieces, and even flipping Can-Ams. Since we filmed so many practical effects, we had tons of great reference to use when we needed to recreate or enhance them with CGI. Another advantage of having these practical elements and stunt work was that it allowed us to choreograph the shots like we would for an action sequence. This really gave the action a grounded, realistic feel and helped make the Xenophages even more menacing.
    Can you walk us through the process of creating the battle sequences between Venom, the new symbiotes and the Xenophages? What were the biggest technical hurdles?
    Aharon Bourland // Creating the battle sequences was a long, evolving process. It all started with previs, where we worked out the general outline of the action with Kelly and Brian. Once we had a rough idea, we moved into stunt rehearsals. The major story points stayed the same, but the specifics of how the action flowed were worked out on the actual location with stunt performers. This was really important because the physical realities of the location and what real performers can do don't always match up with previs. After that, we shot stunt viz for all the key action moments. Then we moved into principal photography, which closely followed the stunt viz, though we made some minor adjustments based on how things were playing out on the day. We shot clean plates and alternate angles to cover any changes that might come up in post-production. There was a good amount of restructuring done during post, but we managed to pull it off by using a mix of plates and full CG shots. The biggest technical challenge was choreographing all the multi-character fight beats. These had to be hand-animated to make sure they didn't just feel like actors in rubber suits.
    Simone Coco and his team did an amazing job bringing life and character to those fight moments.
    Can you elaborate on the collaboration between the VFX team and the stunt coordinators for the various battle scenes?
    John Moffatt // As with practical SFX, we worked closely with the Stunt team. Stunts often produce Stunt Viz during prep, which serves as a great guide for action beats in the movie. Jim Churchman and Jake Tomuri were a joy to work with, and we had a good relationship. For me, and I think most folk who do this job, it's about choosing the best approach for each shot, or sometimes elements within a shot. So, for example, if the Stunt team can do a practical wire gag and the effect can be in camera, we'll do that. We will then be on hand to remove the rigs or padding, but ultimately it results in a better on-screen reality. It's all a conversation geared towards creating the best finished result.
    How did you approach the scenes where Venom and the symbiotes morph and transform in real-time during action sequences?
    Aharon Bourland // Our approach to the transformations was to keep them feeling natural and organic within the action, rather than drawing too much attention to them. We didn't want the transformations to feel like a separate moment - they should just flow with the action. On the technical side, we created a toolkit of ingredients that we could remix to achieve different styles of transformations. This involved using layers of procedurally generated geometry in Houdini, which gave us the flexibility to create the various morphs as they happened in real-time.
    In terms of lighting and color, how did you differentiate Venom's darker, grittier tone from the other symbiotes and the Xenophage, especially in battle scenes?
    Aharon Bourland // Our overall lighting philosophy was to keep things as photographic and grounded as possible. We didn't want to over-light the characters just for clarity - sometimes, we let them fall into shadow or silhouette if that's what the plates called for. This helped give the fantastical characters a more real, tangible feel. It was important to avoid that cartoonish look, especially with all the brightly colored comic book characters running around. By keeping the lighting more natural, we made sure Venom's darker, grittier tone stood out, while also differentiating him from the other symbiotes and the Xenophage, especially during the intense battle scenes.
    What role did previs play in developing the visual effects for the large-scale symbiote battles?
    John Moffatt // We previs'd a lot of this movie. But, as mentioned above, things evolve during the post-production stage of movies. During the actor and writers' strike we worked on previs for the battle sequence, which we shot as soon as production resumed. Kelly had already written it, so it wasn't affected by the strike, but we wanted to hit the resumption of filming with a solid plan. So it played a big part.
    Aharon Bourland // Previs played a huge role in developing the VFX-heavy sequences. It helped us map out the complex action early on. But honestly, post-vis was just as essential, maybe even more so. We used post-vis extensively to work through notes that came up in editorial and to refine the scenes. This really helped give vendors a clear sense of direction once they began tackling the shots in detail.
    With the introduction of Knull, how did the team conceptualize and design him and his prison?
    Aharon Bourland // We went straight to the comic source material to bring Knull to life.
    Our goal was to make him feel like he stepped right out of the comics, with just a few tweaks to his eyes and mouth to help with expression and clarity of speech. We wanted fans to feel like this was the Knull they knew, just in cinematic form. His prison, though, gave us more room for interpretation. We drew a lot from The King in Black series and then evolved those visuals using techniques we'd developed for Venom's goo in the previous films. This approach helped integrate Knull's world into Venom's cinematic language and really cemented their connection.
    Were there any unexpected technical or creative challenges encountered during the production?
    John Moffatt // Not really. I think that we have all learned that story evolves as production does, and visual effects' role is to facilitate that. I'd be more surprised these days if things were not challenging or unexpected.
    Aharon Bourland // There were definitely some unexpected challenges - what would filmmaking be without them? Most were creative changes, but we had a big technical surprise with the river tank. Since it was heated, it produced a massive, unexpected amount of steam, which we then had to clean up in post. Another interesting challenge was designing the green symbiote that bonds with Mulligan. Kelly had a unique vision for him - she wanted a snake-like, semi-transparent water god look. Dave Lee and his team at DNEG did a fantastic job bringing that vision to life. Challenges like these are just part of the process, and honestly, they're what make it all so interesting.
    Looking back on the project, what aspects of the visual effects are you most proud of?
    John Moffatt // I like the Wraith Venom performance in the desert and during the third act, when Eddie and Venom decide to make the ultimate sacrifice.
    Aharon Bourland // When I look back on the project, there are two things I'm especially proud of. First, the work that ILM did on the river tank is just amazing. They extended the tank seamlessly, both above and below water, in a way that really helped expand the world of the movie. The second thing is the Venomphage. Animating six characters at once, while making sure they performed a heartfelt goodbye, was a huge challenge. On top of that, we had to layer in fluid effects to simulate the burning acid that Venom uses to sacrifice himself - it was complex but really rewarding.
    How long have you worked on this show?
    John Moffatt // 22 months.
    Aharon Bourland // I was on the show for about a year.
    What's the VFX shot count?
    John Moffatt // 1,285.
    What is your next project?
    John Moffatt // Walking my dog.
    Aharon Bourland // Right now, I'm working on a few things, but I'd love to take on a Swamp Thing movie at some point. Though I'm probably going to take a bit of a break and recharge.
    A big thanks for your time.
    WANT TO KNOW MORE?
    Digital Domain: Dedicated page about Venom: The Last Dance on the Digital Domain website.
    DNEG: Dedicated page about Venom: The Last Dance on the DNEG website.
    ILM: Dedicated page about Venom: The Last Dance on the ILM website.
    Rodeo FX: Dedicated page about Venom: The Last Dance on the Rodeo FX website.
    © Vincent Frei - The Art of VFX - 2024
  • Generative AI in media and entertainment
    www.fxguide.com
    Simulon
    In this new Field Guide to Generative AI, fxguide's Mike Seymour, working with NVIDIA, unpacks the impact of generative AI on the media and entertainment industries, offering practical applications, ethical considerations, and a roadmap for the future. The field guide draws on interviews with industry experts, plus expertise from visual effects researchers at Wētā FX and Pixar. This comprehensive guide is a valuable resource for creatives, technologists, and producers looking to harness the transformative power of AI in a respectful and appropriate fashion.
    Generative AI in Media and Entertainment, a New Creative Era: Field Guide
    Click here to download the field guide (free).
    Generative AI has become one of the most transformative technologies in media and entertainment, offering tools that don't merely enhance workflows but fundamentally change how creative professionals approach their craft. This class of AI, capable of creating entirely new content, from images and videos to scripts and 3D assets, represents a paradigm shift in storytelling and production. As the field guide notes, this revolution stems from the nexus of new machine learning approaches, foundational models, and advanced NVIDIA accelerated computing, all combined with impressive advances in neural networks and data science.
    NVIDIA
    From enhancement AI to creation GenAI
    While traditional AI, such as Pixar's machine learning denoiser in RenderMan, has been used to optimize production pipelines, generative AI takes a step further by creating original outputs. Dylan Sisson of Pixar notes that their denoiser "has transformed our entire production pipeline"; first used on Toy Story 4, it now touches "every pixel you see in our films."
    However, generative AI's ability to infer new results from vast data sets opens doors to new innovations, building and expanding people's empathy and skills. Naturally, it has also raised concerns about artists' rights, the provenance of training data, and possible job losses as production pipelines incorporate this new technology. The challenge is to incorporate these new technologies ethically, and the field guide aims to showcase companies that have been doing just that.
    Runway
    Breakthrough applications
    Generative models, including GANs (Generative Adversarial Networks), diffusion-based approaches, and transformers, underpin these advancements in generative AI. These technologies are not well understood by many producers and clients, yet companies that don't explore how to use them could well be at an enormous disadvantage.
    Generative AI tools like Runway Gen-3 are redefining how cinematic videos are created, offering functionalities such as text-to-video and image-to-video generation with advanced camera controls. "From the beginning, we built Gen-3 with the idea of embedding knowledge of those words in the way the model was trained," explains Cristóbal Valenzuela, CEO of Runway. This allows directors and artists to guide outputs with industry-specific terms like "50mm lens" or "tracking shot."
    Similarly, Adobe Firefly integrates generative AI across its ecosystem, allowing users to tell Photoshop what they want and having it comply through generative fill capabilities. Firefly's ethical training practices ensure that it only uses datasets that are licensed or within legal frameworks, guaranteeing safety for commercial use.
    New companies like Simulon are also leveraging generative AI to streamline 3D integration and visual effects workflows.
    According to Simulon co-founder Divesh Naidoo, "We're solving a fragmented, multi-skill/multi-tool workflow that is currently very painful, with a steep learning curve, and streamlining it into one cohesive experience." By reducing hours of work to minutes, Simulon allows for rapid integration of CGI into handheld mobile footage, enhancing creative agility for smaller teams.
    Bria
    Ethical frameworks and creative control
    The rapid adoption of generative AI has raised critical concerns around ethics, intellectual property, and creative control, and the industry has made strides in addressing these issues. Adobe Firefly and Getty Images stand out for their transparent practices. "Rather than ask if one has the rights to use a GenAI image, the better question is, can I use these images commercially, and what level of legal protection are you offering me if I do?" asks Getty's Grant Farhall. Getty provides full legal indemnification for its customers, ensuring ethical use of its proprietary training sets.
    Synthesia, which creates AI-driven video presenters, has similarly embedded an ethical AI framework into its operations, adhering to ISO Standard 42001. Co-founder Alexandru Voica emphasizes, "We use generative AI to create these avatars; the diffusion model adjusts the avatar's performance, the facial movements, the lip sync, and eyebrows - everything to do with the face muscles." This balance of automation and user control ensures that artists remain at the center of the creative process.
    Wonder Studio
    Training data and provenance
    The quality and source of training data remain pivotal. As noted in the field guide, "It can sometimes be wrongly assumed that in every instance, more data is good - any data, just more of it. Actually, there is a real skill in curating training data." Companies like NVIDIA and Adobe use carefully curated datasets to mitigate bias and ensure accurate results. For instance, NVIDIA's Omniverse Replicator generates synthetic data to simulate real-world environments, offering physically accurate 3D objects with accurate physical properties for training AI systems, all trained appropriately.
    This attention to data provenance extends to protecting artists' rights. Getty Images compensates contributors whose work is included in training sets, ensuring ethical collaboration between creators and AI developers.
    Bria
    Expanding possibilities
    Generative AI is not a one-button-press solution but a dynamic toolset that empowers artists to innovate while retaining creative control. As highlighted in the guide, "Empathy cannot be replaced; knowing and understanding the zeitgeist or navigating the subtle cultural and social dynamics of our times cannot be gathered from just training data. These things come from people."
    However, when used responsibly, generative AI accelerates production timelines, democratizes access to high-quality tools, and inspires new artistic directions. Tools like Wonder Studio automate animation workflows while preserving user control, and platforms like Shutterstock's 3D asset generators provide adaptive, ethically trained models for creative professionals.
    Adobe Firefly
    The future of generative AI
    The industry is just beginning to explore the full potential of generative AI. Companies like NVIDIA are leading the charge with solutions like the Avatar Cloud Engine (ACE), which integrates tools for real-time digital human generation.
    "At the heart of ACE is a set of orchestrated NIM microservices that work together," explains Simon Yuen, NVIDIA's Senior Director of Digital Human Technology. These tools enable the creation of lifelike avatars and interactive characters that can transform entertainment, education, and beyond.
    As generative AI continues to evolve, it offers immense promise for creators while raising essential questions about ethics and rights. With careful integration and a commitment to transparency, the technology has the potential to redefine the boundaries of creativity in media and entertainment.
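    To make the "diffusion-based approaches" named above a little more concrete, here is a toy, self-contained sketch of the DDPM-style sampling loop that underlies many image generators. The denoiser is a stand-in function rather than a trained network, and nothing here comes from the field guide or any vendor's code; it only illustrates the start-from-noise, iteratively-denoise idea.

```python
# Toy DDPM-style sampler: start from pure noise and repeatedly apply the
# reverse-diffusion update. A real system replaces toy_denoiser with a
# trained neural network that predicts the noise present in x at step t.
import numpy as np

def toy_denoiser(x, t):
    # Hypothetical stand-in for a learned noise predictor.
    return 0.1 * x

def sample(steps=50, shape=(8, 8), seed=0):
    rng = np.random.default_rng(seed)
    betas = np.linspace(1e-4, 0.02, steps)   # noise schedule
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    x = rng.standard_normal(shape)           # begin with pure noise
    for t in reversed(range(steps)):
        eps = toy_denoiser(x, t)             # predicted noise at step t
        # DDPM reverse-step mean
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:                            # re-inject noise except at the final step
            x += np.sqrt(betas[t]) * rng.standard_normal(shape)
    return x

print(sample().shape)  # (8, 8) array standing in for a generated image
```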
  • VFX Elements: Ground Destruction 3 Bursts
    www.thepixellab.net
    Want More? VFX Elements Bundle
    Get Destruction Debris, Ground Destruction 1-3, Portals, Blood Hits, Smoke Plumes 1-4, VDB Fire 1-3, Fireworks, Aerial Shockwaves, Ground Shockwaves, Atmospherics 1-2, Smoke Trails 1-3, Wispy Smoke, Smoke Bursts, Sparks and Meteors together and save over 20%! These are thousands of professional VFX elements for your next project!
    Learn More
    Want More? VDB Bundle
    If you want to save 25% off ALL the VDBs, check out our VDB Bundle. It includes everything: VDB Clouds Packs 1-11, Smoke Packs 1-4, Explosions Packs 1-8, and VDB Fire Packs 1-2, with a savings of over 25%.
    Learn More
  • Eplus3D and Möve Partner on 3D Printed Titanium Frame for E-Bikes
    3dprintingindustry.com
    Eplus3D, a metal additive manufacturing company, has partnered with German bicycle manufacturer Möve to develop the Möve Avian, an e-bike with a titanium frame built using 3D printed lugs in a monocoque structure. Using the large-format EP-M650 printer, the project achieved full battery integration within the frame, addressing complex design and production challenges through additive manufacturing (AM) and setting a new precedent for the bicycle industry.
    Möve, founded in Thuringia, Germany, in 1897, faced substantial obstacles in developing a titanium frame with high performance and cost efficiency. Conventional methods, including hydroforming, required specialized tooling and could not meet Möve's low-tolerance specifications. Eplus3D's EP-M650 printer, which employs Metal Powder Bed Fusion (MPBF) technology, was integral to developing custom titanium lugs and connectors without extensive tooling. The machine's large-format capabilities allowed Möve to eliminate costly tooling and reduce the project's timeline by an estimated six months.
    The Möve Avian e-bike features a 3D-printed titanium frame developed through the partnership with Eplus3D. Photo via Eplus3D.
    Enis Jost, Deputy General Manager at Eplus3D, explained the significance of this approach: "The cost structure is mainly determined by the uptime of the system, the maturity of the processing parameters used, and the printing speed associated with these. Since only a few grams of material are used to create the lugs, and with the increased system productivity Eplus3D's machines provide, the raw material cost is of less effect in this case."
    One of the primary challenges in using titanium for 3D printing is the need for support structures, which add material costs and complicate post-processing. Eplus3D minimized the support material for each part by fine-tuning print parameters, reducing post-production time and material waste. Möve then assembled the titanium lugs using an adhesive process, avoiding welding and maintaining structural integrity while keeping the Möve Avian's frame weight at 11.8 kg.
    The modular structure of the Möve Avian's titanium frame. Photo via Eplus3D.
    Surface treatment posed additional technical challenges. Möve applied abrasive blasting to achieve a uniform matte finish without altering the material's strength, a choice that aligns with Möve's functional and aesthetic design goals. According to the original project goals, Metal Powder Bed Fusion (MPBF) technology combined with high-strength Ti6Al4V titanium alloy was identified as the only way to meet the design, performance, and cost requirements.
    As Jost observed, "The potential of AM in the bicycle industry can be fully explored when traditional manufacturers start to design for process and user customization, similar to other emotionally connected devices such as cars and motorcycles." By replacing traditional tooling with additive manufacturing, Möve has achieved both a durable and environmentally sustainable product.
    A detailed view of the Möve Avian e-bike's components. Photo via Eplus3D.
    Innovations in 3D Printing for Bicycle Manufacturing
    The Eplus3D-Möve project is part of an industry shift toward additive manufacturing in bicycle production. INTENSE Cycles, collaborating with TRUMPF and Elementum 3D, recently redesigned the M1 downhill bike's backbone using a 3D-printed aluminum alloy with internal ribbing to improve the suspension.
    This configuration, created from A6061-RAM2 aluminum alloy, allows for structural features that machining cannot achieve.
    Lehvoss Group has also partnered with Isoco Bikes to develop the Isoco X1 e-bike, which uses a 100% recyclable thermoplastic frame. By integrating injection-molded thermoplastics, Isoco reduced the frame's carbon footprint by 68% compared to aluminum while maintaining durability. The material can be recycled and reused in new high-quality components, supporting sustainability.
    3D printed brake lever. Photo via TRUMPF.
    Featured image shows a detailed view of the Möve Avian e-bike's components. Photo via Eplus3D.
  • What should a professional VFX gym have?
    realtimevfx.com
    I've been working as an indie VFX artist for three years now and learned everything myself online. I've unfortunately never had the chance to work with another VFX artist yet, and I know I have some gaps in my knowledge, despite doing my best to fill them in.
    Recently at work, we've been thinking of ways to help me set up my gym for a better workflow. As it is, to playtest I need to either set up looping VFX that I drop into my gym and iterate on, manually changing their status using Events and booleans, or I drop them into game scenes to do the same thing with the proper visuals surrounding the VFX.
    I can also swap out a given enemy's VFX with the one I'm working on and iterate on its behaviour as the enemy uses it. For projectiles, one of our programmers has given me a basic script I can put on my VFX object that launches it and will call the proper events upon touching a collision surface. (There's a rough, engine-agnostic sketch of what that harness does at the end of this post.)
    Mostly, I work on things fairly piecemeal and then adjust them all once the code comes in and lets them function. These days, there are rarely any problems, since I've streamlined a lot of things.
    However, we all agree that this is a clunky way of working, and we're willing to put the energy into making a proper gym that would allow me to iterate far more fluidly and present the proper behaviour of all my VFX as needed, without needing to wrestle with the game to get things to work.
    As an aside, I don't have any issues with the environmental VFX I create. The gym works just fine for them, since they don't have particularly complex behaviours and can be showcased easily. What I need help with is VFX that are interactable, e.g. pickups, projectiles, beams, traps, etc.
    I've tried looking things up online, but I can't find anyone actually talking about their gym setups. Searching just brings up actual physical exercises or things relating to movie/TV VFX artists, which I am not. I want to know what people have access to! Does everyone need to interface awkwardly with their games, or are there entire gyms set up to be miniature microcosms of gameplay where iteration can be rapid? What do you have access to, if you do have some of these features? And is there anything else you consider should be part of a gym, even really basic knowledge? What can be expected of AAA gym setups versus smaller studios, or even setups individual artists have created on their own, for their own purposes?
    I would love answers to my questions, but I know wider answers will absolutely benefit others if they can find this page.
    So, what kind of features do you think a good VFX gym should have?
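    For concreteness, here's roughly the shape of that projectile harness, as an engine-agnostic Python sketch. Every name in it is a made-up placeholder (our actual script is engine-side code, not this), but the flow is the same: launch the VFX object, move it each tick, and fire its lifecycle events on impact.

```python
# Engine-agnostic sketch of a projectile test harness for a VFX gym.
# All class/method names are placeholders, not any engine's real API.

class StubVFX:
    """Stands in for the VFX object under test."""
    def fire_event(self, name):
        print(f"VFX event: {name}")   # the real thing would toggle emitters

class StubScene:
    """Stands in for the gym scene; pretends a wall sits at x >= 10."""
    def raycast(self, position):
        return position[0] >= 10.0

class ProjectileHarness:
    def __init__(self, vfx, speed=20.0):
        self.vfx, self.speed, self.alive = vfx, speed, False

    def launch(self, origin, direction):
        self.position, self.direction, self.alive = list(origin), direction, True
        self.vfx.fire_event("OnLaunch")          # start muzzle/trail effects

    def tick(self, dt, scene):
        if not self.alive:
            return
        self.position = [p + d * self.speed * dt
                         for p, d in zip(self.position, self.direction)]
        if scene.raycast(self.position):
            self.vfx.fire_event("OnImpact")      # swap trail for impact burst
            self.alive = False

# Drive it like a game loop: launch once, tick at 60 fps until impact.
harness = ProjectileHarness(StubVFX())
harness.launch(origin=(0.0, 1.0, 0.0), direction=(1.0, 0.0, 0.0))
while harness.alive:
    harness.tick(1.0 / 60.0, StubScene())
```

    A dedicated gym could wrap a handful of harnesses like this (projectile, beam, pickup trigger, trap) behind hotkeys or spawn buttons, so each interactable type can be respawned and re-fired instantly without loading a full game scene.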