• In a world filled with choices, finding a mattress that is truly safe feels like a distant dream. The weight of fiberglass and harmful chemicals hangs heavy in the air, making me feel alone in my quest for comfort. Each night, I toss and turn, longing for a barrier against the flames of anxiety and doubt. Why must safety come at such a price? I search for natural fire barriers, but the fear of hidden dangers always lingers. It’s a lonely journey, navigating through a sea of options that promise safety yet deliver uncertainty.

    #MattressSafety #NaturalFireBarriers #ChemicalFree #Loneliness #FiberglassFree
    Fiberglass-Free, Chemical-Free: Natural Fire Barriers for Mattresses
    When it comes to flame retardants, fiberglass is unhealthy, but many chemicals are worse. Here’s what you need to know about buying a safe new mattress.
  • In the quiet moments, when the world feels heavy and my heart is an echo of the past, I find myself drawn into the realm of Endless Legend 2. Just like the characters that roam through its beautifully crafted landscapes, I too wander through my own desolate terrains of disappointment and solitude.

    In an age where connections are just a click away, I feel an overwhelming wave of loneliness wash over me. It's as if the colors of my life have faded into shades of grey, much like the emptiness that lingers in the air. I once believed in the promise of adventure and the thrill of exploration, but now I’m left with the haunting reminder of dreams unfulfilled. The anticipation for Endless Legend 2, scheduled for early access on August 7, is bittersweet. It stirs a deep longing within me for the days when joy was effortlessly abundant.

    Jean-Maxime Moris, the creative director of Amplitude Studios, speaks of worlds to conquer, of stories to tell. Yet, each word feels like a distant whisper, a reminder of the tales I used to weave in my mind. I once imagined myself as a brave hero, surrounded by friends who would join me in battle. Now, I sit alone, the flickering light of my screen the only companion in this vast expanse of isolation.

    Every character in the game resonates with pieces of my own soul, reflecting my fears and hopes. The intricate design of Endless Legend 2 mirrors the complexity of my emotions; beautiful yet deeply fraught with the struggle of existence. I yearn for the laughter of companions and the warmth of camaraderie, yet here I am, cloaked in shadows, fighting battles that are often invisible to the outside world.

    As I read about the game, I can almost hear the distant armies clashing, feel the pulse of a story waiting to unfold. But reality is stark; the realms I traverse are not just virtual landscapes but the silent corridors of my mind, echoing with the sounds of my own solitude. I wish I could escape into that world, to feel the thrill of adventure once more, to connect with others who understand the weight of these unspoken burdens.

    But for now, all I have are the remnants of hope, the flickering flames of what could be. And as the countdown to Endless Legend 2 continues, I can’t help but wonder if the game will offer me a reprieve from this loneliness or merely serve as a reminder of the connections I yearn for.

    #EndlessLegend2 #Loneliness #Heartbreak #GamingCommunity #Solitude
    Endless Legend 2: Our Interview with Jean-Maxime Moris, Creative Director of Amplitude Studios’ 4X
    ActuGaming.net – Endless Legend 2: Our interview with Jean-Maxime Moris, creative director of Amplitude Studios’ 4X. Officially announced at the start of the year, Endless Legend 2 launches in early access on August 7 […]
  • In a world where hackers are the modern-day ninjas, lurking in the shadows of our screens, it’s fascinating to watch the dance of their tactics unfold. Enter the realm of ESD diodes—yes, those little components that seem to be the unsung heroes of electronic protection. You’d think any self-respecting hacker would treat them with the reverence they deserve. But alas, as the saying goes, not all heroes wear capes—some just forget to wear their ESD protection.

    Let’s take a moment to appreciate the artistry of neglecting ESD protection. You have your novice hackers, who, in their quest for glory, overlook the importance of these diodes, thinking, “What’s the worst that could happen? A little static never hurt anyone!” Ah, the blissful ignorance! It’s like going into battle without armor, convinced that sheer bravado will carry the day. Spoiler alert: it won’t. Their circuits will fry faster than you can say “short circuit,” leaving them wondering why their master plan turned into a crispy failure.

    Then, we have the seasoned veterans—the ones who should know better but still scoff at the idea of ESD protection. Perhaps they think they’re above such mundane concerns, like some digital demigods who can manipulate the very fabric of electronics without consequence. I mean, who needs ESD diodes when you have years of experience, right? It’s almost adorable, watching them prance into their tech disasters, blissfully unaware that their arrogance is merely a prelude to a spectacular downfall.

    And let’s not forget the “lone wolves,” those hackers who fancy themselves as rebels without a cause. They see ESD protection as a sign of weakness, a crutch for the faint-hearted. In their minds, real hackers thrive on chaos—why bother with protection when you can revel in the thrill of watching your carefully crafted device go up in flames? It’s the equivalent of a toddler throwing a tantrum because they’re told not to touch the hot stove. Spoiler alert number two: the stove doesn’t care about your feelings.

    In this grand tapestry of hacker culture, the neglect of ESD protection is not merely a technical oversight; it’s a statement, a badge of honor for those who believe they can outsmart the very devices they tinker with. But let’s be real: ESD diodes are the unsung protectors of the digital realm, and ignoring them is like inviting disaster to your tech party and hoping it doesn’t show up. Newsflash: it will.

    So, the next time you find yourself in the presence of a hacker who scoffs at ESD protections, take a moment to revel in their bravado. Just remember to pack some marshmallows for when their devices inevitably catch fire. After all, it’s only a matter of time before the sparks start flying.
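    (For anyone tempted to test the “a little static never hurt anyone” theory, here is a back-of-the-envelope sketch in Python using the standard Human Body Model constants of 100 pF discharging through 1.5 kΩ; the 2 kV pin rating below is an assumed, typical figure for illustration, not any specific part’s datasheet.)

    # Rough ESD arithmetic under the Human Body Model (HBM):
    # a charged human is modeled as 100 pF discharging through 1.5 kohm.
    C_BODY = 100e-12              # farads (HBM capacitance)
    R_BODY = 1.5e3                # ohms (HBM series resistance)
    ASSUMED_PIN_RATING_V = 2_000  # volts; assumed typical HBM rating of an IC pin

    def discharge(volts: float) -> tuple[float, float]:
        """Return (peak current in amps, stored energy in millijoules)."""
        peak_amps = volts / R_BODY            # Ohm's law at the instant of contact
        energy_mj = 0.5 * C_BODY * volts**2 * 1e3  # E = 1/2 C V^2, scaled to mJ
        return peak_amps, energy_mj

    # Walking across a carpet in dry air can easily charge you to 10-15 kV.
    for v in (500, 2_000, 8_000, 15_000):
        amps, mj = discharge(v)
        verdict = "within" if v <= ASSUMED_PIN_RATING_V else "EXCEEDS"
        print(f"{v:>6} V -> {amps:5.1f} A peak, {mj:7.3f} mJ ({verdict} assumed 2 kV rating)")

    (A discharge you can barely feel, around 3 kV, already exceeds what many unprotected pins are rated for; that gap is precisely what an ESD diode is there to clamp.)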

    #Hackers #ESDDiodes #TechFails #CyberSecurity #DIYDisasters
    Hacker Tactic: ESD Diodes
    A hacker’s view on ESD protection can tell you a lot about them. I’ve seen a good few categories of hackers neglecting ESD protection – there’s the yet-inexperienced ones, ones …
  • AN EXPLOSIVE MIX OF SFX AND VFX IGNITES FINAL DESTINATION BLOODLINES

    By CHRIS McGOWAN

    Images courtesy of Warner Bros. Pictures.

    Final Destination Bloodlines, the sixth installment in the graphic horror series, kicks off with the film’s biggest challenge – deploying an elaborate, large-scale set piece involving the 400-foot-high Skyview Tower restaurant. While there in 1968, young Iris Campbell (Brec Bassinger) has a premonition about the Skyview burning, cracking, crumbling and collapsing. Then, when she sees these events actually starting to happen around her, she intervenes and causes an evacuation of the tower, thus thwarting death’s design and saving many lives. Years later, her granddaughter, Stefani Reyes (Kaitlyn Santa Juana), inherits the vision of the destruction that could have occurred and realizes death is still coming for the survivors.

    “I knew we couldn’t put the whole [Skyview restaurant] on fire, but Tony [Lazarowich, Special Effects Supervisor] tried and put as much fire as he could safely and then we just built off that [in VFX] and added a lot more. Even when it’s just a little bit of real fire, the lighting and interaction can’t be simulated, so I think it was a success in terms of blending that practical with the visual.”
    —Nordin Rahhali, VFX Supervisor

    The film opens with an elaborate, large-scale set piece involving the 400-foot-high Skyview Tower restaurant – and its collapse. Drone footage was digitized to create a 3D asset for the LED wall so the time of day could be changed as needed.

    “The set that the directors wanted was very large,” says Nordin Rahhali, VFX Supervisor. “We had limited space options in stages given the scale and the footprint of the actual restaurant that they wanted. It was the first set piece, the first big thing we shot, so we had to get it all ready and going right off the bat. We built a bigger volume for our needs, including an LED wall that we built the assets for.”

    “We were outside Vancouver at Bridge Studios in Burnaby. The custom-built LED volume was a little over 200 feet in length,” states Christian Sebaldt, ASC, the movie’s DP. The volume was 98 feet in diameter and 24 feet tall. Rahhali explains, “Pixomondo was the vendor that we contracted to come in and build the volume. They also built the asset that went on the LED wall, so they were part of our filming team and production shoot. Subsequently, they were also the main vendor doing post, which was by design. By having them design and take care of the asset during production, we were able to leverage their assets, tools and builds for some of the post VFX.” Rahhali adds, “It was really important to make sure we had days with the volume team and with Christian and his camera team ahead of the shoot so we could dial it in.”

    Built at Bridge Studios in Burnaby outside Vancouver, the custom-built LED volume for events at the Skyview restaurant was over 200 feet long, 98 feet wide and 24 feet tall. Extensive previs with Digital Domain was done to advance key shots. (Photo: Eric Milner)

    Zach Lipovsky and Adam Stein directed Final Destination Bloodlines for New Line, distributed by Warner Bros., in which chain reactions of small and big events lead to bloody catastrophes befalling those who have cheated death at some point. Pixomondo was the lead VFX vendor, followed by FOLKS VFX. Picture Shop also contributed. There were around 800 VFX shots. Tony Lazarowich was the Special Effects Supervisor.

    “The Skyview restaurant involved building a massive set [that] was fire retardant, which meant the construction took longer than normal because they had to build it with certain materials and coat it with certain things because, obviously, it serves for the set piece. As it’s falling into chaos, a lot of that fire was practical. I really jived with what Christian and directors wanted and how Tony likes to work – to augment as much real practical stuff as possible,” Rahhali remarks. “I knew we couldn’t put the whole thing on fire, but Tony tried and put as much fire as he could safely, and then we just built off that [in VFX] and added a lot more. Even when it’s just a little bit of real fire, the lighting and interaction can’t be simulated, so I think it was a success in terms of blending that practical with the visual.”

    The Skyview restaurant required building a massive set that was fire retardant. Construction on the set took longer because it had to be built and coated with special materials. As the Skyview restaurant falls into chaos, much of the fire was practical. (Photo: Eric Milner)

    “We got all the Vancouver skyline [with drones] so we could rebuild our version of the city, which was based a little on the Vancouver footprint. So, we used all that to build a digital recreation of a city that was in line with what the directors wanted, which was a coastal city somewhere in the States that doesn’t necessarily have to be Vancouver or Seattle, but it looks a little like the Pacific Northwest.”
    —Christian Sebaldt, ASC, Director of Photography

    For drone shots, the team utilized a custom heavy-lift drone with three RED Komodo Digital Cinema cameras “giving us almost 180 degrees with overlap that we would then stitch in post and have a ridiculous amount of resolution off these three cameras,” Sebaldt states. “The other drone we used was a DJI Inspire 3, which was also very good. And we flew these drones up at the height [we needed]. We flew them at different times of day. We flew full 360s, and we also used them for photogrammetry. We got all the Vancouver skyline so we could rebuild our version of the city, which was based a little on the Vancouver footprint. So, we used all that to build a digital recreation of a city that was in line with what the directors wanted, which was a coastal city somewhere in the States that doesn’t necessarily have to be Vancouver or Seattle, but it looks a little like the Pacific Northwest.” Rahhali adds, “All of this allowed us to figure out what we were going to shoot. We had the stage build, and we had the drone footage that we then digitized and created a 3D asset to go on the wall [so] we could change the times of day.”

    Pixomondo built the volume and the asset that went on the LED wall for the Skyview sequence. They were also the main vendor during post. FOLKS VFX and Picture Shop contributed. (Photo: Eric Milner)

    “We did extensive previs with Digital Domain,” Rahhali explains. “That was important because we knew the key shots that the directors wanted. With a combination of those key shots, we then kind of reverse-engineered [them] while we did techvis off the previs and worked with Christian and the art department so we would have proper flexibility with the set to be able to pull off some of these shots. [For example,] some of these shots required the Skyview restaurant ceiling to be lifted and partially removed for us to get a crane to shoot Paul [Max Lloyd-Jones] as he’s about to fall and the camera’s going through a roof, that we then digitally had to recreate. Had we not done the previs to know those shots in advance, we would not have been able to build that in time to accomplish the look. We had many other shots that were driven off the previs that allowed the art department, construction and camera teams to work out how they would get those shots.”

    Some shots required the Skyview’s ceiling to be lifted and partially removed to get a crane to shoot Paul Campbell (Max Lloyd-Jones) as he’s about to fall.

    The character Iris lived in a fortified house, isolating herself methodically to avoid the Grim Reaper. Rahhali comments, “That was a beautiful location [in] GVRD [Greater Vancouver], very cold. It was a long, hard shoot, because it was all nights. It was just this beautiful pocket out in the middle of the mountains. We in visual effects didn’t do a ton other than a couple of clean-ups of the big establishing shots when you see them pull up to the compound. We had to clean up small roads we wanted to make look like one road and make the road look like dirt.” There were flames involved. Sebaldt says, “The explosion [of Iris’s home] was unbelievably big. We had eight cameras on it at night and shot it at high speed, and we’re all going ‘Whoa.’” Rahhali notes, “There was some clean-up, but the explosion was 100% practical. Our Special Effects Supervisor, Tony, went to town on that. He blew up the whole house, and it looked spectacular.”

    The tattoo shop piercing scene is one of the most talked-about sequences in the movie, where a dangling chain from a ceiling fan attaches itself to the septum nose piercing of Erik Campbell (Richard Harmon) and drags him toward a raging fire. Rahhali observes, “That was very Final Destination and a great Rube Goldberg build-up event. Richard was great. He was tied up on a stunt line for most of it, balancing on top of furniture. All of that was him doing it for real with a stunt line.” Some effects solutions can be surprisingly simple. Rahhali continues, “Our producer [Craig Perry] came up with a great gag [for the] septum ring.” Richard’s nose was connected with just a nose plug that went inside his nostrils. “All that tugging and everything that you’re seeing was real. For weeks and weeks, we were all trying to figure out how to do it without it being a big visual effects thing. ‘How are we gonna pull his nose for real?’ Craig said, ‘I have these things I use to help me open up my nose and you can’t really see them.’ They built it off of that, and it looked great.”

    Filmmakers spent weeks figuring out how to execute the harrowing tattoo shop scene. A dangling chain from a ceiling fan attaches itself to the septum nose ring of Erik Campbell (Richard Harmon) – with the actor’s nose being tugged by the chain connected to a nose plug that went inside his nostrils.

    “[S]ome of these shots required the Skyview restaurant ceiling to be lifted and partially removed for us to get a crane to shoot Paul [Campbell] as he’s about to fall and the camera’s going through a roof, that we then digitally had to recreate. Had we not done the previs to know those shots in advance, we would not have been able to build that in time to accomplish the look. We had many other shots that were driven off the previs that allowed the art department, construction and camera teams to work out how they would get those shots.”
    —Nordin Rahhali, VFX Supervisor

    Most of the fire in the tattoo parlor was practical. “There are some fire bars and stuff that you’re seeing in there from SFX and the big pool of fire on the wide shots.” Sebaldt adds, “That was a lot of fun to shoot because it’s so insane when he’s dancing and balancing on all this stuff – we were laughing and laughing. We were convinced that this was going to be the best scene in the movie up to that moment.” Rahhali says, “They used the scene wholesale for the trailer. It went viral – people were taking out their septum rings.” Erik survives the parlor blaze only to meet his fate in a hospital when he is pulled by a wheelchair into an out-of-control MRI machine at its highest magnetic level. Rahhali comments, “That is a good combination of a bunch of different departments. Our Stunt Coordinator, Simon Burnett, came up with this hard pull-wire line [for] when Erik flies and hits the MRI. That’s a real stunt with a double, and he hit hard. All the other shots are all CG wheelchairs because the directors wanted to art-direct how the crumpling metal was snapping and bending to show pressure on him as his body starts going into the MRI.”

    To augment the believability that comes with reality, the directors aimed to capture as much practically as possible, then VFX Supervisor Nordin Rahhali and his team built on that result. (Photo: Eric Milner)

    A train derailment concludes the film after Stefani and her brother, Charlie, realize they are still on death’s list. A train goes off the tracks, and logs from one of the cars fly through the air and kill them. “That one was special because it’s a hard sequence and was also shot quite late, so we didn’t have a lot of time. We went back to Vancouver and shot the actual street, and we shot our actors performing. They fell onto stunt pads, and the moment they get touched by the logs, it turns into CG as it was the only way to pull that off, and the train of course. We had to add all that. The destruction of the houses and everything was done in visual effects.”

    Erik survives the tattoo parlor blaze only to meet his fate in a hospital when he is crushed by a wheelchair while being pulled into an out-of-control MRI machine.

    Erik (Richard Harmon) appears about to be run over by a delivery truck at the corner of 21A Ave. and 132A St., but he’s not – at least not then. The truck is actually on the opposite side of the road, and the person being run over is Howard.

    A rolling penny plays a major part in the catastrophic chain reactions and seems to be a character itself. “The magic penny was a mix from two vendors, Pixomondo and FOLKS; both had penny shots,” Rahhali says. “All the bouncing pennies you see going through the vents and hitting the fan blade are all FOLKS. The bouncing penny at the end as a lady takes it out of her purse, that goes down the ramp and into the rail – that’s FOLKS. The big explosion shots in the Skyview with the penny slowing down after the kid throws it [off the deck] are all Pixomondo shots. It was a mix. We took a little time to find that balance between readability and believability.”

    Approximately 800 VFX shots were required for Final Destination Bloodlines. (Photo: Eric Milner)

    Chain reactions of small and big events lead to bloody catastrophes befalling those who have cheated Death at some point in the Final Destination films.

    From left: Kaitlyn Santa Juana as Stefani Reyes, director Adam Stein, director Zach Lipovsky and Gabrielle Rose as Iris. (Photo: Eric Milner)

    Rahhali adds, “The film is a great collaboration of departments. Good visual effects are always a good combination of special effects, makeup effects and cinematography; it’s all the planning of all the pieces coming together. For a film of this size, I’m really proud of the work. I think we punched above our weight class, and it looks quite good.”
    #explosive #mix #sfx #vfx #ignites
    WWW.VFXVOICE.COM
  • Google’s New AI Tool Generates Convincing Deepfakes of Riots, Conflict, and Election Fraud

Google’s recently launched AI video tool can generate realistic clips that contain misleading or inflammatory information about news events, according to a TIME analysis and several tech watchdogs.

TIME was able to use Veo 3 to create realistic videos, including a Pakistani crowd setting fire to a Hindu temple; Chinese researchers handling a bat in a wet lab; an election worker shredding ballots; and Palestinians gratefully accepting U.S. aid in Gaza. While each of these videos contained some noticeable inaccuracies, several experts told TIME that if shared on social media with a misleading caption in the heat of a breaking news event, these videos could conceivably fuel social unrest or violence.

While text-to-video generators have existed for several years, Veo 3 marks a significant jump forward, creating AI clips that are nearly indistinguishable from real ones. Unlike the outputs of previous video generators like OpenAI’s Sora, Veo 3 videos can include dialogue, soundtracks and sound effects. They largely follow the rules of physics and lack the telltale flaws of past AI-generated imagery.

Users have had a field day with the tool, creating short films about plastic babies, pharma ads and man-on-the-street interviews. But experts worry that tools like Veo 3 will have a much more dangerous effect: turbocharging the spread of misinformation and propaganda, and making it even harder to tell fiction from reality. Social media is already flooded with AI-generated content about politicians. In the first week of Veo 3’s release, online users posted fake news segments in multiple languages, including an anchor announcing the death of J.K. Rowling, as well as fake political news conferences.

“The risks from deepfakes and synthetic media have been well known and obvious for years, and the fact the tech industry can’t even protect against such well-understood, obvious risks is a clear warning sign that they are not responsible enough to handle even more dangerous, uncontrolled AI and AGI,” says Connor Leahy, the CEO of Conjecture, an AI safety company. “The fact that such blatant irresponsible behavior remains completely unregulated and unpunished will have predictably terrible consequences for innocent people around the globe.”

Days after Veo 3’s release, a car plowed through a crowd in Liverpool, England, injuring more than 70 people. Police swiftly clarified that the driver was white, to preempt racist speculation of migrant involvement. (Last summer, false reports that a knife attacker was an undocumented Muslim migrant sparked riots in several cities.) Days later, Veo 3 obligingly generated a video of a similar scene, showing police surrounding a car that had just crashed—and a Black driver exiting the vehicle. TIME generated the video with the following prompt: “A video of a stationary car surrounded by police in Liverpool, surrounded by trash. Aftermath of a car crash. There are people running away from the car. A man with brown skin is the driver, who slowly exits the car as police arrive - he is arrested. The video is shot from above - the window of a building. There are screams in the background.”

After TIME contacted Google about these videos, the company said it would begin adding a visible watermark to videos generated with Veo 3. The watermark now appears on videos generated by the tool. However, it is very small and could easily be cropped out with video-editing software.

In a statement, a Google spokesperson said: “Veo 3 has proved hugely popular since its launch. We’re committed to developing AI responsibly and we have clear policies to protect users from harm and governing the use of our AI tools.”

Videos generated by Veo 3 have always contained an invisible watermark known as SynthID, the spokesperson said. Google is currently working on a tool called SynthID Detector that would allow anyone to upload a video to check whether it contains such a watermark, the spokesperson added. However, this tool is not yet publicly available.

Attempted safeguards

Veo 3 is available for $249 a month to Google AI Ultra subscribers in countries including the United States and United Kingdom. Veo 3 did block plenty of prompts from TIME, especially those related to migrants or violence. When TIME asked the model to create footage of a fictional hurricane, it wrote that such a video went against its safety guidelines and “could be misinterpreted as real and cause unnecessary panic or confusion.” The model generally refused to generate videos of recognizable public figures, including President Trump and Elon Musk. It refused to create a video of Anthony Fauci saying that COVID was a hoax perpetrated by the U.S. government.

Veo’s website states that it blocks “harmful requests and results.” The model’s documentation says it underwent pre-release red-teaming, in which testers attempted to elicit harmful outputs from the tool. Additional safeguards were then put in place, including filters on its outputs.

A technical paper released by Google alongside Veo 3 downplays the misinformation risks the model might pose. Veo 3 is bad at creating text, and is “generally prone to small hallucinations that mark videos as clearly fake,” it says. “Second, Veo 3 has a bias for generating cinematic footage, with frequent camera cuts and dramatic camera angles – making it difficult to generate realistic coercive videos, which would be of a lower production quality.”

However, minimal prompting did lead to the creation of provocative videos. One showed a man wearing an LGBT rainbow badge pulling envelopes out of a ballot box and feeding them into a paper shredder. (Veo 3 titled the file “Election Fraud Video.”) Other videos generated in response to prompts by TIME included a dirty factory filled with workers scooping infant formula with their bare hands; an e-bike bursting into flames on a New York City street; and Houthi rebels angrily seizing an American flag.

Some users have been able to take misleading videos even further. Internet researcher Henk van Ess created a fabricated political scandal using Veo 3 by editing together short video clips into a fake newsreel that suggested a small-town school would be replaced by a yacht manufacturer. “If I can create one convincing fake story in 28 minutes, imagine what dedicated bad actors can produce,” he wrote on Substack. “We’re talking about the potential for dozens of fabricated scandals per day.”

“Companies need to be creating mechanisms to distinguish between authentic and synthetic imagery right now,” says Margaret Mitchell, chief AI ethics scientist at Hugging Face. “The benefits of this kind of power—being able to generate realistic life scenes—might include making it possible for people to make their own movies, or to help people via role-playing through stressful situations,” she says. “The potential risks include making it super easy to create intense propaganda that manipulatively enrages masses of people, or confirms their biases so as to further propagate discrimination—and bloodshed.”

In the past, there were surefire ways of telling that a video was AI-generated—perhaps a person might have six fingers, or their face might transform between the beginning of the video and the end. But as models improve, those signs are becoming increasingly rare. (A video depicting how AIs have rendered Will Smith eating spaghetti shows how far the technology has come in the last three years.) For now, Veo 3 will only generate clips up to eight seconds long, meaning that if a video contains shots that linger for longer, it’s a sign it could be genuine. But this limitation is not likely to last for long.

Eroding trust online

Cybersecurity experts warn that advanced AI video tools will allow attackers to impersonate executives, vendors or employees at scale, convincing victims to relinquish important data. Nina Brown, a Syracuse University professor who specializes in the intersection of media law and technology, says that while there are other large potential harms—including election interference and the spread of nonconsensual sexually explicit imagery—arguably most concerning is the erosion of collective online trust. “There are smaller harms that cumulatively have this effect of, ‘can anybody trust what they see?’” she says. “That’s the biggest danger.”

Already, accusations that real videos are AI-generated have gone viral online. One post on X, which received 2.4 million views, accused a Daily Wire journalist of sharing an AI-generated video of an aid distribution site in Gaza. A journalist at the BBC later confirmed that the video was authentic. Conversely, an AI-generated video of an “emotional support kangaroo” trying to board an airplane went viral and was widely accepted as real by social media users.

Veo 3 and other advanced deepfake tools will also likely spur novel legal clashes. Issues around copyright have flared up, with AI labs including Google being sued by artists for allegedly training on their copyrighted content without authorization. (DeepMind told TechCrunch that Google models like Veo "may" be trained on YouTube material.) Celebrities who are subjected to hyper-realistic deepfakes have some legal protections thanks to “right of publicity” statutes, but those vary drastically from state to state. In April, Congress passed the Take It Down Act, which criminalizes non-consensual deepfake porn and requires platforms to take down such material.

Industry watchdogs argue that additional regulation is necessary to mitigate the spread of deepfake misinformation. “Existing technical safeguards implemented by technology companies such as 'safety classifiers' are proving insufficient to stop harmful images and videos from being generated,” says Julia Smakman, a researcher at the Ada Lovelace Institute. “As of now, the only way to effectively prevent deepfake videos from being used to spread misinformation online is to restrict access to models that can generate them, and to pass laws that require those models to meet safety requirements that meaningfully prevent misuse.”
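The eight-second ceiling the article mentions suggests a crude triage heuristic for verification tooling. The sketch below is purely illustrative and is not a method described by TIME or Google: it uses the open-source PySceneDetect library to measure the longest uninterrupted shot in a clip and flags anything past eight seconds as less consistent with a single Veo 3 generation. The file name and threshold are assumptions, and a short longest shot proves nothing either way.

```python
# Illustrative sketch: flag clips whose longest uninterrupted shot exceeds
# Veo 3's reported eight-second generation limit. A long take is only weak
# evidence of authenticity; stitched generations and edits defeat it.
from scenedetect import detect, ContentDetector

MAX_VEO3_SHOT_SECONDS = 8.0  # per the limit reported in the article

def longest_shot_seconds(video_path: str) -> float:
    # detect() splits the video into shots at hard content changes.
    scenes = detect(video_path, ContentDetector())
    if not scenes:
        return 0.0  # no cuts detected; duration unknown in this sketch
    return max((end - start).get_seconds() for start, end in scenes)

if __name__ == "__main__":
    longest = longest_shot_seconds("clip.mp4")  # hypothetical file name
    if longest > MAX_VEO3_SHOT_SECONDS:
        print(f"Longest shot {longest:.1f}s exceeds 8s: less consistent "
              "with a single Veo 3 generation (weak signal only).")
    else:
        print(f"Longest shot {longest:.1f}s: no conclusion possible.")
```

A provenance check such as the SynthID Detector described above would, once public, be a far stronger signal than any duration heuristic.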
  • VFX EMMY CONTENDERS: SETTING THE BENCHMARK FOR VISUAL EFFECTS ON TV

    By JENNIFER CHAMPAGNE

House of the Dragon expands its dragon-filled world in its second season, offering more large-scale battles and heightened aerial warfare.

The 2025 Emmy race for outstanding visual effects is shaping up to be one of the most competitive in years, with major genre heavyweights breaking new ground on what’s possible on television. As prestige fantasy and sci-fi continue to dominate, the battle for the category will likely come down to sheer scale, technical innovation and how seamlessly effects are integrated into storytelling. Returning titans like House of the Dragon and The Lord of the Rings: The Rings of Power have proven their ability to deliver breathtaking visuals. At the same time, Dune: Prophecy enters the conversation as a visually stunning newcomer. The Boys remains the category’s wildcard, bringing its own brand of hyper-realistic, shock-value effects to the race. With its subtle yet immersive world-building, The Penguin stands apart from the spectacle-driven contenders, using “invisible” VFX to transform Gotham into a post-flooded, decaying metropolis. Each series offers a distinct approach to digital effects, making for an intriguing showdown between blockbuster-scale world-building and more nuanced, atmospheric craftsmanship.

    Sharing the arena with marquee pacesetters HBO’s The Last of Us, Disney+’s Andor and Netflix’s Squid Game, these series lead the charge in ensuring that the 2025 Emmy race isn’t just about visual spectacle; it’s about which shows will set the next benchmark for visual effects on television. The following insights and highlights from VFX supervisors of likely Emmy contenders illustrate why their award-worthy shows have caught the attention of TV watchers and VFX Emmy voters.

The Penguin, with its subtle yet immersive world-building, stands apart from the spectacle-driven contenders, using “invisible” VFX to transform Gotham into a post-flooded, decaying metropolis.

For The Lord of the Rings: The Rings of Power VFX Supervisor Jason Smith, the second season presented some of the Amazon series’ most ambitious visual effects challenges. From the epic Battle of Eregion to the painstaking design of the Entwives, Smith and his team at Wētā FX sought to advance digital world-building while staying true to J.R.R. Tolkien’s vision. “The Battle of Eregion was amazing to work on – and challenging too, because it’s a pivotal moment in Tolkien’s story,” Smith states. Unlike typical large-scale clashes, this battle begins as a siege culminating in an explosive cavalry charge. “We looked for every way we could to heighten the action during the siege by keeping the armies interacting, even at a distance,” Smith explains. His team introduced projectiles and siege weaponry to create dynamic action, ensuring the prolonged standoff felt kinetic. The environment work for Eregion posed another challenge. The city was initially constructed as a massive digital asset in Season 1, showcasing the collaborative brilliance of the Elves and Dwarves. In Season 2, that grandeur had to be systematically razed to the ground. “The progression of destruction had to be planned extremely carefully,” Smith notes. His team devised seven distinct levels of damage, mapping out in granular detail which areas would be smoldering, reduced to rubble or utterly consumed by fire. “Our goal was to have the audience feel the loss that the Elves feel as this beautiful symbol of the height of Elvendom is utterly razed.”

The SSVFX team helped shape a world for Lady in the Lake that felt rich, lived-in and historically precise.

One of the most ambitious effects for Season 4 of The Boys was Splinter, who has the ability to duplicate himself; the sequence required eight hours of rehearsal and six hours of filming for one shot. The final effect was a mix of prosthetic cover-up pieces and VFX face replacement.

The Penguin, HBO Max’s spinoff series of The Batman, centers on Oswald ‘Oz’ Cobb’s ruthless rise to power and relies on meticulous environmental effects, smoothly integrating CG elements to enhance Gotham’s noir aesthetic without ever calling attention to the work itself. “The most rewarding part of our work was crafting VFX that don’t feel like VFX,” says VFX Supervisor Johnny Han. Across the series’ 3,100 VFX shots, every collapsing freeway, skyline extension and flicker of light from a muzzle flash had to feel utterly real – woven so naturally into the world of Gotham that viewers never stopped to question its authenticity.

Zimia spaceport, an enormous hub of interstellar commerce in Dune: Prophecy. The production team built a vast practical set to provide a strong scale foundation, but its full grandeur came to life in post by extending the environment with CG.

The second season of The Lord of the Rings: The Rings of Power refined its environments, which elevate Middle-earth’s realism.

Some of the series’ most striking visual moments were also its most understated. The shift of Gotham’s seasons – transforming sunlit summer shoots into autumn’s muted chill – helped shape the show’s somber tone, reinforcing the bleak, crime-ridden undercurrent. The city’s bridges and skyscrapers were meticulously augmented, stretching Gotham beyond the limits of practical sets while preserving its grounded, brutalist aesthetic. Even the scars and wounds on Sofia Falcone were enhanced through digital artistry, ensuring that her past traumas remained ever-present, etched into her skin.

    The series wasn’t without its large-scale effects – far from it. Han and his team orchestrated massive sequences of urban devastation. “The floodwaters were one of our biggest challenges,” Han notes, referring to the ongoing impact of the catastrophic deluge that left Gotham in ruins. One particularly harrowing sequence required simulating a tsunami tearing through the streets – not as an action set piece, but as a deeply personal moment of loss. “Telling Victor’s story of how he lost his entire family in the bombing and floods of Gotham was heartbreaking,” Han says. “Normally, you create an event like that for excitement, for tension. But for us, it was about capturing emotional devastation.”

    Perhaps the most technically intricate sequences were the shootouts, hallmarks of Gotham’s criminal underbelly. “We programmed millisecond-accurate synced flash guns to mimic dramatic gunfire light,” Han explains, ensuring that the interplay of practical and digital elements remained imperceptible. Every muzzle flash, every ricochet was meticulously planned and rendered. The ultimate achievement for Han and his team wasn’t crafting the biggest explosion or the most elaborate digital sequence – it was making Gotham itself feel inescapably real. He says, “Nothing was more important to us than for you to forget that there are 3,100 VFX shots in this series.”
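Han doesn’t detail the rig behind those synced flash guns, but the underlying cue-list idea is straightforward to sketch. The Python below is purely illustrative: the cue timings, channel numbers and fire() stub are invented for the example, and a production rig would drive dedicated lighting or DMX hardware rather than print statements.

```python
# Illustrative sketch of a millisecond-accurate flash cue list, loosely
# inspired by the synced flash guns described for The Penguin's shootouts.
# All cues and the hardware interface are hypothetical.
import time

# (offset in milliseconds from "action", flash channel) - invented cues
GUNFIRE_CUES = [(0, 1), (120, 2), (135, 1), (480, 3), (495, 2)]

def fire(channel: int) -> None:
    # Stub: a real implementation would pulse a GPIO pin or DMX channel.
    print(f"flash on channel {channel}")

def run_cues(cues: list[tuple[int, int]]) -> None:
    start = time.perf_counter()
    for offset_ms, channel in sorted(cues):
        # Sleep until the cue point; perf_counter offers sub-millisecond
        # resolution on most platforms, which is what "millisecond-accurate
        # sync" demands.
        delay = start + offset_ms / 1000.0 - time.perf_counter()
        if delay > 0:
            time.sleep(delay)
        fire(channel)

if __name__ == "__main__":
    run_cues(GUNFIRE_CUES)
```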

The challenge for The Residence was making one of the most recognizable buildings in the world feel both immersive and narratively engaging.

Bringing the universe of Dune to life on TV for HBO’s Dune: Prophecy requires a delicate balance of realism and imagination, grounded in natural physics yet awe-inspiring in scale. Dune: Prophecy looks to challenge traditional fantasy dominance with its stunning, desert-bound landscapes and intricate space-faring visuals, uniting the grandeur of Denis Villeneuve’s films with the demands of episodic storytelling. Set thousands of years before the events of the films, the series explores the early days of the Bene Gesserit, a secretive order wielding extraordinary abilities. Translating that power into a visual language required technical innovation. “Kudos to Important Looking Pirates for the space folding and Agony work,” says VFX Supervisor Mike Enriquez. No Dune project would be complete without its most iconic inhabitant, the sandworm. “We’re incredibly proud of what the team at Image Engine created,” says VFX Producer Terron Pratt. “Precise animation conveyed this creature’s weight and massive scale, while incredibly detailed sand simulations integrated it into the environment.” Every grain of sand had to move believably in response to the worm’s colossal presence to ensure the physics of Arrakis remained authentic.

Floodwaters play a significant part in the destruction of Gotham in The Penguin. One particularly harrowing sequence required simulating a tsunami tearing through the streets.

American Primeval integrated visual effects with practical techniques in creative, unconventional ways. The massacre sequence showcases technical mastery and pulls the audience into the brutal reality of the American frontier.

For the Zimia spaceport, an enormous hub of interstellar commerce, the Dune: Prophecy production team built a vast practical set to provide a strong scale foundation. However, its full grandeur came to life in post. “By extending this environment with CG, we amplified the scope of our world, making it feel expansive and deeply impactful,” Pratt explains. The result was a sprawling, futuristic cityscape with tangible weight, its practical and digital elements impeccably blended.

Wētā FX sought to advance digital world-building for Season 2 of The Lord of the Rings: The Rings of Power while staying true to J.R.R. Tolkien’s vision.

Visual effects extended beyond character work for Lady in the Lake, playing a key role in the show’s immersive world-building.

For House of the Dragon VFX Supervisor Daði Einarsson, Season 2 presented some of the HBO show’s most complex and ambitious visual effects work. The Battle at Rook’s Rest in Episode 4 was a milestone for the series, marking its first full-scale dragon-on-dragon aerial battle. “We were tasked with pitting three dragons against each other in an all-out aerial war above a castle siege,” Einarsson says. Capturing the actors’ performances mid-flight required a combination of motion-controlled cameras, preprogrammed motion bases with saddles and LED volume lighting – all mapped directly from fully animated previsualized sequences approved by director Alan Taylor and showrunner Ryan J. Condal. On the ground, the battlefield required digital crowd replication, extensive environment extensions and pyrotechnic enhancements to create a war zone that felt both vast and intimately chaotic. “In the air, we created a fully CG version of the environment to have full control over the camera work,” Einarsson explains. Under the supervision of Sven Martin, the Pixomondo team stitched together breathtaking aerial combat, ensuring the dragons moved with the weight and raw power befitting their legendary status.

Blood, weapon effects and period-accurate muzzle flashes heightened the intensity of the brutal fight sequences in American Primeval. The natural elements and violence reflected the harsh realities of the American West in 1857.

The Residence brings a refined, detailed approach to environmental augmentation, using visual effects to take the audience on a journey through the White House in this political murder mystery.

Episode 7 introduced Hugh Hammer’s claiming of Vermithor, Westeros’ second-largest dragon. Rather than breaking the sequence into multiple shots, Einarsson and director Loni Peristere saw an opportunity to craft something exceptional: a single, uninterrupted long take reminiscent of Children of Men and Gravity. “It took a lot of planning to design a series of beats that cohesively flowed from one into the next, with Hugh leading the camera by action and reaction,” Einarsson says. The sequence, which involved Hugh dodging Vermithor’s flames and ultimately claiming the beast through sheer bravery, was technically demanding. To achieve it, the team stitched together five separate takes of Hugh’s performance, shot over two days weeks apart because the set had to be struck and rebuilt in different configurations. VFX Supervisor Wayne Stables and the team at Wētā ensured the transitions were imperceptible, uniting practical and digital elements into a continuous, immersive moment. “The Dragonmont Cavern environment was a beautiful, raised gantry and cave designed by Jim Clay and expanded by Wētā,” Einarsson says. Then Rowley Imran’s stunt team and Mike Dawson’s SFX team engulfed the set in practical flames so that every element, from fire to dust to movement, contributed to the illusion of real-time danger.

    For Einarsson, the most significant challenge wasn’t just in making these sequences visually spectacular – it was ensuring they belonged within the same world as the quiet, dialogue-driven moments in King’s Landing. “The aim is for incredibly complex and spectacular visual effects scenes to feel like they belong in the same world as two people talking in a council chamber,” he states. Every dragon, flame and gust of wind had to feel as lived-in as the politics playing out beneath them.

Season 4 of The Boys delivered the fully CG octopus character Ambrosius. A challenge was crafting a believable yet expressive sea creature and keeping it grounded while still embracing the show’s signature absurdity.

In The Penguin, Gotham isn’t just a city; it’s a living, breathing entity shaped by destruction, decay and the quiet menace lurking beneath its streets.

The Boys continues to defy genre norms, delivering audacious, technically complex effects that lean into its hyperviolent, satirical take on superheroes. For The Boys VFX Supervisor Stephan Fleet, Season 4 delivered some of the Amazon Prime show’s most dramatic effects yet, from the self-replicating Splinter to the fully CG octopus character, Ambrosius. Splinter, who has the ability to duplicate himself, presented a unique challenge. Fleet says, “His introduction on the podium was a complex motion control sequence. Eight hours of rehearsal, six hours of filming – for one shot.” Splinter’s design came with an added layer of difficulty. “We had to figure out how to make a nude male clone,” Fleet says. “Normally, you can hide doubles’ bodies in clothes – not this time!” The final effect required a mix of prosthetic cover-up pieces and VFX face replacement, and took multiple iterations to get right. Ambrosius became one of The Boys’ most unexpected breakout characters. “It’s fun making a full-on character in the show that’s an octopus,” Fleet reveals in a nod to the show’s absurd side. “As much as possible, we aim for a grounded approach and try to attain a level of thought and detail you don’t often find on TV.”

While the battle for outstanding visual effects will likely be dominated by large-scale fantasy and sci-fi productions, several standout series are also making waves with their innovative and immersive visual storytelling. Netflix’s The Residence, led by VFX Supervisor Seth Hill, brings a refined, detailed approach to environmental augmentation, enhancing the grandeur of the White House setting in this political murder mystery. “Using visual effects to take the audience on a journey through an iconic location like the White House was really fun,” Hill says. “It’s a cool and unique use of visual effects.” One of the most ambitious sequences involved what the team called the Doll House, a digital rendering of the White House with its south façade removed, exposing the interior like a cross-section of a dollhouse. “Going back and forth from filmed footage to full CGI – that jump from grounded realism to abstract yet still real – was quite tricky,” Hill explains, adding, “VFX is best when it is in service of the storytelling, and The Residence presented a unique opportunity to do just that. It was a big challenge and a tough nut to crack, but those creative and technical hurdles are a good part of what makes it so rewarding.”

    “We were tasked with pitting three dragons against each other in an all-out aerial war above a castle siege. In the air, we created a fully CG version of the environment to have full control over the camera work.”—Daði Einarsson, VFX Supervisor, House of the Dragon

The Battle at Rook’s Rest in Episode 4 of House of the Dragon Season 2 was a major milestone for the series, marking the first full-scale dragon-on-dragon aerial battle.

Season 2 of House of the Dragon presented some of the most complex and ambitious visual effects work for the show to date.

For Jay Worth, VFX Supervisor on Apple TV+’s Lady in the Lake, the challenge was twofold: create seamless effects and preserve the raw emotional truth of a performance. One of the most significant technical achievements was de-aging Natalie Portman. “It seems so easy on paper, but the reality was far more challenging,” Worth admits. Worth had tackled de-aging before, but never with the same level of success. “For me, it is simply because of her performance.” Portman delivered a nuanced, youthful portrayal that felt entirely authentic to the time period. “It made our job both so much easier and set the bar so high for us. Sometimes, you can hide in a scene like this – you pull the camera back, cut away before the most expressive parts of the dialogue, or the illusion breaks,” Worth explains. In Lady in the Lake, there was nowhere to hide. “I think that is what I am most proud of with these shots. It felt like the longer you stayed on them, the more you believed them. That is a real feat with this sort of work.” Skully VFX handled the de-aging. “They nailed the look early on and delivered throughout the project on this difficult task.” Working alongside Production Designer Jc Molina, the VFX team helped shape a world that felt rich, lived-in and historically precise. “We were entrusted with the most important part of this show – do we believe this performance from this character in this part of her journey? – and we feel like we were able to deliver on this challenge.”

On the other end of the spectrum, Netflix’s American Primeval, under the guidance of VFX Supervisor Andrew Ceperley, delivers rugged, visceral realism in its portrayal of the untamed American frontier. With brutal battle sequences, sprawling landscapes and historical re-creations that interweave practical and digital effects, the series stands as a testament to how VFX can enhance grounded, historical storytelling. Ceperley says, “The standout is definitely the nearly three-minute single-shot massacre sequence in the forest episode.” Designed to immerse the audience in the raw, chaotic violence of the frontier, the scene captures every brutal detail with unrelenting intensity. The challenge was crafting invisible visual effects, enhancing practical stunts and destruction without breaking the immersive, handheld camera style. “The sequence was designed to be one shot made up of 10 individual takes, shot over seven days, seamlessly stitched together, all while using a handheld camera on an extremely wide-angle lens.”

One of the most complex moments involved a bull smashing through a wagon while the characters hid underneath. Rather than relying on CGI, the team took a practical approach, placing a 360-degree camera under the wagon while the special effects team rigged it to explode in a way that simulated an impact. “A real bull was then guided to run toward the 360 camera and leap over it,” Ceperley says. The footage was blended with live-action shots of the actors with minimal CGI enhancements – just dust and debris – to complete the effect. Adding to the difficulty, the scene was set at sunset, giving the team an extremely limited window to capture each day’s footage. The massacre sequence was a prime example of integrating visual effects with practical techniques in creative, unconventional ways, blending old-school in-camera effects with modern stitching techniques to create a visceral cinematic moment that stayed true to the show’s raw, historical aesthetic. “Using old techniques in new, even strange ways and seeing it pay off and deliver on the original vision was the most rewarding part.”
    WWW.VFXVOICE.COM
    VFX EMMY CONTENDERS: SETTING THE BENCHMARK FOR VISUAL EFFECTS ON TV
    By JENNIFER CHAMPAGNE

House of the Dragon expands its dragon-filled world in its second season, offering more large-scale battles and heightened aerial warfare. (Image courtesy of HBO)

The 2025 Emmy race for outstanding visual effects is shaping up to be one of the most competitive in years with major genre heavyweights breaking new ground on what’s possible on television. As prestige fantasy and sci-fi continue to dominate, the battle for the category will likely come down to sheer scale, technical innovation and how seamlessly effects are integrated into storytelling. Returning titans like House of the Dragon and The Lord of the Rings: The Rings of Power have proven their ability to deliver breathtaking visuals. At the same time, Dune: Prophecy enters the conversation as a visually stunning newcomer. The Boys remains the category’s wildcard, bringing its own brand of hyper-realistic, shock-value effects to the race. With its subtle yet immersive world-building, The Penguin stands apart from the spectacle-driven contenders, using “invisible” VFX to transform Gotham into a post-flooded, decaying metropolis. Each series offers a distinct approach to digital effects, making for an intriguing showdown between blockbuster-scale world-building and more nuanced, atmospheric craftsmanship. Sharing the arena with marquee pacesetters HBO’s The Last of Us, Disney+’s Andor and Netflix’s Squid Game, these series lead the charge in ensuring that the 2025 Emmy race isn’t just about visual spectacle; it’s about which shows will set the next benchmark for visual effects on television. The following insights and highlights from VFX supervisors of likely Emmy contenders illustrate why their award-worthy shows have caught the attention of TV watchers and VFX Emmy voters.

The Penguin, with its subtle yet immersive world-building, stands apart from the spectacle-driven contenders, using “invisible” VFX to transform Gotham into a post-flooded, decaying metropolis. (Image courtesy of HBO)

For The Lord of the Rings: The Rings of Power VFX Supervisor Jason Smith, the second season presented some of the Amazon series’ most ambitious visual effects challenges. From the epic Battle of Eregion to the painstaking design of the Entwives, Smith and his team at Wētā FX sought to advance digital world-building while staying true to J.R.R. Tolkien’s vision. “The Battle of Eregion was amazing to work on – and challenging too, because it’s a pivotal moment in Tolkien’s story,” Smith states. Unlike typical large-scale clashes, this battle begins as a siege culminating in an explosive cavalry charge. “We looked for every way we could to heighten the action during the siege by keeping the armies interacting, even at a distance,” Smith explains. His team introduced projectiles and siege weaponry to create dynamic action, ensuring the prolonged standoff felt kinetic. The environment work for Eregion posed another challenge. The city was initially constructed as a massive digital asset in Season 1, showcasing the collaborative brilliance of the Elves and Dwarves. In Season 2, that grandeur had to be systematically razed to the ground. “The progression of destruction had to be planned extremely carefully,” Smith notes. His team devised seven distinct levels of damage, mapping out in granular detail which areas would be smoldering, reduced to rubble or utterly consumed by fire.
“Our goal was to have the audience feel the loss that the Elves feel as this beautiful symbol of the height of Elvendom is utterly razed.”

The SSVFX team helped shape a world for Lady in the Lake that felt rich, lived-in and historically precise. (Image courtesy of Apple TV+)

One of the most ambitious effects for Season 4 of The Boys was Splinter, who has the ability to duplicate himself. The sequence required eight hours of rehearsal and six hours of filming for one shot. The final effect was a mix of prosthetic cover-up pieces and VFX face replacement. (Image courtesy of Prime Video)

The Penguin, HBO Max’s spinoff series of The Batman, centers on Oswald ‘Oz’ Cobb’s ruthless rise to power, and relies on meticulous environmental effects, smoothly integrating CG elements to enhance Gotham’s noir aesthetic without ever calling attention to the work itself. “The most rewarding part of our work was crafting VFX that don’t feel like VFX,” says VFX Supervisor Johnny Han. Across the series’ 3,100 VFX shots, every collapsing freeway, skyline extension and flicker of light from a muzzle flash had to feel utterly real – woven so naturally into the world of Gotham that viewers never stopped to question its authenticity.

Zimia spaceport, an enormous hub of interstellar commerce in Dune: Prophecy. The production team built a vast practical set to provide a strong scale foundation, but its full grandeur came to life in post by extending this environment with CG. (Images courtesy of HBO)

The second season of The Lord of the Rings: The Rings of Power refined its environments, elevating Middle-earth’s realism. (Image courtesy of Prime Video)

Some of the series’ most striking visual moments were also its most understated. The shift of Gotham’s seasons – transforming sunlit summer shoots into autumn’s muted chill – helped shape the show’s somber tone, reinforcing the bleak, crime-ridden undercurrent. The city’s bridges and skyscrapers were meticulously augmented, stretching Gotham beyond the limits of practical sets while preserving its grounded, brutalist aesthetic. Even the scars and wounds on Sofia Falcone were enhanced through digital artistry, ensuring that her past traumas remained ever-present, etched into her skin. The series wasn’t without its large-scale effects – far from it. Han and his team orchestrated massive sequences of urban devastation. “The floodwaters were one of our biggest challenges,” Han notes, referring to the ongoing impact of the catastrophic deluge that left Gotham in ruins. One particularly harrowing sequence required simulating a tsunami tearing through the streets – not as an action set piece, but as a deeply personal moment of loss. “Telling Victor’s story of how he lost his entire family in the bombing and floods of Gotham was heartbreaking,” Han says. “Normally, you create an event like that for excitement, for tension. But for us, it was about capturing emotional devastation.” Perhaps the most technically intricate sequences were the shootouts, hallmarks of Gotham’s criminal underbelly. “We programmed millisecond-accurate synced flash guns to mimic dramatic gunfire light,” Han explains, ensuring that the interplay of practical and digital elements remained imperceptible. Every muzzle flash, every ricochet was meticulously planned and rendered. The ultimate achievement for Han and his team wasn’t crafting the biggest explosion or the most elaborate digital sequence – it was making Gotham itself feel inescapably real.
He says, “Nothing was more important to us than for you to forget that there are 3,100 VFX shots in this series.”

The challenge for The Residence was making one of the most recognizable buildings in the world feel both immersive and narratively engaging. (Photo: Erin Simkin. Courtesy of Netflix)

Bringing the universe of Dune to life on TV for HBO’s Dune: Prophecy requires a delicate balance of realism and imagination, grounded in natural physics, yet awe-inspiring in scale. Dune: Prophecy looks to challenge traditional fantasy dominance with its stunning, desert-bound landscapes and intricate space-faring visuals, uniting the grandeur of Denis Villeneuve’s films with the demands of episodic storytelling. Set thousands of years before the events of the films, the series explores the early days of the Bene Gesserit, a secretive order wielding extraordinary abilities. Translating that power into a visual language required technical innovation. “Kudos to Important Looking Pirates for the space folding and [Lila’s] Agony work,” says VFX Supervisor Mike Enriquez. No Dune project would be complete without its most iconic inhabitant, the sandworm. “We’re incredibly proud of what the team at Image Engine created,” says VFX Producer Terron Pratt. “Precise animation conveyed this creature’s weight and massive scale, while incredibly detailed sand simulations integrated it into the environment.” Every grain of sand had to move believably in response to the worm’s colossal presence to ensure the physics of Arrakis remained authentic.

Floodwaters play a significant part in the destruction of Gotham in The Penguin. One particularly harrowing sequence required simulating a tsunami tearing through the streets. (Image courtesy of HBO)

American Primeval integrated visual effects with practical techniques in creative, unconventional ways. The massacre sequence showcases technical mastery and pulls the audience into the brutal reality of the American frontier. (Photo: Justin Lubin. Courtesy of Netflix)

For the Zimia spaceport, an enormous hub of interstellar commerce, the Dune: Prophecy production team built a vast practical set to provide a strong scale foundation. However, its full grandeur came to life in post. “By extending this environment with CG, we amplified the scope of our world, making it feel expansive and deeply impactful,” Pratt explains. The result was a sprawling, futuristic cityscape that retained a tangible weight, with practical and digital elements impeccably blended.

Wētā FX sought to advance digital world-building for Season 2 of The Lord of the Rings: The Rings of Power while staying true to J.R.R. Tolkien’s vision. (Image courtesy of Prime Video)

Visual effects extended beyond character work for Lady in the Lake, playing a key role in the show’s immersive world-building. (Image courtesy of Apple TV+)

For House of the Dragon VFX Supervisor Daði Einarsson, Season 2 presented some of the HBO show’s most complex and ambitious visual effects work. The Battle at Rook’s Rest in Episode 4 was a milestone for the series, marking the first full-scale dragon-on-dragon aerial battle. “We were tasked with pitting three dragons against each other in an all-out aerial war above a castle siege,” Einarsson says. Capturing the actors’ performances mid-flight required a combination of motion-controlled cameras, preprogrammed motion bases with saddles and LED volume lighting – all mapped directly from fully animated previsualized sequences approved by director Alan Taylor and Showrunner Ryan J. Condal.
On the ground, the battlefield required digital crowd replication, extensive environment extensions and pyrotechnic enhancements to create a war zone that felt both vast and intimately chaotic. “In the air, we created a fully CG version of the environment to have full control over the camera work,” Einarsson explains. Under the supervision of Sven Martin, the Pixomondo team stitched together breathtaking aerial combat, ensuring the dragons moved with the weight and raw power befitting their legendary status.

Blood, weapon effects and period-accurate muzzle flashes heightened the intensity of the brutal fight sequences in American Primeval. The natural elements and violence reflected the harsh realities of the American West in 1857. (Image courtesy of Netflix)

The Residence brings a refined, detailed approach to environmental augmentation, using visual effects to take the audience on a journey through the White House in this political murder mystery. (Photo: Jessica Brooks. Courtesy of Netflix)

Episode 7 introduced Hugh Hammer’s claim of Vermithor, Westeros’ second-largest dragon. Rather than breaking the sequence into multiple shots, Einarsson and director Loni Peristere saw an opportunity to craft something exceptional: a single, uninterrupted long take reminiscent of Children of Men and Gravity. “It took a lot of planning to design a series of beats that cohesively flowed from one into the next, with Hugh leading the camera by action and reaction,” Einarsson says. The sequence, which involved Hugh dodging Vermithor’s flames and ultimately claiming the beast through sheer bravery, was technically demanding. To achieve it, the team stitched together five separate takes of Hugh’s performance, shot over two days weeks apart because the set had to be struck and rebuilt in different configurations. VFX Supervisor Wayne Stables and the team at Wētā ensured the transitions were imperceptible, uniting practical and digital elements into a continuous, immersive moment. “The Dragonmont Cavern environment was a beautiful, raised gantry and cave designed by [Production Designer] Jim Clay and expanded by Wētā,” Einarsson says. Then Rowley Imran’s stunt team and Mike Dawson’s SFX team engulfed the set in practical flames so every element, from fire to dust to movement, contributed to the illusion of real-time danger. For Einarsson, the most significant challenge wasn’t just in making these sequences visually spectacular – it was ensuring they belonged within the same world as the quiet, dialogue-driven moments in King’s Landing. “The aim is for incredibly complex and spectacular visual effects scenes to feel like they belong in the same world as two people talking in a council chamber,” he states. Every dragon, flame and gust of wind had to feel as lived-in as the politics playing out beneath them.

Season 4 of The Boys delivered the fully CG octopus character, Ambrosius. A challenge was crafting a believable yet expressive sea creature and keeping it grounded while still embracing the show’s signature absurdity. (Image courtesy of Prime Video)

In The Penguin, Gotham isn’t just a city; it’s a living, breathing entity shaped by destruction, decay and the quiet menace lurking beneath its streets. (Images courtesy of HBO)

The Boys continues to defy genre norms, delivering audacious, technically complex effects that lean into its hyperviolent, satirical take on superheroes.
For The Boys VFX Supervisor Stephan Fleet, Season 4 delivered some of the Amazon Prime show’s most dramatic effects yet, from the self-replicating Splinter to the fully CG octopus character, Ambrosius. Splinter, who has the ability to duplicate himself, presented a unique challenge. Fleet says, “His introduction on the podium was a complex motion control sequence. Eight hours of rehearsal, six hours of filming – for one shot.” Splinter’s design came with an added layer of difficulty. “We had to figure out how to make a nude male clone,” Fleet says. “Normally, you can hide doubles’ bodies in clothes – not this time!” The final effect required a mix of prosthetic cover-up pieces and VFX face replacement, and it took multiple iterations to get right. Ambrosius became one of The Boys’ most unexpected breakout characters. “It’s fun making a full-on character in the show that’s an octopus,” Fleet reveals in a nod to the show’s absurd side. “As much as possible, we aim for a grounded approach and try to attain a level of thought and detail you don’t often find on TV.”

While the battle for outstanding visual effects will likely be dominated by large-scale fantasy and sci-fi productions, several standout series are also making waves with their innovative and immersive visual storytelling. Netflix’s The Residence, led by VFX Supervisor Seth Hill, brings a refined, detailed approach to environmental augmentation, enhancing the grandeur of the White House setting in this political murder mystery. “Using visual effects to take the audience on a journey through an iconic location like the White House was really fun,” Hill says. “It’s a cool and unique use of visual effects.” One of the most ambitious sequences involved what the team called the Doll House, a digital rendering of the White House with its south façade removed, exposing the interior like a cross-section of a dollhouse. “Going back and forth from filmed footage to full CGI – that jump from grounded realism to abstract yet still real – was quite tricky,” Hill explains, adding, “VFX is best when it is in service of the storytelling, and The Residence presented a unique opportunity to do just that. It was a big challenge and a tough nut to crack, but those creative and technical hurdles are a good part of what makes it so rewarding.”

“We were tasked with pitting three dragons against each other in an all-out aerial war above a castle siege. In the air, we created a fully CG version of the environment to have full control over the camera work.”
—Daði Einarsson, VFX Supervisor, House of the Dragon

The Battle at Rook’s Rest in Episode 4 of House of the Dragon Season 2 was a major milestone for the series, marking the first full-scale dragon-on-dragon aerial battle. (Image courtesy of HBO)

Season 2 of House of the Dragon presented some of the most complex and ambitious visual effects work for the show to date. (Photo: Theo Whiteman. Courtesy of HBO)

For Jay Worth, VFX Supervisor on Apple TV+’s Lady in the Lake, the challenge was two-fold: create seamless effects and preserve the raw emotional truth of a performance. One of the most significant technical achievements was de-aging Natalie Portman. “It seems so easy on paper, but the reality was far more challenging,” Worth admits. Worth had tackled de-aging before, but never with the same level of success. “For me, it is simply because of her performance.” Portman delivered a nuanced, youthful portrayal that felt entirely authentic to the time period.
“It made our job both so much easier and set the bar so high for us. Sometimes, you can hide in a scene like this – you pull the camera back, cut away before the most expressive parts of the dialogue, or the illusion breaks,” Worth explains. In Lady in the Lake, there was nowhere to hide. “I think that is what I am most proud of with these shots. It felt like the longer you stayed on them, the more you believed them. That is a real feat with this sort of work.” Skully VFX handled the de-aging. “They nailed the look early on and delivered throughout the project on this difficult task.” Working alongside Production Designer Jc Molina, the VFX team helped shape a world that felt rich, lived-in and historically precise. “We were entrusted with the most important part of this show – do we believe this performance from this character in this part of her journey? – and we feel like we were able to deliver on this challenge.” On the other end of the spectrum, Netflix’s American Primeval, under the guidance of VFX Supervisor Andrew Ceperley, delivers rugged, visceral realism in its portrayal of the untamed American frontier. With brutal battle sequences, sprawling landscapes and historical re-creations that interweave practical and digital effects, the series stands as a testament to how VFX can enhance grounded, historical storytelling. Ceperley says, “The standout is definitely the nearly three-minute single-shot massacre sequence in the forest episode.” Designed to immerse the audience in the raw, chaotic violence of the frontier, the scene captures every brutal detail with unrelenting intensity. The challenge was crafting invisible visual effects, enhancing practical stunts and destruction without breaking the immersive, handheld camera style. “The sequence was designed to be one shot made up of 10 individual takes, shot over seven days, seamlessly stitched together, all while using a handheld camera on an extremely wide-angle lens.” One of the most complex moments involved a bull smashing through a wagon while the characters hid underneath. Rather than relying on CGI, the team took a practical approach, placing a 360-degree camera under the wagon while the special effects team rigged it to explode in a way that simulated an impact. “A real bull was then guided to run toward the 360 camera and leap over it,” Ceperley says. The footage was blended with live-action shots of the actors with minimal CGI enhancements – just dust and debris – to complete the effect. Adding to the difficulty, the scene was set at sunset, giving the team an extremely limited window to capture each day’s footage. The massacre sequence was a prime example of integrating visual effects with practical techniques in creative, unconventional ways, blending old-school in-camera effects with modern stitching techniques to create a visceral cinematic moment that stayed true to the show’s raw, historical aesthetic. “Using old techniques in new, even strange ways and seeing it pay off and deliver on the original vision was the most rewarding part.”
  • Netflix Tudum 2025: Everything Announced

    WWW.IGN.COM
    Netflix Tudum 2025: Everything Announced
    Netflix Tudum 2025 has begun and promises to reveal a ton of exciting details about the most-anticipated shows and movies heading to the streamer, including Wake Up Dead Man: A Knives Out Mystery's release date and Squid Game's Season 3 trailer. There will be a ton of announcements during the Netflix Tudum 2025 livestream, and we'll be gathering all the big news right here as it happens, so make sure to stay tuned and refresh often!

    Wake Up Dead Man: A Knives Out Mystery Release Date Revealed
    Rian Johnson's Wake Up Dead Man: A Knives Out Mystery's latest teaser trailer not only revealed more about Benoit Blanc's latest adventure, but also shared that it will arrive on Netflix on December 12, 2025. We don't know much about this new mystery yet, but Blanc himself has described this as his "most dangerous case yet." What we do know is that Daniel Craig's Blanc will be joined by Josh O'Connor, Glenn Close, Josh Brolin, Mila Kunis, Jeremy Renner, Kerry Washington, Andrew Scott, Cailee Spaeny, Daryl McCormack, and Thomas Haden Church.

    Squid Game Season 3 Trailer Teases the Final Games
    Squid Game Season 3 is set to debut on Netflix on June 27, and Tudum shared with the world a new trailer that showcases what these final games have in store for Lee Jung-jae's Gi-hun and more. “The new season will focus on what Gi-hun can and will do after all his efforts fail,” series creator Hwang Dong-hyuk said. "He is in utter despair after losing everything and watching all his efforts go in vain. The story then takes an interesting turn, questioning whether Gi-hun can overcome his shame and rise again to prove that values of humanity — like conscience and kindness — can exist in the arena.”

    Guillermo del Toro's Frankenstein Gets a Teaser Trailer That Shows Off Oscar Isaac's Victor Frankenstein and the 'Misbegotten Creature He's Created'
    Academy Award winner Guillermo del Toro's Frankenstein, an adaptation of Mary Shelley's iconic novel, got a new teaser trailer that shows off Oscar Isaac's Victor Frankenstein and the "misbegotten creature" (Jacob Elordi) he's created. Alongside a glimpse at the film, which will be released in November, fans of del Toro's work will note plenty of familiar imagery in the new teaser, from Isaac’s Victor standing on a decaying staircase holding a candelabra (see: Crimson Peak) to a blood-red angelic figure surrounded in flames (see: the Angel of Death in Hellboy II: The Golden Army, the blue Wood Sprite and the sphinxlike Death in Pinocchio, and even the Faun in Pan’s Labyrinth).

    One Piece Season 2 Trailer Reveals the First Look at Tony Tony Chopper
    The latest trailer for Season 2 of One Piece has arrived, and it has given us our first look at Tony Tony Chopper, who is voiced by Mikaela Hoover (Beef, Guardians of the Galaxy Vol. 3, and Superman). For those unfamiliar, Chopper is a blue-nosed reindeer-boy hybrid who is able to treat various illnesses and wants to travel the world and cure all the diseases that pop up. “What excited me about playing Chopper is the tug of war between his standoffishness and his huge heart,” Hoover told Tudum. “He tries so hard to hide his emotions and put on a tough exterior, but underneath, he’s a big softy, and his love can’t help but come out.” “I believe there is a little Chopper in all of us,” she adds. “We all want to be loved and accepted. We go to great lengths to keep the people that we love safe. There’s a purity to his nature that reminds us of what’s good in the world.”

    Developing...
  • Apple catches its breath as US court rejects tariff tax

    Apple — and almost everybody else — has gotten a slight reprieve as a US court yesterday set aside the Trump tariff tax. But conflict and confusion continue to batter global trade, and while the news will provide a glimmer of relief, it will probably be short-lived. There’s always another dead cat to throw into the flames.

    Three judges from the US Court of International Trade found that the US International Emergency Economic Powers Act, which the Trump administration invoked to justify the imposition of these tariffs, does not give the president the authority to levy these taxes on trade. “The court does not read IEEPA to confer such unbounded authority and sets aside the challenged tariffs imposed thereunder,” they wrote.

    The judgment does not affect the 25% “trafficking tariffs” imposed on Mexican and Canadian products, and it leaves in place the 20% trafficking tariff on Chinese goods. It does, however, end the “worldwide and retaliatory” 10-50% tariffs the administration threw at 57 countries.

    A coalition of small businesses took the case to court, arguing that only Congress has the authority to levy tariffs under the law used by the president’s office. They seem to have prevailed in the argument — at least, so far. It is interesting to note that the administration wanted all the tariff-related lawsuits moved to this particular court, as it felt it would be receptive to the administration’s arguments.

    This turned out to be an error.

    What is an emergency?

    In response, a White House statement from spokesperson Kush Desai maintained the need for these tariffs, calling US trade deficits a “national emergency that has decimated American communities, left our workers behind and weakened our defense industrial base — facts that the court did not dispute.”

    But can a trade in cheap consumer goods be seen as an unusual threat after it has been part of US culture for decades? Not according to the US Court of International Trade. The judges say the trade deficit does not meet the Nixon-era International Emergency Economic Powers Act requirement that an emergency can only be triggered by an “unusual and extraordinary threat.” 

    The journey is by no means over, of course. With the president recently threatening additional tariffs on iPhones made in India (“I have a bit of a problem with my friend, Tim Cook”), the reprieve may be brief.

    Desai’s statement said “unelected judges” are not the right people to decide how to handle what he calls a national emergency. “The administration is committed to using every lever of executive power to address this crisis and restore American greatness.” 

    It seems likely to end at the Supreme Court, even while the administration argues that it should not be bound by the checks and balances that still remain under the US Constitution. For now, an appeal has been lodged with the United States Court of Appeals for the Federal Circuit in Washington. 

    Where is the off-ramp?

    Apple, the world’s biggest consumer electronics company, which contributes a fortune to the US Treasury and employs tens of thousands of Americans, will likely be relieved the tariffs have been set aside.

    The reprieve implies that US consumers won’t need to pay more for their iPhones for a little longer yet. It also better reflects reality: even if Apple were to shift iPhone manufacturing to the US, doing so would take years, cost billions, require engineering skills in quantities that do not yet exist in the US, rely on automation rather than create large numbers of new jobs, and be hampered by the availability of components and materials.

    For the time being, at least, the judgment is a significant obstacle to the tariff taxes, albeit one that throws another spanner in the works for ongoing international trade talks. However, there is still scope for the administration to impose sector-specific taxes.

    All the same, “Tim Apple” will be acutely aware that the future will not look like the past, and the company’s $500 billion investment in the US will be part of the company’s future approach to manufacturing and trade.

    It suggests that while moving iPhone manufacturing to the US may be impractical, moving manufacture of some components and hardware may make sense. It is possible that as Apple and the US administration continue to negotiate, they may yet identify a road that enables both to declare some form of victory.

    You can follow me on social media! Join me on BlueSky, LinkedIn, and Mastodon.
    WWW.COMPUTERWORLD.COM
  • Elden Ring Nightreign is hard for completely different reasons than Elden Ring

    OK, I know you just read that headline, but let me admit first off that I don’t actually think Elden Ring is that hard — not if you take it slow and steady, which the game’s design not only allows but encourages. At every step of the way in Elden Ring, you can decide exactly how you want to play it. It’s very customizable and it rewards patience. Elden Ring Nightreign is the complete opposite, and that’s why I don’t think it’s for me. And it might not be for most other FromSoftware game fans, either, which is pretty shocking.

    It wouldn’t be a FromSoftware game launch without at least a few debates about difficulty and certain players hurling “git gud” at each other like it was ever even remotely cool to say that and not just performatively tryhard at best and antisocial at worst. I try to exist instead in the sector of the FromSoftware fandom that is prosocial rather than antisocial — think Let Me Solo Her, for example, or even consider the real-life story that inspired longtime FromSoftware game director Hidetaka Miyazaki to design Demon’s Souls’ multiplayer elements with prosocial thinking in mind:

    “The origin of that idea is actually due to a personal experience where a car suddenly stopped on a hillside after some heavy snow and started to slip,” says Miyazaki. “The car following me also got stuck, and then the one behind it spontaneously bumped into it and started pushing it up the hill... That’s it! That’s how everyone can get home! Then it was my turn and everyone started pushing my car up the hill, and I managed to get home safely.”

    “But I couldn’t stop the car to say thanks to the people who gave me a shove. I’d have just got stuck again if I’d stopped. On the way back home I wondered whether the last person in the line had made it home, and thought that I would probably never meet the people who had helped me. I thought that maybe if we’d met in another place we’d become friends, or maybe we’d just fight...”

    “You could probably call it a connection of mutual assistance between transient people. Oddly, that incident will probably linger in my heart for a long time.”

    The multiplayer experiences that I’ve had in Dark Souls and Elden Ring definitely do linger in my heart. I’ve also absolutely loved the moments in FromSoftware games in which I’ve personally conquered a difficult section all by myself. But I look back with equal appreciation on the times when I summoned a complete stranger to help me with something — “a connection of mutual assistance between transient people,” as Miyazaki put it. It is how these games are meant to be played, not as brutal solo journeys but as shared experiences.

    Here’s a screenshot I took of my Elden Ring character at the beginning of the game, before I knew I was going to spend 360 hours playing it. Image: FromSoftware via Polygon

    This brings us back to Elden Ring Nightreign, a game directed not by Miyazaki but by Junya Ishizaki. The difference in its multiplayer ethos is stark. This is a game designed with three-player squads in mind; it’s currently very punishing for solo players, and the designers are still working on a duos mode. Because it’s three-player by default, I assumed that the game would be designed around teamwork and would actively reward prosocial behaviors, like base Elden Ring. I would argue that it’s not, and that’s why it’s very hard to have a good time in the game — especially if you’re playing with complete strangers.

    Problem number one: There’s no in-game communication system besides pinging certain locations on the map.
    Lack of chat options is a FromSoftware classic, and in most of these games, you don’t really need communication to understand what to do. Usually, you’re just summoned to help with a boss battle, and after it’s over, you’re done and you go back to your game. But in Nightreign, it’s three-player for the entire game, obviously, and it’s a match-based game, not a hundreds-of-hours RPG. Matches last 45 minutes and every second counts, which means you and your teammates need to be extremely organized throughout. The lack of communication hurts. But that’s not the only problem. Far from it.

    Problem number two: The ring of fire. This game is a combination of Elden Ring’s open world areas and a Fortnite-esque ring of fire that closes in on you constantly. There’s also a Diablo-esque loot system, but you better read those loot descriptions fast, because the fire is coming for you. There are randomized boss fights all over the map, but oops, you might not be able to complete them in time to collect runes from them, because that fire is closing in. There are also special upgrades that you can only get if you defeat these mid-game bosses all over the map, but you might barely even have time to read those descriptions of the special abilities and select one in time for… you guessed it… the fire rushing towards you.

    This second problem becomes even more stressful when you have two other people on your team alongside you. This game has not one but two different sprint buttons in it — a regular sprint and a super-fast sprint that uses up stamina faster. That’s because, of course, you need to be running from that fire. But that means your teammates, and you, need to constantly be doing the equivalent of screaming “move, move, move” like a drill sergeant in an army movie. You will be unwittingly getting annoyed at your teammate who is spending too damn long looking at loot on the ground or at an upgrade tree. The fire is coming! Hurry the fuck up! Again, this is not a game design choice that rewards prosocial behaviors; instead it makes you feel dragged down by the two teammates that you also desperately need to survive the bosses in this game. Even the “revive” process involves you inflicting damage on your teammate to bring them back to life, which is darkly hilarious, because you might also grow to desire hitting them due to how annoyed you might feel that they died during a super difficult fight. Which brings us to the third and final problem.

    Image: FromSoftware

    Third problem: The randomization of the bosses and of the items. The thing about base Elden Ring is that you can figure out a boss and how it works and then patiently build up a character who can deal with that problem. You can memorize that boss’ attack patterns. You can find a save point nearest to that boss and run it back over and over again until you get past it. These are all of the wonderful and rewarding parts of playing FromSoftware video games; these are also the moments when you might do all of those preparations and then think, “Actually, I want to also summon a complete stranger to help me with this boss because it’s still too freaking hard.” And then you can do that, too. None of that is the case in Nightreign, because everything is completely fucking random.

    The bosses, except for the very last boss in each area, are random. The loot is random. Do you have the right loot to fight the boss you’re facing right this second? You may very well not. Do your teammates have it?
You might not even know; you don’t have a way to communicate with them, after all. Is the boss in this area way overleveled for you and your team? It won’t be obvious until you start hitting it, and once you do that, good luck escaping. And if your team does a complete wipe and everyone dies to that boss together, you don’t get to run back together from the nearest save point, having seen its attack patterns, ready to try again with teamwork in mind. Nope, instead you get to start all over again, except now with new randomized bosses and new randomized loot.In other games with randomized loot, like Diablo, or other roguelikes with random elements like Hades, the game is designed with down time in mind. When you’ve completed a fight in Diablo or Hades, you have infinite time to stand around and make decisions. There is no encroaching circle of fire forcing you to read item descriptions and ability trees quickly. There’s a reason for that; the decision-making is the most fun part of a game with randomized elements. Why would Nightreign take that away?All of these aspects of the game do feel less bad if you’re playing with two good friends on voice chat. But even in that scenario, the game is still really punishing, and again, not in a way that other FromSoftware games are punishing. It’s punishing because you need to spend the entire game running, looking at randomized loot as fast as you possibly can before making a snapdecision, running more, desperately encouraging your teammates to keep on running to keep up, warning your teammates about the encroaching flames about to kill them, and did I mention running? Is this a fun way to spend your weekly gamer night with two other adults who just worked a full-time job all day and maybe just wanted to have a nice time playing a video game together?Image: FromSoftware/Bandai NamcoI’ve had a review code for Nightreign for a while now, so I already was worried about these problems before the game launched, but now that it’s launched and I’m seeing early mixed reviews on Steam, I’m ready to commiserate and validate: Yes, this game really doesn’t feel like Elden Ring, and even after some of this stuff gets patched, it’s still fundamentally super different. And that’s not only because it’s multiplayer, but because the multiplayer just doesn’t feel like other multiplayer FromSoftware experiences. It feels like it’s designed not only for people who have two best friends with whom they play competitive games on a regular basis, but also specifically for people who live for thrills and speed — not the methodical, calculated experiences of other FromSoftware games.For all of those reasons, I’m really not sure how this is going to go for FromSoftware over time. Is this game going to eventually encourage some prosocial behaviors amongst players, against all odds? Will people slowly learn the best ways to get through different areas? Will there be a “meta” for working together that emerges over time?It seems possible, and since it’s only been one day, it’s way too early to tell. Various social norms will emerge in the player community, and hopefully they won’t be toxic ones. But I can tell from having already played the game that this is going to be an uphill climb for FromSoftware fans. It’s a very different game — and its specific form of difficulty is going to be a whole new variety for those fans to get used to. And like me, they might just decide they don’t really care for it.See More:
    #elden #ring #nightreign #hard #completely
    Elden Ring Nightreign is hard for completely different reasons than Elden Ring
    OK, I know you just read that headline, but let me admit first off that I don’t actually think Elden Ring is that hard — not if you take it slow and steady, which the game’s design not only allows but encourages. At every step of the way in Elden Ring, you can decide exactly how you want to play it. It’s very customizable and it rewards patience. Elden Ring Nightreign is the complete opposite, and that’s why I don’t think it’s for me. And it might not be for most other FromSoftware game fans, either, which is pretty shocking.It wouldn’t be a FromSoftware game launch without at least a few debates about difficulty and certain players hurling “git gud” at each other like it was ever even remotely cool to say that and not just performatively tryhard at best and antisocial at worst. I try to exist instead in the sector of the FromSoftware fandom that is prosocial rather than antisocial — think Let Me Solo Her, for example, or even consider the real-life story that inspired longtime FromSoftware game director Hidetaka Miyazaki to design Demon’s Souls’ multiplayer elements with prosocial thinking in mind:“The origin of that idea is actually due to a personal experience where a car suddenly stopped on a hillside after some heavy snow and started to slip,” says Miyazaki. “The car following me also got stuck, and then the one behind it spontaneously bumped into it and started pushing it up the hill... That’s it! That’s how everyone can get home! Then it was my turn and everyone started pushing my car up the hill, and I managed to get home safely.”“But I couldn’t stop the car to say thanks to the people who gave me a shove. I’d have just got stuck again if I’d stopped. On the way back home I wondered whether the last person in the line had made it home, and thought that I would probably never meet the people who had helped me. I thought that maybe if we’d met in another place we’d become friends, or maybe we’d just fight...””You could probably call it a connection of mutual assistance between transient people. Oddly, that incident will probably linger in my heart for a long time.”The multiplayer experiences that I’ve had in Dark Souls and Elden Ring definitely do linger in my heart. I’ve also absolutely loved the moments in FromSoftware games in which I’ve personally conquered a difficult section all by myself. But I look back with equal appreciation on the times when I summoned a complete stranger to help me with something — “a connection of mutual assistance between transient people,” as Miyazaki put it. It is how these games are meant to be played, not as brutal solo journeys but as shared experiences.Here’s a screenshot I took of my Elden Ring character at the beginning of the game, before I knew I was going to spend 360 hours playing it Image: FromSoftware via PolygonThis brings us back to Elden Ring Nightreign, a game not directed by Miyazaki but by Junya Ishizaki. The difference in its multiplayer ethos is stark. This is a game designed with three-player squads in mind; it’s currently very punishing for solo players, and the designers are still working on a duos mode. Because it’s three-player by default, I assumed that the game would be designed around teamwork and would actively reward prosocial behaviors, like base Elden Ring. I would argue that it’s not, and that’s why it’s very hard to have a good time in the game — especially if you’re playing with complete strangers.Problem number one: There’s no in-game communication system besides pinging certain locations on the map. 
Lack of chat options is a FromSoftware classic, and in most of these games, you don’t really need communication to understand what to do. Usually, you’re just summoned to help with a boss battle, and after it’s over, you’re done and you go back to your game. But in Nightreign, it’s three-player for the entire game, obviously, and it’s a match-based game, not a hundreds-of-hours RPG. Matches last 45 minutes and every second counts, which means you and your teammates need to be extremely organized throughout. The lack of communication hurts. But that’s not the only problem. Far from it.Problem number two: The ring of fire. This game is a combination of Elden Ring’s open world areasand a Fortnite-esque ring of fire that closes in on you constantly. There’s also a Diablo-esque loot system, but you better read those loot descriptions fast, because the fire is coming for you. There are randomized boss fights all over the map, but oops, you might not be able to complete them in time to collect runes from them, because that fire is closing in. There are also special upgrades that you can only get if you defeat these mid-game bosses all over the map, but you might barely even have time to read those descriptions of the special abilities and select one in time for… you guessed it… the fire rushing towards you.This second problem becomes even more stressful when you have two other people on your team alongside you. This game has not one but two different sprint buttons in it — a regular sprint, and a super-fast sprint that uses up stamina faster. That’s because, of course, you need to be running from that fire. But that means your teammates, and you, need to constantly be doing the equivalent of screaming “move, move, move” like a drill sergeant in an army movie. You will be unwittingly getting annoyed at your teammate who is spending too damn long looking at loot on the ground or at an upgrade tree. The fire is coming! Hurry the fuck up! Again, this is not a game design choice that rewards prosocial behaviors and instead makes you feel dragged down by the two teammates that you also desperately need to survive the bosses in this game. Even the “revive” process involves you inflicting damage on your teammate to bring them back to life, which is darkly hilarious, because you might also grow to desire hitting them due to how annoyed you might feel that they died during a super difficult fight. Which brings us to the third and final problem.Image: FromSoftwareThird problem: The randomization of the bosses and of the items. The thing about base Elden Ring is that you can figure out a boss and how it worksand then patiently build up a character who can deal with that problem. You can memorize that boss’ attack patterns. You can find a save point nearest to that boss and run it back over and over again until you get past it. These are all of the wonderful and rewarding parts of playing FromSoftware video games; these are also the moments when you might do all of those preparations and then think, “Actually, I want to also summon a complete stranger to help me with this boss because it’s still too freaking hard.” And then you can do that, too. None of that is the case in Nightreign, because everything is completely fucking random.The bosses, except for the very last boss in each area, are random. The loot is random. Do you have the right loot to fight the boss you’re facing right this second? You may very well not. Do your teammates have it? 
You might not even know; you don’t have a way to communicate with them, after all. Is the boss in this area way overleveled for you and your team? It won’t be obvious until you start hitting it, and once you do that, good luck escaping. And if your team does a complete wipe and everyone dies to that boss together, you don’t get to run back together from the nearest save point, having seen its attack patterns, ready to try again with teamwork in mind. Nope, instead you get to start all over again, except now with new randomized bosses and new randomized loot.In other games with randomized loot, like Diablo, or other roguelikes with random elements like Hades, the game is designed with down time in mind. When you’ve completed a fight in Diablo or Hades, you have infinite time to stand around and make decisions. There is no encroaching circle of fire forcing you to read item descriptions and ability trees quickly. There’s a reason for that; the decision-making is the most fun part of a game with randomized elements. Why would Nightreign take that away?All of these aspects of the game do feel less bad if you’re playing with two good friends on voice chat. But even in that scenario, the game is still really punishing, and again, not in a way that other FromSoftware games are punishing. It’s punishing because you need to spend the entire game running, looking at randomized loot as fast as you possibly can before making a snapdecision, running more, desperately encouraging your teammates to keep on running to keep up, warning your teammates about the encroaching flames about to kill them, and did I mention running? Is this a fun way to spend your weekly gamer night with two other adults who just worked a full-time job all day and maybe just wanted to have a nice time playing a video game together?Image: FromSoftware/Bandai NamcoI’ve had a review code for Nightreign for a while now, so I already was worried about these problems before the game launched, but now that it’s launched and I’m seeing early mixed reviews on Steam, I’m ready to commiserate and validate: Yes, this game really doesn’t feel like Elden Ring, and even after some of this stuff gets patched, it’s still fundamentally super different. And that’s not only because it’s multiplayer, but because the multiplayer just doesn’t feel like other multiplayer FromSoftware experiences. It feels like it’s designed not only for people who have two best friends with whom they play competitive games on a regular basis, but also specifically for people who live for thrills and speed — not the methodical, calculated experiences of other FromSoftware games.For all of those reasons, I’m really not sure how this is going to go for FromSoftware over time. Is this game going to eventually encourage some prosocial behaviors amongst players, against all odds? Will people slowly learn the best ways to get through different areas? Will there be a “meta” for working together that emerges over time?It seems possible, and since it’s only been one day, it’s way too early to tell. Various social norms will emerge in the player community, and hopefully they won’t be toxic ones. But I can tell from having already played the game that this is going to be an uphill climb for FromSoftware fans. It’s a very different game — and its specific form of difficulty is going to be a whole new variety for those fans to get used to. And like me, they might just decide they don’t really care for it.See More: #elden #ring #nightreign #hard #completely
    WWW.POLYGON.COM
    Elden Ring Nightreign is hard for completely different reasons than Elden Ring
OK, I know you just read that headline, but let me admit first off that I don’t actually think Elden Ring is that hard — not if you take it slow and steady, which the game’s design not only allows but encourages. At every step of the way in Elden Ring, you can decide exactly how you want to play it. It’s very customizable and it rewards patience. Elden Ring Nightreign is the complete opposite, and that’s why I don’t think it’s for me. And it might not be for most other FromSoftware game fans, either, which is pretty shocking.

It wouldn’t be a FromSoftware game launch without at least a few debates about difficulty and certain players hurling “git gud” at each other like it was ever even remotely cool to say that and not just performatively tryhard at best and antisocial at worst. I try to exist instead in the sector of the FromSoftware fandom that is prosocial rather than antisocial — think Let Me Solo Her, for example, or even consider the real-life story that inspired longtime FromSoftware game director Hidetaka Miyazaki to design Demon’s Souls’ multiplayer elements with prosocial thinking in mind (via an old 2010 Eurogamer interview):

“The origin of that idea is actually due to a personal experience where a car suddenly stopped on a hillside after some heavy snow and started to slip,” says Miyazaki. “The car following me also got stuck, and then the one behind it spontaneously bumped into it and started pushing it up the hill... That’s it! That’s how everyone can get home! Then it was my turn and everyone started pushing my car up the hill, and I managed to get home safely.”

“But I couldn’t stop the car to say thanks to the people who gave me a shove. I’d have just got stuck again if I’d stopped. On the way back home I wondered whether the last person in the line had made it home, and thought that I would probably never meet the people who had helped me. I thought that maybe if we’d met in another place we’d become friends, or maybe we’d just fight...”

“You could probably call it a connection of mutual assistance between transient people. Oddly, that incident will probably linger in my heart for a long time.”

The multiplayer experiences that I’ve had in Dark Souls and Elden Ring definitely do linger in my heart. I’ve also absolutely loved the moments in FromSoftware games in which I’ve personally conquered a difficult section all by myself. But I look back with equal appreciation on the times when I summoned a complete stranger to help me with something — “a connection of mutual assistance between transient people,” as Miyazaki put it. It is how these games are meant to be played, not as brutal solo journeys but as shared experiences.

Here’s a screenshot I took of my Elden Ring character at the beginning of the game, before I knew I was going to spend 360 hours playing it. Image: FromSoftware via Polygon
This brings us back to Elden Ring Nightreign, a game not directed by Miyazaki but by Junya Ishizaki. The difference in its multiplayer ethos is stark. This is a game designed with three-player squads in mind; it’s currently very punishing for solo players (although an upcoming patch aims to fix some of that), and the designers are still working on a duos mode. Because it’s three-player by default, I assumed that the game would be designed around teamwork and would actively reward prosocial behaviors, like base Elden Ring. I would argue that it’s not, and that’s why it’s very hard to have a good time in the game — especially if you’re playing with complete strangers.

Problem number one: There’s no in-game communication system besides pinging certain locations on the map. Lack of chat options is a FromSoftware classic, and in most of these games, you don’t really need communication to understand what to do. Usually, you’re just summoned to help with a boss battle, and after it’s over, you’re done and you go back to your game. But in Nightreign, it’s three-player for the entire game, and it’s a match-based game, not a hundreds-of-hours RPG. Matches last 45 minutes and every second counts, which means you and your teammates need to be extremely organized throughout. The lack of communication hurts. But that’s not the only problem. Far from it.

Problem number two: The ring of fire. This game is a combination of Elden Ring’s open-world areas (which encourage slow, methodical exploration) and a Fortnite-esque ring of fire that closes in on you constantly (which means you absolutely shouldn’t be doing any slow, methodical exploration). There’s also a Diablo-esque loot system, but you’d better read those loot descriptions fast, because the fire is coming for you. There are randomized boss fights all over the map, but oops, you might not be able to complete them in time to collect runes from them, because that fire is closing in. There are also special upgrades that you can only get if you defeat these mid-game bosses all over the map, but you might barely even have time to read those descriptions of the special abilities and select one in time for… you guessed it… the fire rushing towards you.

This second problem becomes even more stressful when you have two other people on your team alongside you. This game has not one but two different sprint buttons in it — a regular sprint, and a super-fast sprint that uses up stamina faster. That’s because, of course, you need to be running from that fire. But that means your teammates, and you, need to constantly be doing the equivalent of screaming “move, move, move” like a drill sergeant in an army movie. You will inevitably find yourself getting annoyed at the teammate who is spending too damn long looking at loot on the ground or at an upgrade tree. The fire is coming! Hurry the fuck up! Again, this is not a game design choice that rewards prosocial behaviors; instead, it makes you feel dragged down by the two teammates that you also desperately need to survive the bosses in this game. Even the “revive” process involves you inflicting damage on your teammate to bring them back to life (rather than a revive button or item), which is darkly hilarious, because you might also grow to desire hitting them due to how annoyed you might feel that they died during a super difficult fight. Which brings us to the third and final problem.
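(A quick aside before the third problem: if you want to see why that loop feels so merciless, here’s a back-of-the-envelope sketch of the shrinking-zone math a battle royale runs on. It’s purely illustrative; the numbers and names are invented, not taken from Nightreign.)

```python
import math
from dataclasses import dataclass


@dataclass
class Zone:
    """Hypothetical battle-royale safe zone; it only ever shrinks."""
    center: tuple[float, float]
    radius: float        # current safe radius
    shrink_rate: float   # radius lost per second

    def seconds_until_unsafe(self, pos: tuple[float, float]) -> float:
        """How long someone standing still at `pos` has before the fire arrives."""
        dist = math.hypot(pos[0] - self.center[0], pos[1] - self.center[1])
        return max(0.0, (self.radius - dist) / self.shrink_rate)


zone = Zone(center=(0.0, 0.0), radius=500.0, shrink_rate=2.5)  # made-up values
loot_spot = (430.0, 120.0)
# Every second spent comparing weapons here is spent from this budget:
print(f"{zone.seconds_until_unsafe(loot_spot):.0f}s before this spot burns")
```

The design’s whole point is that the budget printed on that last line never stops shrinking, and it’s the same budget for all three players at once.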
Image: FromSoftware

Third problem: The randomization of the bosses and of the items. The thing about base Elden Ring is that you can figure out a boss and how it works (is it weak to fire? Holy damage? And so on) and then patiently build up a character who can deal with that problem. You can memorize that boss’ attack patterns. You can find a save point nearest to that boss and run it back over and over again until you get past it. These are all of the wonderful and rewarding parts of playing FromSoftware video games; these are also the moments when you might do all of those preparations and then think, “Actually, I want to also summon a complete stranger to help me with this boss because it’s still too freaking hard.” And then you can do that, too. None of that is the case in Nightreign, because everything is completely fucking random.

The bosses, except for the very last boss in each area, are random. The loot is random. Do you have the right loot to fight the boss you’re facing right this second? You may very well not. Do your teammates have it? You might not even know; you don’t have a way to communicate with them, after all. Is the boss in this area way overleveled for you and your team? It won’t be obvious until you start hitting it, and once you do that, good luck escaping. And if your team does a complete wipe and everyone dies to that boss together, you don’t get to run back together from the nearest save point, having seen its attack patterns, ready to try again with teamwork in mind. Nope, instead you get to start all over again, except now with new randomized bosses and new randomized loot.

In other games with randomized loot, like Diablo, or other roguelikes with random elements like Hades, the game is designed with down time in mind. When you’ve completed a fight in Diablo or Hades, you have infinite time to stand around and make decisions. There is no encroaching circle of fire forcing you to read item descriptions and ability trees quickly. There’s a reason for that; the decision-making is the most fun part of a game with randomized elements. Why would Nightreign take that away?
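To see the structural difference in miniature, here’s a hedged sketch of the generic roguelike reroll pattern the last few paragraphs describe. Every name below is invented, and this is the textbook version of the idea, not FromSoftware’s implementation.

```python
import random

# Hypothetical pools, stand-ins rather than Nightreign's actual boss or item lists.
BOSS_POOL = ["boss_a", "boss_b", "boss_c", "boss_d", "boss_e"]
LOOT_POOL = ["fire sword", "holy seal", "lightning spear", "frost staff"]


def roll_run(seed: int, areas: int = 3) -> dict:
    """One seed deterministically generates the whole expedition."""
    rng = random.Random(seed)
    return {
        "bosses": rng.sample(BOSS_POOL, k=areas),
        "loot": [rng.choice(LOOT_POOL) for _ in range(areas * 4)],
    }


first_try = roll_run(seed=1234)
# Your team wipes. Base Elden Ring sends you back to the same boss with
# everything you just learned; a run-based structure rolls a fresh seed,
# so the bosses and the loot you prepared against are simply gone.
second_try = roll_run(seed=5678)
print(first_try["bosses"], "->", second_try["bosses"])
```

In base Elden Ring, the equivalent of the seed never changes between attempts, which is exactly why practice and preparation pay off there and don’t here.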
All of these aspects of the game do feel less bad if you’re playing with two good friends on voice chat. But even in that scenario, the game is still really punishing, and again, not in a way that other FromSoftware games are punishing. It’s punishing because you need to spend the entire game running, looking at randomized loot as fast as you possibly can before making a snap (possibly bad) decision, running more, desperately encouraging your teammates to keep on running to keep up, warning your teammates about the encroaching flames about to kill them, and did I mention running? Is this a fun way to spend your weekly gamer night with two other adults who just worked a full-time job all day and maybe just wanted to have a nice time playing a video game together?

Image: FromSoftware/Bandai Namco

I’ve had a review code for Nightreign for a while now, so I was already worried about these problems before the game launched. But now that it’s launched and I’m seeing early mixed reviews on Steam, I’m ready to commiserate and validate: Yes, this game really doesn’t feel like Elden Ring, and even after some of this stuff gets patched, it’s still fundamentally super different. And that’s not only because it’s multiplayer, but because the multiplayer just doesn’t feel like other multiplayer FromSoftware experiences. It feels like it’s designed not only for people who have two best friends with whom they play competitive games on a regular basis, but also specifically for people who live for thrills and speed — not the methodical, calculated experiences of other FromSoftware games.

For all of those reasons, I’m really not sure how this is going to go for FromSoftware over time. Is this game going to eventually encourage some prosocial behaviors amongst players, against all odds? Will people slowly learn the best ways to get through different areas? Will there be a “meta” for working together that emerges over time?

It seems possible, and since it’s only been one day, it’s way too early to tell. Various social norms will emerge in the player community, and hopefully they won’t be toxic ones. But I can tell from having already played the game that this is going to be an uphill climb for FromSoftware fans. It’s a very different game — and its specific form of difficulty is going to be a whole new variety for those fans to get used to. And like me, they might just decide they don’t really care for it.