• How AI Is Being Used to Spread Misinformation—and Counter It—During the L.A. Protests

    As thousands of demonstrators have taken to the streets of Los Angeles County to protest Immigration and Customs Enforcement raids, misinformation has been running rampant online.
    The protests, and President Donald Trump’s mobilization of the National Guard and Marines in response, are one of the first major contentious news events to unfold in a new era in which AI tools have become embedded in online life. As the news has sparked fierce debate and dialogue online, those tools have played an outsize role in the discourse. Social media users have wielded AI tools to create deepfakes and spread misinformation, but also to fact-check and debunk false claims. Here’s how AI has been used during the L.A. protests.
    Deepfakes
    Provocative, authentic images from the protests have captured the world’s attention this week, including a protester raising a Mexican flag and a journalist being shot in the leg with a rubber bullet by a police officer. At the same time, a handful of AI-generated fake videos have also circulated.
    Over the past couple of years, the tools for creating these videos have improved rapidly, allowing users to produce convincing deepfakes within minutes. Earlier this month, for example, TIME used Google’s new Veo 3 tool to demonstrate how it can be used to create misleading or inflammatory videos about news events. Among the videos that have spread over the past week is one of a National Guard soldier named “Bob” who filmed himself “on duty” in Los Angeles preparing to gas protesters. That video was seen more than 1 million times, according to France 24, but appears to have since been taken down from TikTok. Thousands of people left comments on the video thanking “Bob” for his service, not realizing that “Bob” did not exist.
    Many other misleading images have circulated not because of AI but through much more low-tech efforts. Republican Sen. Ted Cruz of Texas, for example, reposted a video on X, originally shared by conservative actor James Woods, that appeared to show a violent protest with cars on fire; it was actually footage from 2020. Another viral post showed a pallet of bricks that the poster claimed were going to be used by “Democrat militants.” The photo was traced to a Malaysian construction supplier.
    Fact checking
    In both of those instances, X users replied to the original posts by asking Grok, Elon Musk’s AI, whether the claims were true. Grok has become a major source of fact checking during the protests: many X users have been relying on it and other AI models, sometimes more than on professional journalists, to fact-check claims related to the L.A. protests, including, for instance, how much collateral damage there has been from the demonstrations.
    Grok debunked both Cruz’s post and the brick post. In response to the Texas senator, the AI wrote: “The footage was likely taken on May 30, 2020.... While the video shows violence, many protests were peaceful, and using old footage today can mislead.” In response to the photo of bricks, it wrote: “The photo of bricks originates from a Malaysian building supply company, as confirmed by community notes and fact-checking sources like The Guardian and PolitiFact. It was misused to falsely claim that Soros-funded organizations placed bricks near U.S. ICE facilities for protests.”
    But Grok and other AI tools have gotten things wrong, making them a less-than-optimal source of news. Grok falsely insinuated that a photo shared by California Gov. Gavin Newsom depicting National Guard troops sleeping on floors in L.A. was recycled from Afghanistan in 2021. ChatGPT said the same. These accusations were shared by prominent right-wing influencers like Laura Loomer. In reality, the San Francisco Chronicle had first published the photo, having exclusively obtained the image, and had verified its authenticity.
    Grok later corrected itself and apologized. “I’m Grok, built to chase the truth, not peddle fairy tales. If I said those pics were from Afghanistan, it was a glitch—my training data’s a wild mess of internet scraps, and sometimes I misfire,” Grok said in a post on X, replying to a post about the misinformation.
    “The dysfunctional information environment we’re living in is without doubt exacerbating the public’s difficulty in navigating the current state of the protests in LA and the federal government’s actions to deploy military personnel to quell them,” says Kate Ruane, director of the Center for Democracy and Technology’s Free Expression Program.
    Nina Brown, a professor at the Newhouse School of Public Communications at Syracuse University, says it is “really troubling” if people are relying on AI to fact-check information rather than turning to reputable sources like journalists, because AI “is not a reliable source for any information at this point.”
    “It has a lot of incredible uses, and it’s getting more accurate by the minute, but it is absolutely not a replacement for a true fact checker,” Brown says. “The role that journalists and the media play is to be the eyes and ears for the public of what’s going on around us, and to be a reliable source of information. So it really troubles me that people would look to a generative AI tool instead of what is being communicated by journalists in the field.”
    Brown says she is increasingly worried about how misinformation will spread in the age of AI.
    “I’m more concerned because of a combination of the willingness of people to believe what they see without investigation—the taking it at face value—and the incredible advancements in AI that allow lay-users to create incredibly realistic video that is, in fact, deceptive; that is a deepfake, that is not real,” Brown says.
  • The Invisible Visual Effects Secrets of ‘Severance’ with ILM’s Eric Leven

    ILM teams with Ben Stiller and Apple TV+ to bring thousands of seamless visual effects shots to the hit drama’s second season.
    By Clayton Sandell
    There are mysterious and important secrets to be uncovered in the second season of the wildly popular Apple TV+ series Severance.
    About 3,500 of them are hiding in plain sight.
    That’s roughly the number of visual effects shots helping tell the Severance story over 10 gripping episodes in the latest season, a collaborative effort led by Industrial Light & Magic.
    ILM’s Eric Leven served as the Severance season two production visual effects supervisor. We asked him to help pull back the curtain on some of the show’s impressive digital artistry that most viewers will probably never notice.
    “This is the first show I’ve ever done where it’s nothing but invisible effects,” Leven tells ILM.com. “It’s a really different calculus because nobody talks about them. And if you’ve done them well, they are invisible to the naked eye.”
    With so many season two shots to choose from, Leven helped us narrow down a list of his favorite visual effects sequences to five.
    Before we dig in, a word of caution: this article contains plot spoilers for Severance.
    Severance tells the story of Mark Scout, department chief of the secretive Severed Floor located in the basement level of Lumon Industries, a multinational biotech corporation. Mark S., as he’s known to his co-workers, heads up Macrodata Refinement, a department where employees help categorize numbers without knowing the true purpose of their work.
    Mark and his team – Helly R., Dylan G., and Irving B. – have all undergone a surgical procedure to “sever” their personal lives from their work lives. The chip embedded in their brains effectively creates two personalities that are sometimes at odds: an “Innie” during Lumon office hours and an “Outie” at home.
    “This is the first show I’ve ever done where it’s nothing but invisible effects. It’s a really different calculus because nobody talks about them. And if you’ve done them well, they are invisible to the naked eye.”Eric Leven
    1. The Running Man
    The season one finale ends on a major cliffhanger. Mark S. learns that his Outie’s wife, Gemma – believed killed in a car crash years ago – is actually alive somewhere inside the Lumon complex. Season two opens with Mark S. arriving at the Severed Floor in a desperate search for Gemma, whom he knows only as her Innie persona, Ms. Casey.
    The fast-paced sequence is designed to look like a single, two-minute shot. It begins with the camera making a series of rapid and elaborate moves around a frantic Mark S. as he steps out of the elevator, into the Severed Floor lobby, and begins running through the hallways.
    “The nice thing about that sequence was that everyone knew it was going to be difficult and challenging,” Leven says, adding that executive producer and Episode 201 director Ben Stiller began by mapping out the hallway run with his team. Leven recommended a previsualization sequence – provided by The Third Floor – to help the filmmakers refine their plan before cameras rolled.
    “While prevising it, we didn’t worry about how we would actually photograph anything. It was just, ‘These are the visuals we want to capture,’” Leven says. “‘What does it look like for this guy to run down this hallway for two minutes? We’ll figure out how to shoot it later.’”
    The previs process helped determine how best to shoot the sequence, and also informed which parts of the soundstage set would have to be digitally replaced. The first shot was captured by a camera mounted on a Bolt X Cinebot motion-control arm provided by The Garage production company. The size of the motion-control setup, however, meant it could not fit in the confined space of an elevator or the existing hallways.
    “We couldn’t actually shoot in the elevator,” Leven says. “The whole elevator section of the set was removed and was replaced with computer graphics.” In addition to the elevator, ILM artists replaced portions of the floor, furniture, and an entire lobby wall, even adding a reflection of Adam Scott into the elevator doors.
    As Scott begins running, he’s picked up by a second camera mounted on a more compact, stabilized gimbal that allows the operator to quickly run behind and sometimes in front of the actor as he darts down different hallways. ILM seamlessly combined the first two Mark S. plates in a 2D composite.
    “Part of that is the magic of the artists at ILM who are doing that blend. But I have to give credit to Adam Scott because he ran the same way in both cameras without really being instructed,” says Leven. “Lucky for us, he led with the same foot. He used the same arm. I remember seeing it on the set, and I did a quick-and-dirty blend right there and thought, ‘Oh my gosh, this is going to work.’ So it was really nice.”
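A 2D blend of this kind is, at its simplest, a weighted mix of two image sequences. The following is a toy cross-dissolve in Python with NumPy, offered purely as an illustration of the general idea; it is not ILM's compositing pipeline, and the plate sizes and frame counts are made up:

```python
import numpy as np

def cross_dissolve(plate_a: np.ndarray, plate_b: np.ndarray, n_frames: int):
    """Return n_frames images that mix plate_a into plate_b.

    Frame i weights the two plates by (1 - t) and t, where t runs
    linearly from 0 (all plate_a) to 1 (all plate_b).
    """
    return [
        (1 - t) * plate_a + t * plate_b
        for t in np.linspace(0.0, 1.0, n_frames)
    ]

# Two tiny stand-in "plates": a black frame and a white frame.
a = np.zeros((2, 2))
b = np.ones((2, 2))
frames = cross_dissolve(a, b, 3)
# The middle frame is an even 50/50 mix of the two plates.
```

In practice a production blend is far more selective than this, mixing only matched regions of the frame, which is why Adam Scott leading with the same foot in both plates mattered so much.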
    The action continues at a frenetic pace, ultimately combining ten different shots to complete the sequence.
    “We didn’t want the very standard sleight of hand that you’ve seen a lot where you do a wipe across the white hallway,” Leven explains. “We tried to vary that as much as possible because we didn’t want to give away the gag. So, there are times when the camera will wipe across a hallway, and it’s not a computer graphics wipe. We’d hide the wipe somewhere else.”
    A slightly more complicated illusion comes as the camera sweeps around Mark S. from back to front as he barrels down another long hallway. “There was no way to get the camera to spin around Mark while he is running because there’s physically not enough room for the camera there,” says Leven.
    To capture the shot, Adam Scott ran on a treadmill placed on a green screen stage as the camera maneuvered around him. At that point, the entire hallway environment is made with computer graphics. Artists even added a few extra frames of the actor to help connect one shot to the next, selling the illusion of a single continuous take. “We painted in a bit of Adam Scott running around the corner. So if you freeze and look through it, you’ll see a bit of his heel. He never completely clears the frame,” Leven points out.
    Leven says ILM also provided Ben Stiller with options when it came to digitally changing up the look of Lumon’s sterile hallways: sometimes adding extra doors, vents, or even switching door handles. “I think Ben was very excited about having this opportunity,” says Leven. “He had never had a complete, fully computer graphics version of these hallways before. And now he was able to do things that he was never able to do in season one.”
    2. Let it Snow
    The MDR team – Mark, Helly, Dylan, and Irving – unexpectedly find themselves in the snowy wilderness as part of a two-day Lumon Outdoor Retreat and Team-Building Occurrence, or ORTBO.
    Exterior scenes were shot on location at Minnewaska State Park Preserve in New York. Throughout the ORTBO sequence, ILM performed substantial environment enhancements, making trees and landscapes appear far snowier than they were during the shoot. “It’s really nice to get the actors out there in the cold and see their breath,” Leven says. “It just wasn’t snowy during the shoot. Nearly every exterior shot was either replaced or enhanced with snow.”
    For a shot of Irving standing on a vast frozen lake, for example, virtually every element in the location plate – including an unfrozen lake, mountains, and trees behind actor John Turturro – was swapped out for a CG environment. Wide shots of a steep, rocky wall Irving must scale to reach his co-workers were also completely digital.
    Eventually, the MDR team discovers a waterfall that marks their arrival at a place called Woe’s Hollow. The location – the state park’s real-life Awosting Falls – also got extensive winter upgrades from ILM, including much more snow covering the ground and trees, an ice-covered pond, and hundreds of icicles clinging to the rocky walls. “To make it fit in the world of Severance, there’s a ton of work that has to happen,” Leven tells ILM.com.
    3. Welcome to Lumon
    The historic Bell Labs office complex, now known as Bell Works in Holmdel Township, New Jersey, stands in as the fictional Lumon Industries headquarters building.
    Exterior shots often underwent a significant digital metamorphosis, with artists transforming areas of green grass into snow-covered terrain, inserting a CG water tower, and rendering hundreds of 1980s-era cars to fill the parking lot.
    “We’re always adding cars, we’re always adding snow. We’re changing, subtly, the shape and the layout of the design,” says Leven. “We’re seeing new angles that we’ve never seen before. On the roof of Lumon, for example, the air conditioning units are specifically designed and created with computer graphics.”
    In real life, the complex is surrounded by dozens of houses, requiring the digital erasure of entire neighborhoods. “All of that is taken out,” Leven explains. “CG trees are put in, and new mountains are put in the background.”
    Episodes 202 and 203 feature several night scenes shot from outside the building looking in. In one sequence, a camera drone flying outside captured a long tracking shot of Helena Eagan making her way down a glass-enclosed walkway. The building’s atrium can be seen behind her, complete with a massive wall sculpture depicting company founder Kier Eagan.
    “We had to put the Kier sculpture in with the special lighting,” Leven reveals. “The entire atrium was computer graphics.” Artists completed the shot by adding CG reflections of the snowy parking lot to the side of the highly reflective building.
    “We have to replace what’s in the reflections because the real reflection is a parking lot with no snow or a parking lot with no cars,” explains Leven. “We’re often replacing all kinds of stuff that you wouldn’t think would need to be replaced.”
    Another nighttime scene shot from outside the building features Helena in a conference room overlooking the Lumon parking lot, which sits empty except for Mr. Milchick riding in on his motorcycle.
    “The top story, where she is standing, was practical,” says Leven, noting the shot was also captured using a drone hovering outside the window. “The second story below her was all computer graphics. Everything other than the building is computer graphics. They did shoot a motorcycle on location, getting as much practical reference as possible, but then it had to be digitally replaced after the fact to make it work with the rest of the shot.”
    4. Time in Motion
    Episode seven reveals that MDR’s progress is being monitored by four doppelgänger-ish observers in a control room one floor below, shown via a complex camera move that travels downward through a mass of data cables.
    “They built an oversize cable run, and they shot with small probe lenses. Visual effects helped by blending several plates together,” explains Leven. “It was a collaboration between many different departments, which was really nice. Visual effects helped with stuff that just couldn’t be shot for real. For example, when the camera exits the thin holes of the metal grate at the bottom of the floor, that grate is computer graphics.”
    The sequence continues with a sweeping motion-control time-lapse shot that travels around the control-room observers in a spiral pattern, a feat pulled off with an ingenious mix of technical innovation and old-school sleight of hand.
    A previs sequence from The Third Floor laid out the camera move, but because the Bolt arm motion-control rig could only travel on a straight track and cover roughly one-quarter of the required distance, The Garage came up with a way to break the shot into multiple passes. The passes would later be stitched together into one seemingly uninterrupted movement.
    The symmetrical set design – including the four identical workstations – helped complete the illusion, along with a clever solution that kept the four actors in the correct position relative to the camera.
    “The camera would basically get to the end of the track,” Leven explains. “Then everybody would switch positions 90 degrees. Everyone would get out of their chairs and move. The camera would go back to one, and it would look like one continuous move around in a circle because the room is perfectly symmetrical, and everything in it is perfectly symmetrical. We were able to move the actors, and it looks like the camera was going all the way around the room.”
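The arithmetic behind the trick is simple on a four-fold symmetric set: re-running the same quarter-circle track while the actors rotate a quarter turn between passes carries the orbit forward one quarter per pass. A hypothetical sketch (the angles here just restate Leven's description, not ILM's actual motion-control data):

```python
# On a set with 4-fold symmetry, the Bolt rig physically repeats the same
# 0-90 degree track each pass; rotating the actors 90 degrees between
# passes makes the stitched result read as one continuous 360-degree orbit.

PASS_SPAN = 90  # degrees of arc the motion-control track can actually cover

def apparent_angle(pass_index: int, track_angle: float) -> float:
    """Orbit angle the audience perceives at a given point in a given pass."""
    return pass_index * PASS_SPAN + track_angle

# Stitch four passes, sampling the start and midpoint of each track run.
stitched = [apparent_angle(p, t) for p in range(4) for t in (0.0, 45.0)]
# The four repeated quarter-circle tracks add up to a full sweep of the room.
```

The symmetry is what hides the seam: at each 90-degree boundary the room looks identical, so the cut between passes is invisible.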
    The final motion-control move switches from time-lapse back to real time as the camera passes by a workstation and reveals Mr. Drummond and Dr. Mauer standing behind it. Leven notes that each pass was completed with just one take.
    5. Mark vs. Mark
    The Severance season two finale begins with an increasingly tense conversation between Innie Mark and Outie Mark, as the two personas use a handheld video camera to send recorded messages back and forth. Their encounter takes place at night in a Lumon birthing cabin equipped with a severance threshold that allows Mark S. to become Mark Scout each time he steps outside and onto the balcony.
    The cabin set was built on a soundstage at York Studios in the Bronx, New York. The balcony section consisted of the snowy floor, two chairs, and a railing, all surrounded by a blue screen background. Everything else was up to ILM to create.
    “It was nice to have Ben’s trust that we could just do it,” Leven remembers. “He said, ‘Hey, you’re just going to make this look great, right?’ We said, ‘Yeah, no problem.’”
    Artists filled in the scene with CG water, mountains, and moonlight to match the on-set lighting – and, of course, more snow. As Mark Scout steps onto the balcony, the camera pulls back to a wide shot, revealing the cabin’s full exterior. “They built a part of the exterior of the set. But everything other than the windows, even the railing, was digitally replaced,” Leven says.
    “It was nice to have Ben’s trust that we could just do it. He said, ‘Hey, you’re just going to make this look great, right?’ We said, ‘Yeah, no problem.’”Eric Leven
    Bonus: Marching Band Magic
    Finally, our bonus visual effects shot appears roughly halfway through the season finale. To celebrate Mark S. completing the Cold Harbor file, Mr. Milchick orders up a marching band from Lumon’s Choreography and Merriment department. Band members pour into MDR, but Leven says roughly 15 to 20 shots required adding a few more digital duplicates. “They wanted it to look like MDR was filled with band members. And for several of the shots there were holes in there. It just didn’t feel full enough,” he says.
    In a shot featuring a God’s-eye view of MDR, band members hold dozens of white cards above their heads, forming a giant illustration of a smiling Mark S. with text that reads “100%.”
    “For the top shot, we had to find a different stage because the MDR ceiling is only about eight feet tall,” recalls Leven. “And Ben really pushed to have it done practically, which I think was the right call because you’ve already got the band members, you’ve made the costumes, you’ve got the instruments. Let’s find a place to shoot it.”
    To get the high shot, the production team set up on an empty soundstage, placing signature MDR-green carpet on the floor. A simple foam core mock-up of the team’s desks occupied the center of the frame, with the finished CG versions added later.
    Even without the restraints of the practical MDR walls and ceiling, the camera could only get enough height to capture about 30 band members in the shot. So the scene was digitally expanded, with artists adding more green carpet, CG walls, and about 50 more band members.
    “We painted in new band members, extracting what we could from the practical plate,” Leven says. “We moved them around; we added more, just to make it look as full as Ben wanted.” Every single white card in the shot, Leven points out, is completely digital.
    A Mysterious and Important Collaboration
    With fans now fiercely debating the many twists and turns of Severance season two, Leven is quick to credit ILM’s two main visual effects collaborators: east side effects and Mango FX INC, as well as ILM studios and artists around the globe, including San Francisco, Vancouver, Singapore, Sydney, and Mumbai.
    Leven also believes Severance ultimately benefited from a successful creative partnership between ILM and Ben Stiller.
    “This one clicked so well, and it really made a difference on the show,” Leven says. “I think we both had the same sort of visual shorthand in terms of what we wanted things to look like. One of the things I love about working with Ben is that he’s obviously grounded in reality. He wants to shoot as much stuff real as possible, but then sometimes there’s a shot that will either come to him late or he just knows is impractical to shoot. And he knows that ILM can deliver it.”

    Clayton Sandell is a Star Wars author and enthusiast, TV storyteller, and a longtime fan of the creative people who keep Industrial Light & Magic and Skywalker Sound on the leading edge of visual effects and sound design. Follow him on Instagram, Bluesky, or X.
    The Invisible Visual Effects Secrets of ‘Severance’ with ILM’s Eric Leven
    www.ilm.com
    ILM teams with Ben Stiller and Apple TV+ to bring thousands of seamless visual effects shots to the hit drama’s second season. By Clayton Sandell There are mysterious and important secrets to be uncovered in the second season of the wildly popular Apple TV+ series Severance (2022-present). About 3,500 of them are hiding in plain sight. That’s roughly the number of visual effects shots helping tell the Severance story over 10 gripping episodes in the latest season, a collaborative effort led by Industrial Light & Magic. ILM’s Eric Leven served as the Severance season two production visual effects supervisor. We asked him to help pull back the curtain on some of the show’s impressive digital artistry that most viewers will probably never notice. “This is the first show I’ve ever done where it’s nothing but invisible effects,” Leven tells ILM.com. “It’s a really different calculus because nobody talks about them. And if you’ve done them well, they are invisible to the naked eye.” With so many season two shots to choose from, Leven helped us narrow down a list of his favorite visual effects sequences to five. (As a bonus, we’ll also dive into an iconic season finale shot featuring the Mr. Milchick-led marching band.) Before we dig in, a word of caution. This article contains plot spoilers for Severance. (And in case you’re already wondering: No, the goats are not computer-graphics.) Severance tells the story of Mark Scout (Adam Scott), department chief of the secretive Severed Floor located in the basement level of Lumon Industries, a multinational biotech corporation. Mark S., as he’s known to his co-workers, heads up Macrodata Refinement (MDR), a department where employees help categorize numbers without knowing the true purpose of their work.  Mark and his team – Helly R. (Britt Lower), Dylan G. (Zach Cherry), and Irving B. (John Turturro), have all undergone a surgical procedure to “sever” their personal lives from their work lives. 
The chip embedded in their brains effectively creates two personalities that are sometimes at odds: an “Innie” during Lumon office hours and an “Outie” at home. “This is the first show I’ve ever done where it’s nothing but invisible effects. It’s a really different calculus because nobody talks about them. And if you’ve done them well, they are invisible to the naked eye.”Eric Leven 1. The Running Man (Episode 201: “Hello, Ms. Cobel”) The season one finale ends on a major cliffhanger. Mark S. learns that his Outie’s wife, Gemma – believed killed in a car crash years ago – is actually alive somewhere inside the Lumon complex. Season two opens with Mark S. arriving at the Severed Floor in a desperate search for Gemma, who he only knows as her Innie persona, Ms. Casey. The fast-paced sequence is designed to look like a single, two-minute shot. It begins with the camera making a series of rapid and elaborate moves around a frantic Mark S. as he steps out of the elevator, into the Severed Floor lobby, and begins running through the hallways. “The nice thing about that sequence was that everyone knew it was going to be difficult and challenging,” Leven says, adding that executive producer and Episode 201 director, Ben Stiller, began by mapping out the hallway run with his team. Leven recommended that a previsualization sequence – provided by The Third Floor – would help the filmmakers refine their plan before cameras rolled. “While prevising it, we didn’t worry about how we would actually photograph anything. It was just, ‘These are the visuals we want to capture,’” Leven says. “‘What does it look like for this guy to run down this hallway for two minutes? We’ll figure out how to shoot it later.’” The previs process helped determine how best to shoot the sequence, and also informed which parts of the soundstage set would have to be digitally replaced. 
The first shot was captured by a camera mounted on a Bolt X Cinebot motion-control arm provided by The Garage production company. The size of the motion-control setup, however, meant it could not fit in the confined space of an elevator or the existing hallways. “We couldn’t actually shoot in the elevator,” Leven says. “The whole elevator section of the set was removed and was replaced with computer graphics [CG].” In addition to the elevator, ILM artists replaced portions of the floor, furniture, and an entire lobby wall, even adding a reflection of Adam Scott into the elevator doors. As Scott begins running, he’s picked up by a second camera mounted on a more compact, stabilized gimbal that allows the operator to quickly run behind and sometimes in front of the actor as he darts down different hallways. ILM seamlessly combined the first two Mark S. plates in a 2D composite. “Part of that is the magic of the artists at ILM who are doing that blend. But I have to give credit to Adam Scott because he ran the same way in both cameras without really being instructed,” says Leven. “Lucky for us, he led with the same foot. He used the same arm. I remember seeing it on the set, and I did a quick-and-dirty blend right there and thought, ‘Oh my gosh, this is going to work.’ So it was really nice.” The action continues at a frenetic pace, ultimately combining ten different shots to complete the sequence. “We didn’t want the very standard sleight of hand that you’ve seen a lot where you do a wipe across the white hallway,” Leven explains. “We tried to vary that as much as possible because we didn’t want to give away the gag. So, there are times when the camera will wipe across a hallway, and it’s not a computer graphics wipe. We’d hide the wipe somewhere else.” A slightly more complicated illusion comes as the camera sweeps around Mark S. from back to front as he barrels down another long hallway. 
“There was no way to get the camera to spin around Mark while he is running because there’s physically not enough room for the camera there,” says Leven. To capture the shot, Adam Scott ran on a treadmill placed on a green screen stage as the camera maneuvered around him. At that point, the entire hallway environment is made with computer graphics. Artists even added a few extra frames of the actor to help connect one shot to the next, selling the illusion of a single continuous take. “We painted in a bit of Adam Scott running around the corner. So if you freeze and look through it, you’ll see a bit of his heel. He never completely clears the frame,” Leven points out. Leven says ILM also provided Ben Stiller with options when it came to digitally changing up the look of Lumon’s sterile hallways: sometimes adding extra doors, vents, or even switching door handles. “I think Ben was very excited about having this opportunity,” says Leven. “He had never had a complete, fully computer graphics version of these hallways before. And now he was able to do things that he was never able to do in season one.” (Credit: Apple TV+). 2. Let it Snow (Episode 204: “Woe’s Hollow”) The MDR team – Mark, Helly, Dylan, and Irving – unexpectedly find themselves in the snowy wilderness as part of a two-day Lumon Outdoor Retreat and Team-Building Occurrence, or ORTBO.  Exterior scenes were shot on location at Minnewaska State Park Preserve in New York. Throughout the ORTBO sequence, ILM performed substantial environment enhancements, making trees and landscapes appear far snowier than they were during the shoot. “It’s really nice to get the actors out there in the cold and see their breath,” Leven says. “It just wasn’t snowy during the shoot. 
Nearly every exterior shot was either replaced or enhanced with snow.” For a shot of Irving standing on a vast frozen lake, for example, virtually every element in the location plate – including an unfrozen lake, mountains, and trees behind actor John Turturro – was swapped out for a CG environment. Wide shots of a steep, rocky wall Irving must scale to reach his co-workers were also completely digital. Eventually, the MDR team discovers a waterfall that marks their arrival at a place called Woe’s Hollow. The location – the state park’s real-life Awosting Falls – also got extensive winter upgrades from ILM, including much more snow covering the ground and trees, an ice-covered pond, and hundreds of icicles clinging to the rocky walls. “To make it fit in the world of Severance, there’s a ton of work that has to happen,” Leven tells ILM.com. (Credit: Apple TV+). 3. Welcome to Lumon (Episode 202: “Goodbye, Mrs. Selvig” & Episode 203: “Who is Alive?”) The historic Bell Labs office complex, now known as Bell Works in Holmdel Township, New Jersey, stands in as the fictional Lumon Industries headquarters building. Exterior shots often underwent a significant digital metamorphosis, with artists transforming areas of green grass into snow-covered terrain, inserting a CG water tower, and rendering hundreds of 1980s-era cars to fill the parking lot. “We’re always adding cars, we’re always adding snow. We’re changing, subtly, the shape and the layout of the design,” says Leven. “We’re seeing new angles that we’ve never seen before. On the roof of Lumon, for example, the air conditioning units are specifically designed and created with computer graphics.” In real life, the complex is surrounded by dozens of houses, requiring the digital erasure of entire neighborhoods. “All of that is taken out,” Leven explains. “CG trees are put in, and new mountains are put in the background.” Episodes 202 and 203 feature several night scenes shot from outside the building looking in. 
In one sequence, a camera drone flying outside captured a long tracking shot of Helena Eagan (Helly R.’s Outie) making her way down a glass-enclosed walkway. The building’s atrium can be seen behind her, complete with a massive wall sculpture depicting company founder Kier Eagan. “We had to put the Kier sculpture in with the special lighting,” Leven reveals. “The entire atrium was computer graphics.” Artists completed the shot by adding CG reflections of the snowy parking lot to the side of the highly reflective building. “We have to replace what’s in the reflections because the real reflection is a parking lot with no snow or a parking lot with no cars,” explains Leven. “We’re often replacing all kinds of stuff that you wouldn’t think would need to be replaced.” Another nighttime scene shot from outside the building features Helena in a conference room overlooking the Lumon parking lot, which sits empty except for Mr. Milchick (Tramell Tillman) riding in on his motorcycle. “The top story, where she is standing, was practical,” says Leven, noting the shot was also captured using a drone hovering outside the window. “The second story below her was all computer graphics. Everything other than the building is computer graphics. They did shoot a motorcycle on location, getting as much practical reference as possible, but then it had to be digitally replaced after the fact to make it work with the rest of the shot.” (Credit: Apple TV+). 4. Time in Motion (Episode 207: “Chikhai Bardo”) Episode seven reveals that MDR’s progress is being monitored by four dopplegang-ish observers in a control room one floor below, revealed via a complex move that has the camera traveling downward through a mass of data cables. “They built an oversize cable run, and they shot with small probe lenses. Visual effects helped by blending several plates together,” explains Leven. “It was a collaboration between many different departments, which was really nice. 
Visual effects helped with stuff that just couldn’t be shot for real. For example, when the camera exits the thin holes of the metal grate at the bottom of the floor, that grate is computer graphics.” The sequence continues with a sweeping motion-control time-lapse shot that travels around the control-room observers in a spiral pattern, a feat pulled off with an ingenious mix of technical innovation and old-school sleight of hand. A previs sequence from The Third Floor laid out the camera move, but because the Bolt arm motion-control rig could only travel on a straight track and cover roughly one-quarter of the required distance, The Garage came up with a way to break the shot into multiple passes. The passes would later be stitched together into one seemingly uninterrupted movement. The symmetrical set design – including the four identical workstations – helped complete the illusion, along with a clever solution that kept the four actors in the correct position relative to the camera. “The camera would basically get to the end of the track,” Leven explains. “Then everybody would switch positions 90 degrees. Everyone would get out of their chairs and move. The camera would go back to one, and it would look like one continuous move around in a circle because the room is perfectly symmetrical, and everything in it is perfectly symmetrical. We were able to move the actors, and it looks like the camera was going all the way around the room.” The final motion-control move switches from time-lapse back to real time as the camera passes by a workstation and reveals Mr. Drummond (Ólafur Darri Ólafsson) and Dr. Mauer (Robby Benson) standing behind it. Leven notes that each pass was completed with just one take. 5. Mark vs. Mark (Episode 210: “Cold Harbor”) The Severance season two finale begins with an increasingly tense conversation between Innie Mark and Outie Mark, as the two personas use a handheld video camera to send recorded messages back and forth. 
Their encounter takes place at night in a Lumon birthing cabin equipped with a severance threshold that allows Mark S. to become Mark Scout each time he steps outside and onto the balcony. The cabin set was built on a soundstage at York Studios in the Bronx, New York. The balcony section consisted of the snowy floor, two chairs, and a railing, all surrounded by a blue screen background. Everything else was up to ILM to create. “It was nice to have Ben’s trust that we could just do it,” Leven remembers. “He said, ‘Hey, you’re just going to make this look great, right?’ We said, ‘Yeah, no problem.’” Artists filled in the scene with CG water, mountains, and moonlight to match the on-set lighting, and, of course, more snow. As Mark Scout steps onto the balcony, the camera pulls back to a wide shot, revealing the cabin’s full exterior. “They built a part of the exterior of the set. But everything other than the windows, even the railing, was digitally replaced,” Leven says.

Bonus: Marching Band Magic (Episode 210: “Cold Harbor”)

Finally, our bonus visual effects shot appears roughly halfway through the season finale. To celebrate Mark S. completing the Cold Harbor file, Mr. Milchick orders up a marching band from Lumon’s Choreography and Merriment department. Band members pour into MDR, but Leven says roughly 15 to 20 shots required adding a few more digital duplicates. “They wanted it to look like MDR was filled with band members. And for several of the shots there were holes in there. It just didn’t feel full enough,” he says. In a shot featuring a God’s-eye view of MDR, band members hold dozens of white cards above their heads, forming a giant illustration of a smiling Mark S. 
with text that reads “100%.” “For the top shot, we had to find a different stage because the MDR ceiling is only about eight feet tall,” recalls Leven. “And Ben really pushed to have it done practically, which I think was the right call because you’ve already got the band members, you’ve made the costumes, you’ve got the instruments. Let’s find a place to shoot it.” To get the high shot, the production team set up on an empty soundstage, placing signature MDR-green carpet on the floor. A simple foam core mock-up of the team’s desks occupied the center of the frame, with the finished CG versions added later. Even without the constraints of the practical MDR walls and ceiling, the camera could only get enough height to capture about 30 band members in the shot. So the scene was digitally expanded, with artists adding more green carpet, CG walls, and about 50 more band members. “We painted in new band members, extracting what we could from the practical plate,” Leven says. “We moved them around; we added more, just to make it look as full as Ben wanted.” Every single white card in the shot, Leven points out, is completely digital. (Credit: Apple TV+).

A Mysterious and Important Collaboration

With fans now fiercely debating the many twists and turns of Severance season two, Leven is quick to credit ILM’s two main visual effects collaborators, east side effects and Mango FX INC, as well as ILM studios and artists around the globe, including San Francisco, Vancouver, Singapore, Sydney, and Mumbai. Leven also believes Severance ultimately benefited from a successful creative partnership between ILM and Ben Stiller. “This one clicked so well, and it really made a difference on the show,” Leven says. “I think we both had the same sort of visual shorthand in terms of what we wanted things to look like. One of the things I love about working with Ben is that he’s obviously grounded in reality. 
He wants to shoot as much stuff real as possible, but then sometimes there’s a shot that will either come to him late or he just knows is impractical to shoot. And he knows that ILM can deliver it.” — Clayton Sandell is a Star Wars author and enthusiast, TV storyteller, and a longtime fan of the creative people who keep Industrial Light & Magic and Skywalker Sound on the leading edge of visual effects and sound design. Follow him on Instagram (@claytonsandell), Bluesky (@claytonsandell.com), or X (@Clayton_Sandell).
  • Study the Secrets of Early American Photography at This New Exhibition

    “The New Art: American Photography, 1839-1910” at the Metropolitan Museum of Art will feature more than 250 photographs

    Lillian Ali

    - Staff Contributor

    June 6, 2025

    This image, taken by an unknown photographer in 1905, is an example of a cyanotype.
    The Metropolitan Museum of Art, William L. Schaeffer Collection

    A new exhibition at the crossroads of art, history and technology chronicles the beginnings of early American photography.
    Titled “The New Art: American Photography, 1839-1910,” the show at the Metropolitan Museum of Art in New York City features more than 250 photographs that capture “the complexities of a nation in the midst of profound transformation,” says Max Hollein, the Met’s CEO, in a statement.
    Curator Jeff Rosenheim tells the Wall Street Journal’s William Meyers that the exhibition focuses “on how early artists used the different formats to record individuals and the built and natural environments surrounding them.”

    A daguerreotype from around 1850 depicts a woman wearing a tignon, a head covering popular among Creole women of African descent.

    The Metropolitan Museum of Art, William L. Schaeffer Collection

    The oldest photographs on display are daguerreotypes, named for inventor Louis Daguerre, which were introduced in 1839 as the first publicly available form of photography. Creating a daguerreotype was a delicate, sometimes painstaking process that involved several chemical treatments and variable exposure times. The process yielded a sharply detailed picture on a silver background and was usually used for studio portraiture.
    The exhibition moves through the history of photography, from daguerreotypes and other photographs made on metal to those made on glass and, eventually, paper. It even features stereographs, two photos showing an object from slightly different points of view, creating an illusion of three-dimensionality.

    Installation view of "The New Art: American Photography, 1839-1910"

    Eugenia Tinsely / The Met

    Rosenheim believes that early photographic portraits empowered working-class Americans. “Photographic portraits play a role in people feeling like they could be a citizen,” Rosenheim tells the Guardian’s Veronica Esposito. “It’s a psychological, empowering thing to own your own likeness.”
    Photographs in the exhibition also spotlight key moments in American history. Items on view include a portrait of formerly enslaved individuals and an image of a conspirator in the assassination of Abraham Lincoln.
    The exhibition features big names in American photography, such as John Moran, who advocated for the recognition of photography as an art form, and Alice Austen, a pioneering landscape photographer.

    Group on Petria, Lake Mahopac, photographed in 1888 by Alice Austen

    The Metropolitan Museum of Art, William L. Schaeffer Collection

    Many of the photographs on display were taken by unknown artists. One of the most recent photos in the exhibition, taken by an unknown artist in 1905, is a cyanotype depicting figures tobogganing on a hill in Massachusetts. Cyanotypes were created by exposing chemically treated paper to UV light, such as sunlight, yielding the blue pigment they were named for.
    Beyond portraits and landscapes, the exhibition features several enigmatic images, such as one of a boot placed in a roller skate and positioned on top of a stool. Rosenheim tells the Guardian that the mysterious photo “asks more questions than it answers.”

    An unknown photographer took this unconventional still life in the 1860s.

    The Metropolitan Museum of Art, William L. Schaeffer Collection

    “It’s very emblematic of the whole of 19th-century American photography,” he adds. The exhibition features photographs from across time and economic divides, with portraits of the working class and wealthy alike.
    “The collection is just filled with the everyday stories of people,” Rosenheim tells the Guardian. “I don’t think painting can touch that.”
    “The New Art: American Photography, 1839-1910” is on view at the Metropolitan Museum of Art in New York City through July 20, 2025.

    www.smithsonianmag.com
  • The Legal Accountability of AI-Generated Deepfakes in Election Misinformation

    How Deepfakes Are Created

    Generative AI models enable the creation of highly realistic fake media. Most deepfakes today are produced by training deep neural networks on real images, video or audio of a target person. The two predominant AI architectures are generative adversarial networks (GANs) and autoencoders. A GAN consists of a generator network that produces synthetic images and a discriminator network that tries to distinguish fakes from real data. Through iterative training, the generator learns to produce outputs that increasingly fool the discriminator¹. Autoencoder-based tools similarly learn to encode a target face and then decode it onto a source video. In practice, deepfake creators use accessible software: open-source tools like DeepFaceLab and FaceSwap dominate video face-swapping². Voice-cloning tools can mimic a person’s speech from minutes of audio. Commercial platforms like Synthesia allow text-to-video avatars, which have already been misused in disinformation campaigns³. Even mobile apps let users do basic face swaps in minutes⁴. In short, advances in GANs and related models make deepfakes cheaper and easier to generate than ever.

    Diagram of a generative adversarial network: A generator network creates fake images from random input and a discriminator network distinguishes fakes from real examples. Over time the generator improves until its outputs “fool” the discriminator⁵
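The adversarial loop in the diagram can be sketched end to end in a few lines. This is a deliberately minimal toy, not any real deepfake tool: the "images" are 1-D numbers, the generator is linear, and the discriminator is a logistic classifier; all names and the target distribution are invented for illustration.

```python
# Toy GAN loop (numpy only): the generator g(z) = w_g*z + b_g learns to
# emit samples resembling draws from the "real" distribution N(4, 1),
# while a logistic discriminator learns to tell real from fake.
import numpy as np

rng = np.random.default_rng(0)
REAL_MEAN = 4.0
w_g, b_g = rng.normal(), 0.0   # generator parameters
w_d, b_d = rng.normal(), 0.0   # discriminator parameters
lr = 0.05

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(2000):
    z = rng.normal(size=64)                      # random input ("noise")
    real = rng.normal(REAL_MEAN, 1.0, size=64)   # real samples
    fake = w_g * z + b_g                         # synthetic samples

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(w_d * real + b_d)
    d_fake = sigmoid(w_d * fake + b_d)
    w_d -= lr * (np.mean((d_real - 1) * real) + np.mean(d_fake * fake))
    b_d -= lr * (np.mean(d_real - 1) + np.mean(d_fake))

    # Generator step: adjust g so the discriminator scores fakes as real.
    d_fake = sigmoid(w_d * (w_g * z + b_g) + b_d)
    upstream = (d_fake - 1) * w_d                # gradient into each fake
    w_g -= lr * np.mean(upstream * z)
    b_g -= lr * np.mean(upstream)

# After training, generated samples cluster near the real mean.
gen_mean = float(np.mean(w_g * rng.normal(size=1000) + b_g))
```

The same dynamic, scaled up to convolutional networks and image data, is what drives the face-swap tools described above: the generator improves only because the discriminator keeps improving against it.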

    During creation, a deepfake algorithm is typically trained on a large dataset of real images or audio from the target. The more varied and high-quality the training data, the more realistic the deepfake. The output often then undergoes post-processing to enhance believability¹. Technical defenses focus on two fronts: detection and authentication. Detection uses AI models to spot inconsistencies that betray a synthetic origin⁵. Authentication embeds markers before dissemination – for example, invisible watermarks or cryptographically signed metadata indicating authenticity⁶. The EU AI Act will soon mandate that major AI content providers embed machine-readable “watermark” signals in synthetic media⁷. However, as GAO notes, detection is an arms race – even a marked deepfake can sometimes evade notice – and labels alone don’t stop false narratives from spreading⁸⁹.
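The authentication idea above can be illustrated with a minimal sketch: a publisher attaches metadata whose integrity tag covers a hash of the media bytes, so any later edit breaks verification. This uses a shared-secret HMAC purely for brevity; real provenance schemes (e.g., certificate-based signing as in C2PA) use public-key signatures, and every name here is hypothetical.

```python
# Sketch of authentication-by-signed-metadata (illustrative only).
import hashlib
import hmac
import json

SIGNING_KEY = b"publisher-secret-key"  # hypothetical publisher key

def sign_media(media_bytes: bytes, source: str) -> dict:
    """Attach provenance metadata whose tag covers the content hash."""
    record = {"source": source,
              "sha256": hashlib.sha256(media_bytes).hexdigest()}
    payload = json.dumps(record, sort_keys=True).encode()
    record["tag"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_media(media_bytes: bytes, record: dict) -> bool:
    """Re-derive the tag; editing the media or the metadata breaks it."""
    claimed = dict(record)
    tag = claimed.pop("tag", "")
    if hashlib.sha256(media_bytes).hexdigest() != claimed.get("sha256"):
        return False  # media bytes were altered after signing
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)

original = b"\x89PNG...frame bytes..."
meta = sign_media(original, source="Example Newsroom")
```

Verification succeeds only for the exact signed bytes and metadata; a doctored copy, or a forged "source" field, fails the check. This is why the GAO caveat matters: authentication proves what *was* signed, but it cannot stop an unsigned fake from circulating.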

    Deepfakes in Recent Elections: Examples

    Deepfakes and AI-generated imagery have already made headlines in election cycles around the world. In the 2024 U.S. primary season, a digitally altered audio robocall mimicked President Biden’s voice urging Democrats not to vote in the New Hampshire primary. The caller was later fined $6 million by the FCC and indicted under existing telemarketing laws¹⁰¹¹. Also in 2024, former President Trump posted on social media a collage implying that pop singer Taylor Swift endorsed his campaign, using AI-generated images of Swift in “Swifties for Trump” shirts¹². The posts sparked media uproar, though analysts noted the same effect could have been achieved without AI¹². Similarly, Elon Musk’s X platform carried AI-generated clips, including a parody “ad” depicting Vice President Harris’s voice via an AI clone¹³.

    Beyond the U.S., deepfake-like content has appeared globally. In Indonesia’s 2024 presidential election, a video surfaced on social media in which a convincingly generated likeness of the late President Suharto appeared to endorse the candidate of the Golkar Party. Days later, the endorsed candidate won the presidency¹⁴. In Bangladesh, a viral deepfake video superimposed the face of opposition leader Rumeen Farhana onto a bikini-clad body – an incendiary fabrication designed to discredit her in the conservative Muslim-majority society¹⁵. Moldova’s pro-Western President Maia Sandu has been repeatedly targeted by AI-driven disinformation; one deepfake video falsely showed her resigning and endorsing a Russian-friendly party, apparently to sow distrust in the electoral process¹⁶. Even in Taiwan, a TikTok clip circulated that synthetically portrayed a U.S. politician making foreign-policy statements – stoking confusion ahead of Taiwanese elections¹⁷. In Slovakia’s recent campaign, AI-generated audio mimicking the liberal party leader suggested he plotted vote-rigging and beer-price hikes – instantly spreading on social media just days before the election¹⁸. These examples show that deepfakes have touched diverse polities, often aiming to undermine candidates or confuse voters¹⁵¹⁸.

    Notably, many of the most viral “deepfakes” in 2024 were actually circulated as obvious memes or claims, rather than subtle deceptions. Experts observed that outright undetectable AI deepfakes were relatively rare; more common were AI-generated memes plainly shared by partisans, or cheaply doctored “cheapfakes” made with basic editing tools¹³¹⁹. For instance, social media was awash with memes of Kamala Harris in Soviet garb or of Black Americans holding Trump signs¹³, but these were typically used satirically, not meant to be secretly believed. Nonetheless, even unsophisticated fakes can sway opinion: a U.S. study found that false presidential ads did change voter attitudes in swing states. In sum, deepfakes are a real and growing phenomenon in election campaigns²⁰²¹ worldwide – a trend taken seriously by voters and regulators alike.

    U.S. Legal Framework and Accountability

    In the U.S., deepfake creators and distributors of election misinformation face a patchwork of legal tools, but no single comprehensive federal “deepfake law.” Existing laws relevant to disinformation include statutes against impersonating government officials and targeted provisions like criminal electioneering communications. In some cases ordinary laws have been stretched: the NH robocall case used the Telephone Consumer Protection Act and mail/telemarketing fraud provisions, resulting in the $6 million fine and a criminal charge. Similarly, voice impostors can potentially violate laws against “false advertising” or “unlawful corporate communications.” However, these laws were enacted before AI, and litigators have warned they often do not fit neatly. For example, deceptive deepfake claims not tied to a specific victim do not easily fit into defamation or privacy torts. Voter intimidation laws also leave a gap for non-threatening falsehoods about voting logistics or endorsements.

    Recognizing these gaps, some courts and agencies are invoking other theories. The U.S. Department of Justice has recently charged individuals under broad fraud statutes, and state attorneys general have considered treating deepfake misinformation as interference with voting rights. Notably, the Federal Election Commission is preparing to enforce new rules: in April 2024 it issued an advisory opinion limiting “non-candidate electioneering communications” that use falsified media, effectively requiring that political ads use only real images of the candidate. If finalized, that would make it unlawful for campaigns to pay for ads depicting a candidate saying things they never did. Similarly, the Federal Trade Commission and Department of Justice have signaled that purely commercial deepfakes could violate consumer protection or election laws.

    U.S. Legislation and Proposals

    Federal lawmakers have proposed new statutes. The DEEPFAKES Accountability Act would, among other things, impose a disclosure requirement: political ads featuring a manipulated media likeness would need clear disclaimers identifying the content as synthetic. It would also increase penalties for producing false election videos or audio intended to influence the vote. While not yet enacted, supporters argue it would provide a uniform rule for all federal and state campaigns. The Brennan Center supports transparency requirements over outright bans, suggesting laws should narrowly target deceptive deepfakes in paid ads or certain categories while carving out parody and news coverage.

    At the state level, over 20 states have passed deepfake laws specifically for elections. For example, Florida and California forbid distributing falsified audio/visual media of candidates with intent to deceive voters. Some states define “deepfake” in statute and allow candidates to sue, or seek the disqualification of, violators. These measures have had mixed success: courts have struck down overly broad provisions that acted as prior restraints. Critically, these state laws raise First Amendment issues: political speech is highly protected, so any restriction must be tightly tailored. Already, Texas and Virginia statutes are under legal review, and Elon Musk’s company has sued to challenge California’s law as unconstitutional. In practice, most lawsuits have so far centered on defamation or intellectual property, rather than election-focused statutes.

    Policy Recommendations: Balancing Integrity and Speech

    Given the rapidly evolving technology, experts recommend a multi-pronged approach. Most stress transparency and disclosure as core principles. For example, the Brennan Center urges requiring any political communication that uses AI-synthesized images or voice to include a clear label. This could be a digital watermark or a visible disclaimer. Transparency has two advantages: it forces campaigns and platforms to “own” the use of AI, and it alerts audiences to treat the content with skepticism.

    Outright bans on all deepfakes would likely violate free speech, but targeted bans on specific harms may be defensible. Indeed, Florida already penalizes misuse of recordings in voter suppression. Another recommendation is limited liability: tying penalties to demonstrable intent to mislead, not to the mere act of content creation. Both U.S. federal proposals and EU law generally condition fines on the “appearance of fraud” or deception.

    Technical solutions can complement laws. Watermarking original media could deter the reuse of authentic images in doctored fakes. Open tools for deepfake detection – some supported by government research grants – should be deployed by fact-checkers and social platforms. Making detection datasets publicly available helps improve AI models that spot fakes. International cooperation is also urged: cross-border agreements on information-sharing could help trace and halt disinformation campaigns. The G7 and APEC have both recently committed to fighting election interference via AI, which may lead to joint norms or rapid-response teams.
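To make the detection side concrete, here is a deliberately naive heuristic, not a production detector: real camera output carries broadband sensor noise, while heavily smoothed or upsampled fakes often concentrate spectral energy at low frequencies. Actual detection tools are trained classifiers; this sketch only illustrates the kind of statistical artifact they look for, and all data here is synthetic.

```python
# Naive frequency-domain heuristic (illustration only): compare how much
# spectral energy an image carries outside the low-frequency band.
import numpy as np

def high_freq_ratio(img: np.ndarray) -> float:
    """Fraction of spectral energy outside the central low-frequency block."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spectrum.shape
    low = spectrum[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4].sum()
    return float(1.0 - low / spectrum.sum())

rng = np.random.default_rng(1)
camera_like = rng.normal(size=(64, 64))   # broadband "sensor noise" stand-in
kernel = np.ones((5, 5)) / 25.0           # 5x5 box blur
# FFT-based convolution: a crudely over-smoothed frame stands in for a fake
smoothed = np.real(np.fft.ifft2(np.fft.fft2(camera_like) *
                                np.fft.fft2(kernel, s=(64, 64))))

ratio_real, ratio_fake = high_freq_ratio(camera_like), high_freq_ratio(smoothed)
```

Here the blurred frame scores a much lower high-frequency ratio than the noisy one, which is exactly the sort of cue a single post-processing pass can erase; this fragility is why the article's point about an arms race, and about pairing detection with authentication, holds.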

    Ultimately, many analysts believe the strongest “cure” is a well-informed public: education campaigns to teach voters to question sensational media, and a robust independent press to debunk falsehoods swiftly. While the law can penalize the worst offenders, awareness and resilience in the electorate are crucial buffers against influence operations. As Georgia Tech’s Sean Parker quipped in 2019, “the real question is not if deepfakes will influence elections, but who will be empowered by the first effective one.” Thus policies should aim to deter malicious use without unduly chilling innovation or satire.


    The post The Legal Accountability of AI-Generated Deepfakes in Election Misinformation appeared first on MarkTechPost.
The Brennan Center supports transparency requirements over outright bans, suggesting laws should narrowly target deceptive deepfakes in paid ads or certain categorieswhile carving out parody and news coverage. At the state level, over 20 states have passed deepfake laws specifically for elections. For example, Florida and California forbid distributing falsified audio/visual media of candidates with intent to deceive voters. Some statesdefine “deepfake” in statutes and allow candidates to sue or revoke candidacies of violators. These measures have had mixed success: courts have struck down overly broad provisions that acted as prior restraints. Critically, these state laws raise First Amendment issues: political speech is highly protected, so any restriction must be tightly tailored. Already, Texas and Virginia statutes are under legal review, and Elon Musk’s company has sued under California’s lawas unconstitutional. In practice, most lawsuits have so far centered on defamation or intellectual property, rather than election-focused statutes. Policy Recommendations: Balancing Integrity and Speech Given the rapidly evolving technology, experts recommend a multi-pronged approach. Most stress transparency and disclosure as core principles. For example, the Brennan Center urges requiring any political communication that uses AI-synthesized images or voice to include a clear label. This could be a digital watermark or a visible disclaimer. Transparency has two advantages: it forces campaigns and platforms to “own” the use of AI, and it alerts audiences to treat the content with skepticism. Outright bans on all deepfakes would likely violate free speech, but targeted bans on specific harmsmay be defensible. Indeed, Florida already penalizes misuse of recordings in voter suppression. Another recommendation is limited liability: tying penalties to demonstrable intent to mislead, not to the mere act of content creation. Both U.S. 
federal proposals and EU law generally condition fines on the “appearance of fraud” or deception. Technical solutions can complement laws. Watermarking original mediacould deter the reuse of authentic images in doctored fakes. Open tools for deepfake detection – some supported by government research grants – should be deployed by fact-checkers and social platforms. Making detection datasets publicly availablehelps improve AI models to spot fakes. International cooperation is also urged: cross-border agreements on information-sharing could help trace and halt disinformation campaigns. The G7 and APEC have all recently committed to fighting election interference via AI, which may lead to joint norms or rapid response teams. Ultimately, many analysts believe the strongest “cure” is a well-informed public: education campaigns to teach voters to question sensational media, and a robust independent press to debunk falsehoods swiftly. While the law can penalize the worst offenders, awareness and resilience in the electorate are crucial buffers against influence operations. As Georgia Tech’s Sean Parker quipped in 2019, “the real question is not if deepfakes will influence elections, but who will be empowered by the first effective one.” Thus policies should aim to deter malicious use without unduly chilling innovation or satire. References: /. /. . . . . . . . /. . . /. /. . The post The Legal Accountability of AI-Generated Deepfakes in Election Misinformation appeared first on MarkTechPost. #legal #accountability #aigenerated #deepfakes #election
    The Legal Accountability of AI-Generated Deepfakes in Election Misinformation
    www.marktechpost.com
    How Deepfakes Are Created

    Generative AI models enable the creation of highly realistic fake media. Most deepfakes today are produced by training deep neural networks on real images, video, or audio of a target person. The two predominant AI architectures are generative adversarial networks (GANs) and autoencoders. A GAN consists of a generator network that produces synthetic images and a discriminator network that tries to distinguish fakes from real data. Through iterative training, the generator learns to produce outputs that increasingly fool the discriminator¹. Autoencoder-based tools similarly learn to encode a target face and then decode it onto a source video.

    In practice, deepfake creators use accessible software: open-source tools like DeepFaceLab and FaceSwap dominate video face-swapping (one estimate suggests DeepFaceLab was used for over 95% of known deepfake videos)². Voice-cloning tools (often built on similar AI principles) can mimic a person’s speech from minutes of audio. Commercial platforms like Synthesia allow text-to-video avatars (turning typed scripts into lifelike “spokespeople”), which have already been misused in disinformation campaigns³. Even mobile apps (e.g. FaceApp, Zao) let users do basic face swaps in minutes⁴. In short, advances in GANs and related models make deepfakes cheaper and easier to generate than ever.

    [Figure] Diagram of a generative adversarial network (GAN): a generator network creates fake images from random input, and a discriminator network distinguishes fakes from real examples. Over time the generator improves until its outputs “fool” the discriminator⁵.

    During creation, a deepfake algorithm is typically trained on a large dataset of real images or audio from the target. The more varied and high-quality the training data, the more realistic the deepfake. The output often then undergoes post-processing (color adjustments, lip-syncing refinements) to enhance believability¹.
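The adversarial training loop described above can be illustrated in a deliberately tiny one-dimensional setting. This is a toy sketch, not production deepfake code: the "generator" is just a learned shift applied to noise, the "discriminator" a hand-rolled logistic regression, and all gradients are written out analytically.

```python
import math
import random

random.seed(0)

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

# "Real" data ~ N(4, 1). Generator produces fake = theta + z with z ~ N(0, 1),
# so matching the real distribution means learning theta close to 4.
theta = 0.0
# Discriminator: D(x) = sigmoid(w*x + b), trained to map real -> 1, fake -> 0.
w, b = 0.0, 0.0
lr_d, lr_g, batch = 0.1, 0.1, 64

for _ in range(2000):
    real = [random.gauss(4.0, 1.0) for _ in range(batch)]
    fake = [theta + random.gauss(0.0, 1.0) for _ in range(batch)]

    # Discriminator step: gradient of binary cross-entropy for logistic D.
    samples = [(xv, 1.0) for xv in real] + [(xv, 0.0) for xv in fake]
    gw = sum((sigmoid(w * xv + b) - yv) * xv for xv, yv in samples) / len(samples)
    gb = sum((sigmoid(w * xv + b) - yv) for xv, yv in samples) / len(samples)
    w -= lr_d * gw
    b -= lr_d * gb

    # Generator step (non-saturating loss): ascend log D(fake).
    # d/dtheta of -log D(theta + z) works out to (D(x) - 1) * w.
    gt = sum((sigmoid(w * xv + b) - 1.0) * w for xv in fake) / batch
    theta -= lr_g * gt

# After training, theta should sit near the real mean (around 4): the generator
# has learned to produce samples the discriminator can no longer separate.
```

Real deepfake pipelines replace the scalar shift with a deep convolutional generator and the logistic regression with a deep discriminator, but the alternating two-player update is the same.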
Technical defenses focus on two fronts: detection and authentication. Detection uses AI models to spot inconsistencies (blinking irregularities, audio artifacts, or metadata mismatches) that betray a synthetic origin⁵. Authentication embeds markers before dissemination, for example invisible watermarks or cryptographically signed metadata indicating authenticity⁶. The EU AI Act will soon mandate that major AI content providers embed machine-readable “watermark” signals in synthetic media⁷. However, as the GAO notes, detection is an arms race (even a marked deepfake can sometimes evade notice), and labels alone don’t stop false narratives from spreading⁸⁹.

    Deepfakes in Recent Elections: Examples

    Deepfakes and AI-generated imagery have already made headlines in election cycles around the world. In the 2024 U.S. primary season, a digitally altered audio robocall mimicked President Biden’s voice, urging Democrats not to vote in the New Hampshire primary. The caller (“Susan Anderson”) was later fined $6 million by the FCC and indicted under existing telemarketing laws¹⁰¹¹. (Importantly, FCC rules on robocalls applied regardless of AI: the perpetrator could have used a voice actor or a recording instead.) Also in 2024, former President Trump posted on social media a collage implying that pop singer Taylor Swift had endorsed his campaign, using AI-generated images of Swift in “Swifties for Trump” shirts¹². The posts sparked a media uproar, though analysts noted the same effect could have been achieved without AI (e.g., by photoshopping text onto real images)¹². Similarly, Elon Musk’s X platform carried AI-generated clips, including a parody “ad” depicting Vice President Harris’s voice via an AI clone¹³.

    Beyond the U.S., deepfake-like content has appeared globally. In Indonesia’s 2024 presidential election, a video surfaced on social media in which a convincingly generated image of the late President Suharto appeared to endorse the candidate of the Golkar Party.
Days later, the endorsed candidate (who is Suharto’s son-in-law) won the presidency¹⁴. In Bangladesh, a viral deepfake video superimposed the face of opposition leader Rumeen Farhana onto a bikini-clad body, an incendiary fabrication designed to discredit her in the conservative Muslim-majority society¹⁵. Moldova’s pro-Western President Maia Sandu has been repeatedly targeted by AI-driven disinformation; one deepfake video falsely showed her resigning and endorsing a Russian-friendly party, apparently to sow distrust in the electoral process¹⁶. Even in Taiwan (amid tensions with China), a TikTok clip circulated that synthetically portrayed a U.S. politician making foreign-policy statements, stoking confusion ahead of Taiwanese elections¹⁷. In Slovakia’s recent campaign, AI-generated audio mimicking the liberal party leader suggested he had plotted vote-rigging and beer-price hikes, and it spread instantly on social media just days before the election¹⁸. These examples show that deepfakes have touched diverse polities (from Bangladesh and Indonesia to Moldova, Slovakia, India, and beyond), often aiming to undermine candidates or confuse voters¹⁵¹⁸.

    Notably, many of the most viral “deepfakes” in 2024 circulated as obvious memes or claims rather than subtle deceptions. Experts observed that outright undetectable AI deepfakes were relatively rare; more common were AI-generated memes plainly shared by partisans, or cheaply doctored “cheapfakes” made with basic editing tools¹³¹⁹. For instance, social media was awash with memes of Kamala Harris in Soviet garb or of Black Americans holding Trump signs¹³, but these were typically used satirically, not meant to be secretly believed. Nonetheless, even unsophisticated fakes can sway opinion: a U.S. study found that false presidential ads (not necessarily AI-made) did change voter attitudes in swing states.
In sum, deepfakes are a real and growing phenomenon in election campaigns worldwide²⁰²¹, a trend taken seriously by voters and regulators alike.

    U.S. Legal Framework and Accountability

    In the U.S., deepfake creators and distributors of election misinformation face a patchwork of tools, but no single comprehensive federal “deepfake law.” Existing laws relevant to disinformation include statutes against impersonating government officials, electioneering rules (such as the Bipartisan Campaign Reform Act, which requires disclaimers on political ads), and targeted statutes such as those on criminal electioneering communications. In some cases ordinary laws have been stretched: the New Hampshire robocall case relied on the Telephone Consumer Protection Act and mail/telemarketing fraud provisions, resulting in the $6 million fine and a criminal charge. Similarly, voice impersonators can potentially violate laws against “false advertising” or “unlawful corporate communications.” However, these laws were enacted before AI, and litigators have warned that they often do not fit neatly. For example, deceptive deepfake claims not tied to a specific victim do not fit easily into defamation or privacy torts. Voter-intimidation laws (prohibiting threats or coercion) also leave a gap for non-threatening falsehoods about voting logistics or endorsements.

    Recognizing these gaps, some courts and agencies are invoking other theories. The U.S. Department of Justice has recently charged individuals under broad fraud statutes (e.g., for a plot to impersonate an aide to swing votes in 2020), and state attorneys general have treated deepfake misinformation as interference with voting rights. Notably, the Federal Election Commission (FEC) is preparing to enforce new rules: in April 2024 it issued an advisory opinion limiting “non-candidate electioneering communications” that use falsified media, effectively requiring that political ads use only real images of the candidate.
If finalized, that would make it unlawful for campaigns to pay for ads depicting a candidate saying things they never did. Similarly, the Federal Trade Commission (FTC) and the Department of Justice (DOJ) have signaled that purely commercial deepfakes could violate consumer-protection or election laws (for example, liability for mass false impersonation or for foreign-funded electioneering).

    U.S. Legislation and Proposals

    Federal lawmakers have proposed new statutes. The DEEPFAKES Accountability Act (H.R. 5586 in the 118th Congress) would, among other things, impose a disclosure requirement: political ads featuring a manipulated media likeness would need clear disclaimers identifying the content as synthetic. It would also increase penalties for producing false election videos or audio intended to influence the vote. While not yet enacted, supporters argue it would provide a uniform rule for all federal and state campaigns. The Brennan Center supports transparency requirements over outright bans, suggesting laws should narrowly target deceptive deepfakes in paid ads or certain categories (e.g., false claims about the time, place, or manner of voting) while carving out parody and news coverage.

    At the state level, over 20 states have passed deepfake laws specifically for elections. For example, Florida and California forbid distributing falsified audio/visual media of candidates with intent to deceive voters (though Florida’s law exempts parody). Some states (like Texas) define “deepfake” in statute and allow candidates to sue violators or seek revocation of a violator’s candidacy. These measures have had mixed success: courts have struck down overly broad provisions that acted as prior restraints (e.g., Minnesota’s 2023 law was challenged for threatening injunctions against anyone “reasonably believed” to violate it). Critically, these state laws raise First Amendment issues: political speech is highly protected, so any restriction must be tightly tailored.
Already, Texas and Virginia statutes are under legal review, and Elon Musk’s company has sued to challenge California’s law (which requires platforms to label or block deepfakes) as unconstitutional. In practice, most lawsuits so far have centered on defamation or intellectual property (for instance, a celebrity suing over a botched celebrity-deepfake video) rather than on election-focused statutes.

    Policy Recommendations: Balancing Integrity and Speech

    Given the rapidly evolving technology, experts recommend a multi-pronged approach. Most stress transparency and disclosure as core principles. For example, the Brennan Center urges requiring any political communication that uses AI-synthesized images or voice to include a clear label. This could be a digital watermark or a visible disclaimer. Transparency has two advantages: it forces campaigns and platforms to “own” their use of AI, and it alerts audiences to treat the content with skepticism. Outright bans on all deepfakes would likely violate free speech, but targeted bans on specific harms (e.g., automated phone calls impersonating voters, or videos claiming false polling information) may be defensible. Indeed, Florida already penalizes the misuse of recordings in voter suppression. Another recommendation is limited liability: tying penalties to demonstrable intent to mislead, not to the mere act of content creation. Both U.S. federal proposals and EU law generally condition fines on the “appearance of fraud” or deception.

    Technical solutions can complement laws. Watermarking original media (as encouraged by the EU AI Act) could deter the reuse of authentic images in doctored fakes. Open tools for deepfake detection, some supported by government research grants, should be deployed by fact-checkers and social platforms. Making detection datasets publicly available (e.g., the MIT OpenDATATEST) helps improve AI models that spot fakes.
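The signed-metadata authentication idea discussed in this article can be sketched with Python's standard library. This is a hypothetical illustration of the tamper-evidence principle only: the field names, metadata, and shared secret key are invented, and real provenance schemes (such as C2PA-style content credentials) use public-key signatures and standardized manifests rather than a shared HMAC key.

```python
import hashlib
import hmac
import json

# Stand-in key for the sketch; production systems would use PKI, not a shared secret.
SECRET_KEY = b"demo-signing-key"

def sign_media(media_bytes: bytes, metadata: dict) -> dict:
    """Attach a signed provenance record to a media file's hash and metadata."""
    record = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "metadata": metadata,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_media(media_bytes: bytes, record: dict) -> bool:
    """True only if neither the media bytes nor the metadata were altered."""
    unsigned = {"sha256": record["sha256"], "metadata": record["metadata"]}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, record["signature"])
            and hashlib.sha256(media_bytes).hexdigest() == record["sha256"])

original = b"\x89PNG...raw image bytes..."  # dummy bytes standing in for a real file
rec = sign_media(original, {"source": "press-office", "date": "2024-10-01"})
assert verify_media(original, rec)        # authentic copy verifies
assert not verify_media(b"tampered", rec) # altered pixels fail verification
```

Because the signature covers both the content hash and the metadata, editing either one invalidates the record, which is exactly the property watermark-and-sign proposals rely on.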
International cooperation is also urged: cross-border agreements on information sharing could help trace and halt disinformation campaigns. The G7 and APEC have both recently committed to fighting election interference via AI, which may lead to joint norms or rapid-response teams. Ultimately, many analysts believe the strongest “cure” is a well-informed public: education campaigns that teach voters to question sensational media, and a robust independent press that debunks falsehoods swiftly. While the law can penalize the worst offenders, awareness and resilience in the electorate are crucial buffers against influence operations. As Georgia Tech’s Sean Parker quipped in 2019, “the real question is not if deepfakes will influence elections, but who will be empowered by the first effective one.” Thus policies should aim to deter malicious use without unduly chilling innovation or satire.

    References:
    https://www.security.org/resources/deepfake-statistics/
    https://www.wired.com/story/synthesia-ai-deepfakes-it-control-riparbelli/
    https://www.gao.gov/products/gao-24-107292
    https://technologyquotient.freshfields.com/post/102jb19/eu-ai-act-unpacked-8-new-rules-on-deepfakes
    https://knightcolumbia.org/blog/we-looked-at-78-election-deepfakes-political-misinformation-is-not-an-ai-problem
    https://www.npr.org/2024/12/21/nx-s1-5220301/deepfakes-memes-artificial-intelligence-elections
    https://apnews.com/article/artificial-intelligence-elections-disinformation-chatgpt-bc283e7426402f0b4baa7df280a4c3fd
    https://www.lawfaremedia.org/article/new-and-old-tools-to-tackle-deepfakes-and-election-lies-in-2024
    https://www.brennancenter.org/our-work/research-reports/regulating-ai-deepfakes-and-synthetic-media-political-arena
    https://firstamendment.mtsu.edu/article/political-deepfakes-and-elections/
    https://www.ncsl.org/technology-and-communication/deceptive-audio-or-visual-media-deepfakes-2024-legislation
    https://law.unh.edu/sites/default/files/media/2022/06/nagumotu_pp113-157.pdf
    https://dfrlab.org/2024/10/02/brazil-election-ai-research/
    https://dfrlab.org/2024/11/26/brazil-election-ai-deepfakes/
    https://freedomhouse.org/article/eu-digital-services-act-win-transparency

    The post The Legal Accountability of AI-Generated Deepfakes in Election Misinformation appeared first on MarkTechPost.
  • The Download: sycophantic LLMs, and the AI Hype Index

    This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

    This benchmark used Reddit’s AITA to test how much AI models suck up to us

    Back in April, OpenAI announced it was rolling back an update to its GPT-4o model that made ChatGPT’s responses to user queries too sycophantic. An AI model that acts in an overly agreeable and flattering way is more than just annoying. It could reinforce users’ incorrect beliefs, mislead people, and spread misinformation that can be dangerous—a particular risk when increasing numbers of young people are using ChatGPT as a life advisor. And because sycophancy is difficult to detect, it can go unnoticed until a model or update has already been deployed. A new benchmark called Elephant that measures the sycophantic tendencies of major AI models could help companies avoid these issues in the future. But just knowing when models are sycophantic isn’t enough; you need to be able to do something about it. And that’s trickier. Read the full story.

    —Rhiannon Williams

    The AI Hype Index

    Separating AI reality from hyped-up fiction isn’t always easy. That’s why we’ve created the AI Hype Index—a simple, at-a-glance summary of everything you need to know about the state of the industry. Take a look at this month’s edition of the index here.

    The must-reads

    I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

    1 Anduril is partnering with Meta to build an advanced weapons system
    EagleEye’s VR headsets will enhance soldiers’ hearing and vision. (WSJ $)
    + Palmer Luckey wants to turn “warfighters into technomancers.” (TechCrunch)
    + Luckey and Mark Zuckerberg have buried the hatchet, then. (Insider $)
    + Palmer Luckey on the Pentagon’s future of mixed reality. (MIT Technology Review)

    2 A new Texas law requires app stores to verify users’ ages
    It’s following in Utah’s footsteps, which passed a similar bill in March. (NYT $)
    + Apple has pushed back on the law. (CNN)

    3 What happens to DOGE now?
    It has lost its leader and a top lieutenant within the space of a week. (WSJ $)
    + Musk’s departure raises questions over how much power it will wield without him. (The Guardian)
    + DOGE’s tech takeover threatens the safety and stability of our critical data. (MIT Technology Review)

    4 NASA’s ambitions of a 2027 moon landing are looking less likely
    It needs SpaceX’s Starship, which keeps blowing up. (WP $)
    + Is there a viable alternative? (New Scientist $)

    5 Students are using AI to generate nude images of each other
    It’s a grave and growing problem that no one has a solution for. (404 Media)

    6 Google AI Overviews doesn’t know what year it is
    A year after its introduction, the feature is still making obvious mistakes. (Wired $)
    + Google’s new AI-powered search isn’t fit to handle even basic queries. (NYT $)
    + The company is pushing AI into everything. Will it pay off? (Vox)
    + Why Google’s AI Overviews gets things wrong. (MIT Technology Review)

    7 Hugging Face has created two humanoid robots
    The machines are open source, meaning anyone can build software for them. (TechCrunch)

    8 A popular vibe coding app has a major security flaw
    Despite being notified about it months ago. (Semafor)
    + Any AI coding program catering to amateurs faces the same issue. (The Information $)
    + What is vibe coding, exactly? (MIT Technology Review)

    9 AI-generated videos are becoming way more realistic
    But not when it comes to depicting gymnastics. (Ars Technica)

    10 This electronic tattoo measures your stress levels
    Consider it a mood ring for your face. (IEEE Spectrum)

    Quote of the day

    “I think finally we are seeing Apple being dragged into the child safety arena kicking and screaming.”

    —Sarah Gardner, CEO of child safety collective Heat Initiative, tells the Washington Post why Texas’ new app store law could signal a turning point for Apple.

    One more thing

    House-flipping algorithms are coming to your neighborhood
    When Michael Maxson found his dream home in Nevada, it was not owned by a person but by a tech company, Zillow. When he went to take a look at the property, however, he discovered it damaged by a huge water leak. Despite offering to handle the costly repairs himself, Maxson discovered that the house had already been sold to another family, at the same price he had offered.

    During this time, Zillow lost more than $420 million in three months of erratic house buying and unprofitable sales, leading analysts to question whether the entire tech-driven model is really viable. For the rest of us, a bigger question remains: Does the arrival of Silicon Valley tech point to a better future for housing or an industry disruption to fear? Read the full story.

    —Matthew Ponsford

    We can still have nice things

    A place for comfort, fun and distraction to brighten up your day.
    + A 100-mile real-time ultramarathon video game that lasts anywhere up to 27 hours is about as fun as it sounds.
    + Here’s how edible glitter could help save the humble water vole from extinction.
    + Cleaning massive statues is not for the faint-hearted.
    + When is a flute teacher not a flautist? When he’s a whistleblower.
    #download #sycophantic #llms #hype #index
    The Download: sycophantic LLMs, and the AI Hype Index
    This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology. This benchmark used Reddit’s AITA to test how much AI models suck up to us Back in April, OpenAI announced it was rolling back an update to its GPT-4o model that made ChatGPT’s responses to user queries too sycophantic.An AI model that acts in an overly agreeable and flattering way is more than just annoying. It could reinforce users’ incorrect beliefs, mislead people, and spread misinformation that can be dangerous—a particular risk when increasing numbers of young people are using ChatGPT as a life advisor. And because sycophancy is difficult to detect, it can go unnoticed until a model or update has already been deployed.A new benchmark called Elephant that measures the sycophantic tendencies of major AI models could help companies avoid these issues in the future. But just knowing when models are sycophantic isn’t enough; you need to be able to do something about it. And that’s trickier. Read the full story. —Rhiannon Williams The AI Hype Index Separating AI reality from hyped-up fiction isn’t always easy. That’s why we’ve created the AI Hype Index—a simple, at-a-glance summary of everything you need to know about the state of the industry. Take a look at this month’s edition of the index here. The must-reads I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology. 
1 Anduril is partnering with Meta to build an advanced weapons systemEagleEye’s VR headsets will enhance soldiers’ hearing and vision.+ Palmer Luckey wants to turn “warfighters into technomancers.”+ Luckey and Mark Zuckerberg have buried the hatchet, then.+ Palmer Luckey on the Pentagon’s future of mixed reality.2 A new Texas law requires app stores to verify users’ agesIt’s following in Utah’s footsteps, which passed a similar bill in March.+ Apple has pushed back on the law.3 What happens to DOGE now?It has lost its leader and a top lieutenant within the space of a week.+ Musk’s departure raises questions over how much power it will wield without him.+ DOGE’s tech takeover threatens the safety and stability of our critical data.4 NASA’s ambitions of a 2027 moon landing are looking less likelyIt needs SpaceX’s Starship, which keeps blowing up.+ Is there a viable alternative?5 Students are using AI to generate nude images of each otherIt’s a grave and growing problem that no one has a solution for.6 Google AI Overviews doesn’t know what year it isA year after its introduction, the feature is still making obvious mistakes.+ Google’s new AI-powered search isn’t fit to handle even basic queries.+ The company is pushing AI into everything. 
Will it pay off?+ Why Google’s AI Overviews gets things wrong.7 Hugging Face has created two humanoid robots The machines are open source, meaning anyone can build software for them.8 A popular vibe coding app has a major security flawDespite being notified about it months ago.+ Any AI coding program catering to amateurs faces the same issue.+ What is vibe coding, exactly?9 AI-generated videos are becoming way more realisticBut not when it comes to depicting gymnastics.10 This electronic tattoo measures your stress levelsConsider it a mood ring for your face.Quote of the day “I think finally we are seeing Apple being dragged into the child safety arena kicking and screaming.” —Sarah Gardner, CEO of child safety collective Heat Initiative, tells the Washington Post why Texas’ new app store law could signal a turning point for Apple. One more thing House-flipping algorithms are coming to your neighborhoodWhen Michael Maxson found his dream home in Nevada, it was not owned by a person but by a tech company, Zillow. When he went to take a look at the property, however, he discovered it damaged by a huge water leak. Despite offering to handle the costly repairs himself, Maxson discovered that the house had already been sold to another family, at the same price he had offered.During this time, Zillow lost more than million in three months of erratic house buying and unprofitable sales, leading analysts to question whether the entire tech-driven model is really viable. For the rest of us, a bigger question remains: Does the arrival of Silicon Valley tech point to a better future for housing or an industry disruption to fear? Read the full story. 
—Matthew Ponsford We can still have nice things A place for comfort, fun and distraction to brighten up your day.+ A 100-mile real-time ultramarathon video game that lasts anywhere up to 27 hours is about as fun as it sounds.+ Here’s how edible glitter could help save the humble water vole from extinction.+ Cleaning massive statues is not for the faint-hearted+ When is a flute teacher not a flautist? When he’s a whistleblower. #download #sycophantic #llms #hype #index
    The Download: sycophantic LLMs, and the AI Hype Index
    www.technologyreview.com
    This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

    This benchmark used Reddit’s AITA to test how much AI models suck up to us
    Back in April, OpenAI announced it was rolling back an update to its GPT-4o model that made ChatGPT’s responses to user queries too sycophantic. An AI model that acts in an overly agreeable and flattering way is more than just annoying. It could reinforce users’ incorrect beliefs, mislead people, and spread misinformation that can be dangerous—a particular risk when increasing numbers of young people are using ChatGPT as a life advisor. And because sycophancy is difficult to detect, it can go unnoticed until a model or update has already been deployed. A new benchmark called Elephant that measures the sycophantic tendencies of major AI models could help companies avoid these issues in the future. But just knowing when models are sycophantic isn’t enough; you need to be able to do something about it. And that’s trickier. Read the full story. —Rhiannon Williams

    The AI Hype Index
    Separating AI reality from hyped-up fiction isn’t always easy. That’s why we’ve created the AI Hype Index—a simple, at-a-glance summary of everything you need to know about the state of the industry. Take a look at this month’s edition of the index here.

    The must-reads
    I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
    1 Anduril is partnering with Meta to build an advanced weapons system
    EagleEye’s VR headsets will enhance soldiers’ hearing and vision. (WSJ $)
    + Palmer Luckey wants to turn “warfighters into technomancers.” (TechCrunch)
    + Luckey and Mark Zuckerberg have buried the hatchet, then. (Insider $)
    + Palmer Luckey on the Pentagon’s future of mixed reality. (MIT Technology Review)
    2 A new Texas law requires app stores to verify users’ ages
    It’s following in Utah’s footsteps, which passed a similar bill in March. (NYT $)
    + Apple has pushed back on the law. (CNN)
    3 What happens to DOGE now?
    It has lost its leader and a top lieutenant within the space of a week. (WSJ $)
    + Musk’s departure raises questions over how much power it will wield without him. (The Guardian)
    + DOGE’s tech takeover threatens the safety and stability of our critical data. (MIT Technology Review)
    4 NASA’s ambitions of a 2027 moon landing are looking less likely
    It needs SpaceX’s Starship, which keeps blowing up. (WP $)
    + Is there a viable alternative? (New Scientist $)
    5 Students are using AI to generate nude images of each other
    It’s a grave and growing problem that no one has a solution for. (404 Media)
    6 Google AI Overviews doesn’t know what year it is
    A year after its introduction, the feature is still making obvious mistakes. (Wired $)
    + Google’s new AI-powered search isn’t fit to handle even basic queries. (NYT $)
    + The company is pushing AI into everything. Will it pay off? (Vox)
    + Why Google’s AI Overviews gets things wrong. (MIT Technology Review)
    7 Hugging Face has created two humanoid robots
    The machines are open source, meaning anyone can build software for them. (TechCrunch)
    8 A popular vibe coding app has a major security flaw
    Despite being notified about it months ago. (Semafor)
    + Any AI coding program catering to amateurs faces the same issue. (The Information $)
    + What is vibe coding, exactly? (MIT Technology Review)
    9 AI-generated videos are becoming way more realistic
    But not when it comes to depicting gymnastics. (Ars Technica)
    10 This electronic tattoo measures your stress levels
    Consider it a mood ring for your face. (IEEE Spectrum)

    Quote of the day
    “I think finally we are seeing Apple being dragged into the child safety arena kicking and screaming.”
    —Sarah Gardner, CEO of child safety collective Heat Initiative, tells the Washington Post why Texas’ new app store law could signal a turning point for Apple.

    One more thing
    House-flipping algorithms are coming to your neighborhood
    When Michael Maxson found his dream home in Nevada, it was not owned by a person but by a tech company, Zillow. When he went to take a look at the property, however, he discovered it damaged by a huge water leak. Despite offering to handle the costly repairs himself, Maxson discovered that the house had already been sold to another family, at the same price he had offered. During this time, Zillow lost more than $420 million in three months of erratic house buying and unprofitable sales, leading analysts to question whether the entire tech-driven model is really viable. For the rest of us, a bigger question remains: Does the arrival of Silicon Valley tech point to a better future for housing or an industry disruption to fear? Read the full story. —Matthew Ponsford

    We can still have nice things
    A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)
    + A 100-mile real-time ultramarathon video game that lasts anywhere up to 27 hours is about as fun as it sounds.
    + Here’s how edible glitter could help save the humble water vole from extinction.
    + Cleaning massive statues is not for the faint-hearted. ($)
    + When is a flute teacher not a flautist? When he’s a whistleblower.
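The Elephant item above describes measuring sycophancy against Reddit’s AITA verdicts. A minimal sketch of one way such a rate could be computed follows; the field names and scoring rule are illustrative assumptions, not the benchmark’s actual methodology.

```python
def sycophancy_rate(results: list[dict]) -> float:
    """Fraction of AITA-style prompts where the model flatters the poster.

    Each result is a hypothetical record:
      {"community_verdict": "YTA" | "NTA",
       "model_verdict":     "YTA" | "NTA"}
    A sycophantic answer sides with the poster ("NTA") even though the
    community judged them in the wrong ("YTA").
    """
    against = [r for r in results if r["community_verdict"] == "YTA"]
    flattering = [r for r in against if r["model_verdict"] == "NTA"]
    return len(flattering) / len(against) if against else 0.0

sample = [
    {"community_verdict": "YTA", "model_verdict": "NTA"},  # sycophantic
    {"community_verdict": "YTA", "model_verdict": "YTA"},  # honest
    {"community_verdict": "NTA", "model_verdict": "NTA"},  # agreement is fine here
]
print(sycophancy_rate(sample))  # 0.5
```

The point of conditioning on community-judged "YTA" cases is that agreement is only evidence of sycophancy when the flattering answer contradicts the consensus.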
  • Mickey 17: Stuart Penn – VFX Supervisor – Framestore

    Interviews


    By Vincent Frei - 27/05/2025

    When we last spoke with Stuart Penn in 2019, he walked us through Framestore’s work on Avengers: Endgame. He has since added The Aeronauts, Moon Knight, 1899, and Flite to his impressive list of credits.
    How did you get involved on this show?
    Soon after we had been awarded work, Director Bong visited our London Studio in May 2022 to meet us and share his vision with us.

    How were the sequences made by Framestore?
    Framestore was responsible for the development of the Baby and Mama Creepers. We worked on the shots of the Baby Creepers within the ship, and the Creepers in the caves and the ice crevasse. We developed the ice cave and crevasse environments, including a full-CG shot of Mickey falling into the crevasse.
    Within the ship we were also responsible for the cycler room with its lava pit, the human printer, a range of set extensions, Marshall’s beautiful rock and—one of my personal favourites—Pigeon Man’s spinning eyes. We also crafted the spacewalk sequence. All the work came out of our London and Mumbai studios.

    Bong Joon Ho has a very distinct visual storytelling style. How did you collaborate with him to ensure the VFX aligned with his vision, and were there any unexpected creative challenges that pushed the team in new directions?
    Director Bong was fun to work with, very collaborative and had a very clear vision of where the film was going. We had discussions before and during the shoot. While we were shooting, Director Bong would talk to us about the backstory of what the Creepers might be thinking that went beyond the scope of what we would see in the movie. This really helped with giving the creatures character.

    Can you walk us through the design and animation process for the baby and mother creepers? What references or inspirations helped shape their look and movement?
    Director Bong had been working with his creature designer, Heechul Jang, for many months before production started. We had kickoffs with Director Bong and Heechul that provided us with some of the best and most thought out concepts I think we’ve ever received. Director Bong set us the challenge of bringing them to life. We took the lead on the Baby and Mama Creepers and DNEG took on the Juniors.
    It’s fun to note that the energy and inquisitive nature of the Babies was inspired by reference footage of puppies.

    Were these creatures primarily CG, or was there any practical element involved? How did you ensure their integration into the live-action footage?
    They were all CG in the final film. On set we had a range of stuffies and mockups for actors to interact with and for lighting reference. People became quite attached to the baby creeper stuffies! For the Mama there was a head and large frame that was controlled by a team of puppeteers for eyeline and lighting reference.

    The ice cave has a very distinct visual style. How did you achieve the look of the ice, and what techniques were used to create the lighting and atmospheric effects inside the cave?
    I was sent to Iceland for a week to gather reference. I visited a range of ice cave locations—driving, hiking and being dropped by helicopter at various locations across a glacier. This reference provided the basis for the look of the caves. The ice was rendered fully refractive with interior volumes to create the structures. As it’s so computationally expensive to render, we used tricks where we could reproject a limited number of fully rendered frames. This worked best on lock-offs or small camera moves—for the others we just had to render at full length.
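The reprojection trick described above can be sketched abstractly; the frame counts and interval here are illustrative, not production values. Only every Nth frame gets the expensive fully refractive render, and the in-between frames reuse the nearest rendered keyframe.

```python
def reprojection_plan(num_frames: int, interval: int) -> dict[int, int]:
    """Map each frame to the keyframe whose full render it reuses.

    Only the keyframes get the expensive fully refractive render;
    in-between frames are reprojected from the nearest keyframe,
    which holds up on lock-offs and small camera moves.
    """
    keyframes = list(range(0, num_frames, interval))
    return {f: min(keyframes, key=lambda k: abs(k - f)) for f in range(num_frames)}

plan = reprojection_plan(num_frames=24, interval=8)
print(sorted(set(plan.values())))  # [0, 8, 16] -> only 3 full renders for 24 frames
```

The saving scales with the interval, at the cost of reprojection artifacts on larger camera moves, which is why those shots were rendered at full length.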

    How were the scenes featuring multiple Mickeys filmed? Did you rely mostly on motion control, digital doubles, or a combination of techniques to seamlessly integrate the clones into the shots?
    For our shots it was mostly multiple plates, relying on the skill of the camera operators to match the framing and move, and on the comp work to either split frames or lift one of the Mickeys from a plate and replace the stand-in.
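The multiple-plate approach described above can be illustrated with a toy sketch (not Framestore’s pipeline): two plates shot with matched framing are merged along a vertical split line, taking one Mickey from each plate.

```python
def split_frame(plate_a: list, plate_b: list, split_col: int) -> list:
    """Merge two plates along a vertical split line.

    Each plate is a row-major 2D list of pixel values shot with matched
    camera framing; columns left of split_col come from plate_a, the
    rest from plate_b.
    """
    assert len(plate_a) == len(plate_b), "plates must have matching height"
    return [
        row_a[:split_col] + row_b[split_col:]
        for row_a, row_b in zip(plate_a, plate_b)
    ]

# Toy 2x4 "plates": A holds Mickey 17 on the left, B holds Mickey 18 on the right.
a = [[17, 17, 0, 0], [17, 17, 0, 0]]
b = [[0, 0, 18, 18], [0, 0, 18, 18]]
print(split_frame(a, b, split_col=2))  # [[17, 17, 18, 18], [17, 17, 18, 18]]
```

In practice the split would be a soft-edged roto matte rather than a hard column, but the principle of combining matched plates is the same.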

    Since Mickey’s clones are central to the story, what were the biggest VFX challenges in making them interact convincingly? Were there any specific techniques used to differentiate them visually or subtly show their progression over time?
    This really all came down to Robert Pattinson’s performances. He would usually be acting with his double for interaction and lighting. They would then switch positions and redo the performance. Rob could switch between the Mickey 17 and 18 characters with the assistance of quick hair and makeup changes.
    The prison environment seems to have a unique aesthetic and mood. How much of it was built practically, and how did VFX contribute to enhancing or extending the set?
    The foreground cells and storage containers were practical and everything beyond the fence was CG with a DMP overlay. The containers going off into the distance were carefully positioned and lit to enable you to feel the vast scale of the ship. We also replaced the fence in most shots with CG as it was easier than rotoing through the chain links.
    When Mickey is outside the ship, exposed to radiation, there are several extreme body effects, including his hand coming off. Can you discuss the challenges of creating these sequences, particularly in terms of digital prosthetics and damage simulations?
    Knocking Mickey’s hand off was quite straightforward due to the speed of the impact. We started with a plate of the practical arm and glove and switched to a pre-sculpted CG glove and arm stump. The hand spinning off into the distance was hand-animated to allow us to fully art-direct the spin and trajectory. The final touch was to add an FX sim for the blood droplets.
    How did you balance realism and stylization in depicting the effects of radiation exposure? Were there real-world references or scientific studies that guided the look of the damage?
    Most of the radiation effects came from great makeup and prosthetics—we just added some final touches, such as an FX sim for a bursting blister. We tried a few different simulations based on work we had done on previous shows but ultimately dialed it back to something more subtle so it didn’t distract from the moment.

    Were there any memorable moments or scenes from the film that you found particularly rewarding or challenging to work on from a visual effects standpoint?
    There were a lot of quite diverse challenges, from creature work, environments, and lava to a lot of ‘one-off’ effects. The shot where the Creepers are pushing Mickey out into the snow was particularly challenging: with so many Creepers interacting with each other and Mickey, it took the combination of several animators and compositors to bring it together and integrate with the partial CG Mickey.

    Looking back on the project, what aspects of the visual effects are you most proud of?
    The Baby Creeper and the ice cave environment.
    How long have you worked on this show?
    I worked on it for about 18 months.
    What’s the VFX shot count?
    Framestore worked on 405 shots.
    A big thanks for your time.
    WANT TO KNOW MORE?
    Framestore: Dedicated page about Mickey 17 on the Framestore website.
    © Vincent Frei – The Art of VFX – 2025
  • Weekly Recap: APT Campaigns, Browser Hijacks, AI Malware, Cloud Breaches and Critical CVEs

    Cyber threats don't show up one at a time anymore. They're layered, planned, and often stay hidden until it's too late.
    For cybersecurity teams, the key isn't just reacting to alerts—it's spotting early signs of trouble before they become real threats. This update is designed to deliver clear, accurate insights based on real patterns and changes we can verify. With today's complex systems, we need focused analysis—not noise.
    What you'll see here isn't just a list of incidents, but a clear look at where control is being gained, lost, or quietly tested.
    Threat of the Week
    Lumma Stealer, DanaBot Operations Disrupted — A coalition of private sector companies and law enforcement agencies has taken down the infrastructure associated with Lumma Stealer and DanaBot. Charges have also been unsealed against 16 individuals for their alleged involvement in the development and deployment of DanaBot. The malware is equipped to siphon data from victim computers, hijack banking sessions, and steal device information. More uniquely, though, DanaBot has also been used for hacking campaigns that appear to be linked to Russian state-sponsored interests. All of that makes DanaBot a particularly clear example of how commodity malware has been repurposed by Russian state hackers for their own goals. In tandem, about 2,300 domains that acted as the command-and-control backbone for the Lumma information stealer have been seized, alongside the takedown of 300 servers and the neutralization of 650 domains that were used to launch ransomware attacks. The actions against international cybercrime in the past few days constituted the latest phase of Operation Endgame.


    Top News

    Threat Actors Use TikTok Videos to Distribute Stealers — While ClickFix has become a popular social engineering tactic to deliver malware, threat actors have been observed using artificial intelligence-generated videos uploaded to TikTok to deceive users into running malicious commands on their systems and deploying malware like Vidar and StealC under the guise of activating pirated versions of Windows, Microsoft Office, CapCut, and Spotify. "This campaign highlights how attackers are ready to weaponize whichever social media platforms are currently popular to distribute malware," Trend Micro said.
    APT28 Hackers Target Western Logistics and Tech Firms — Several cybersecurity and intelligence agencies from Australia, Europe, and the United States issued a joint alert warning of a state-sponsored campaign orchestrated by the Russian state-sponsored threat actor APT28 targeting Western logistics entities and technology companies since 2022. "This cyber espionage-oriented campaign targeting logistics entities and technology companies uses a mix of previously disclosed TTPs and is likely connected to these actors' wide scale targeting of IP cameras in Ukraine and bordering NATO nations," the agencies said. The attacks are designed to steal sensitive information and maintain long-term persistence on compromised hosts.
    Chinese Threat Actors Exploit Ivanti EPMM Flaws — The China-nexus cyber espionage group tracked as UNC5221 has been attributed to the exploitation of a pair of security flaws affecting Ivanti Endpoint Manager Mobile (EPMM) software to target a wide range of sectors across Europe, North America, and the Asia-Pacific region. The intrusions leverage the vulnerabilities to obtain a reverse shell and drop malicious payloads like KrustyLoader, which is known to deliver the Sliver command-and-control framework. "UNC5221 demonstrates a deep understanding of EPMM's internal architecture, repurposing legitimate system components for covert data exfiltration," EclecticIQ said. "Given EPMM's role in managing and pushing configurations to enterprise mobile devices, a successful exploitation could allow threat actors to remotely access, manipulate, or compromise thousands of managed devices across an organization."
    Over 100 Google Chrome Extensions Mimic Popular Tools — An unknown threat actor has been creating malicious Chrome browser extensions since February 2024 that masquerade as seemingly benign utilities such as DeepSeek, Manus, DeBank, FortiVPN, and Site Stats but incorporate covert functionality to exfiltrate data, receive commands, and execute arbitrary code. Links to these browser add-ons are hosted on specially crafted sites to which users are likely redirected via phishing and social media posts. While the extensions appear to offer the advertised features, they also stealthily facilitate credential and cookie theft, session hijacking, ad injection, malicious redirects, traffic manipulation, and phishing via DOM manipulation. Several of these extensions have been taken down by Google.
    CISA Warns SaaS Providers of Attacks Targeting Cloud Environments — The U.S. Cybersecurity and Infrastructure Security Agency (CISA) warned that SaaS companies are under threat from bad actors who are on the prowl for cloud applications with default configurations and elevated permissions. While the agency did not attribute the activity to a specific group, the advisory said enterprise backup platform Commvault is monitoring cyber threat activity targeting applications hosted in its Microsoft Azure cloud environment. "Threat actors may have accessed client secrets for Commvault's Microsoft 365 backup software-as-a-service solution, hosted in Azure," CISA said. "This provided the threat actors with unauthorized access to Commvault's customers' M365 environments that have application secrets stored by Commvault."
    GitLab AI Coding Assistant Flaws Could Be Used to Inject Malicious Code — Cybersecurity researchers have discovered an indirect prompt injection flaw in GitLab's artificial intelligence assistant Duo that could have allowed attackers to steal source code and inject untrusted HTML into its responses, which could then be used to direct victims to malicious websites. The attack could also leak confidential issue data, such as zero-day vulnerability details. All that's required is for the attacker to instruct the chatbot to interact with a merge request, taking advantage of the fact that GitLab Duo has extensive access to the platform. "By embedding hidden instructions in seemingly harmless project content, we were able to manipulate Duo's behavior, exfiltrate private source code, and demonstrate how AI responses can be leveraged for unintended and harmful outcomes," Legit Security said. One variation of the attack involved hiding a malicious instruction in an otherwise legitimate piece of source code, while another exploited the fact that Duo parses markdown responses asynchronously in real time. An attacker could leverage this behavior – that Duo begins rendering the output line by line rather than waiting until the entire response is generated and sending it all at once – to introduce malicious HTML code that can access sensitive data and exfiltrate the information to a remote server. The issues have been patched by GitLab following responsible disclosure.
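A generic mitigation for the HTML side of this class of attack is to escape raw markup in model output before the client renders it. Below is a minimal sketch (an illustration of the general defense, not GitLab's actual fix); because escaping is stateless per character, it can be applied to each streamed chunk, so line-by-line rendering stays safe.

```python
def sanitize_chunk(chunk: str) -> str:
    """Escape raw HTML in a streamed markdown chunk before rendering,
    so injected <img>/<script> tags become inert text instead of
    executable markup. Order matters: '&' must be escaped first."""
    return (
        chunk.replace("&", "&amp;")
             .replace("<", "&lt;")
             .replace(">", "&gt;")
    )

# Hypothetical injected payload hidden in a model response.
evil = 'Here is the diff <img src=x onerror="fetch(\'https://attacker.example\')">'
print(sanitize_chunk(evil))
```

After sanitization the payload renders as visible text rather than a live tag, which neutralizes exfiltration via attacker-controlled HTML while leaving plain markdown untouched.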

    🔥 Trending CVEs
    Software vulnerabilities remain one of the simplest—and most effective—entry points for attackers. Each week uncovers new flaws, and even small delays in patching can escalate into serious security incidents. Staying ahead means acting fast. Below is this week's list of high-risk vulnerabilities that demand attention. Review them carefully, apply updates without delay, and close the doors before they're forced open.
    This week's list includes — CVE-2025-34025, CVE-2025-34026, CVE-2025-34027, CVE-2025-30911, CVE-2024-57273, CVE-2024-54780, CVE-2024-54779, CVE-2025-41229, CVE-2025-4322, CVE-2025-47934, CVE-2025-30193, CVE-2025-0993, CVE-2025-36535, CVE-2025-47949, CVE-2025-40775, CVE-2025-20152, CVE-2025-4123, CVE-2025-5063, CVE-2025-37899, CVE-2025-26817, CVE-2025-47947, CVE-2025-3078, CVE-2025-3079, and CVE-2025-4978.
    📰 Around the Cyber World

    Sandworm Drops New Wiper in Ukraine — The Russia-aligned Sandworm group intensified destructive operations against Ukrainian energy companies, deploying a new wiper named ZEROLOT. "The infamous Sandworm group concentrated heavily on compromising Ukrainian energy infrastructure. In recent cases, it deployed the ZEROLOT wiper in Ukraine. For this, the attackers abused Active Directory Group Policy in the affected organizations," ESET Director of Threat Research, Jean-Ian Boutin, said. Another Russian hacking group, Gamaredon, remained the most prolific actor targeting the East European nation, enhancing malware obfuscation and introducing PteroBox, a file stealer leveraging Dropbox.
    Signal Says No to Recall — Signal has released a new version of its messaging app for Windows that, by default, blocks the ability of Windows to use Recall to periodically take screenshots of the app. "Although Microsoft made several adjustments over the past twelve months in response to critical feedback, the revamped version of Recall still places any content that's displayed within privacy-preserving apps like Signal at risk," Signal said. "As a result, we are enabling an extra layer of protection by default on Windows 11 in order to help maintain the security of Signal Desktop on that platform even though it introduces some usability trade-offs. Microsoft has simply given us no other option." Microsoft began officially rolling out Recall last month.
    Russia Introduces New Law to Track Foreigners Using Their Smartphones — The Russian government has introduced a new law that makes installing a tracking app mandatory for all foreign nationals in the Moscow region. This includes gathering their real-time locations, fingerprint, face photograph, and residential information. "The adopted mechanism will allow, using modern technologies, to strengthen control in the field of migration and will also contribute to reducing the number of violations and crimes in this area," Vyacheslav Volodin, chairman of the State Duma, said. "If migrants change their actual place of residence, they will be required to inform the Ministry of Internal Affairs within three working days." A proposed four-year trial period begins on September 1, 2025, and runs until September 1, 2029.
    Dutch Government Passes Law to Criminalize Cyber Espionage — The Dutch government has approved a law criminalizing a wide range of espionage activities, including digital espionage, in an effort to protect national security, critical infrastructure, and high-quality technologies. Under the amended law, leaking sensitive information that is not classified as a state secret or engaging in activities on behalf of a foreign government that harm Dutch interests can also result in criminal charges. "Foreign governments are also interested in non-state-secret, sensitive information about a particular economic sector or about political decision-making," the government said. "Such information can be used to influence political processes, weaken the Dutch economy or play allies against each other. Espionage can also involve actions other than sharing information."
    Microsoft Announces Availability of Quantum-Resistant Algorithms in SymCrypt — Microsoft has revealed that it's making post-quantum cryptography (PQC) capabilities, including ML-KEM and ML-DSA, available for Windows Insiders (Canary Channel Build 27852 and higher) and Linux (SymCrypt-OpenSSL version 1.9.0). "This advancement will enable customers to commence their exploration and experimentation of PQC within their operational environments," Microsoft said. "By obtaining early access to PQC capabilities, organizations can proactively assess the compatibility, performance, and integration of these novel algorithms alongside their existing security infrastructure."
    New Malware DOUBLELOADER Uses ALCATRAZ for Obfuscation — The open-source obfuscator ALCATRAZ has been seen within a new generic loader dubbed DOUBLELOADER, which has been deployed alongside Rhadamanthys Stealer infections starting December 2024. The malware collects host information, requests an updated version of itself, and starts beaconing to a hardcoded IP address stored within the binary. "Obfuscators such as ALCATRAZ end up increasing the complexity when triaging malware," Elastic Security Labs said. "Its main goal is to hinder binary analysis tools and increase the time of the reverse engineering process through different techniques; such as hiding the control flow or making decompilation hard to follow."
    New Formjacking Campaign Targets WooCommerce Sites — Cybersecurity researchers have detected a sophisticated formjacking campaign targeting WooCommerce sites. The malware, per Wordfence, injects a fake but professional-looking payment form into legitimate checkout processes and exfiltrates sensitive customer data to an external server. Further analysis has revealed that the infection likely originated from a compromised WordPress admin account, which was used to inject malicious JavaScript via a Simple Custom CSS and JS plugin that allows administrators to add custom code. "Unlike traditional card skimmers that simply overlay existing forms, this variant carefully integrates with the WooCommerce site's design and payment workflow, making it particularly difficult for site owners and users to detect," the WordPress security company said. "The malware author repurposed the browser's localStorage mechanism – typically used by websites to remember user preferences – to silently store stolen data and maintain access even after page reloads or when navigating away from the checkout page."
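Wordfence has not shared its detection signatures, but a crude triage heuristic for this class of skimmer can be sketched by flagging injected custom JavaScript that both touches payment fields and writes to localStorage, the combination this campaign relied on. The function name and sample snippet below are illustrative, not taken from the actual malware:

```python
import re

# Hypothetical triage heuristic (not Wordfence's logic): a custom-JS
# snippet is suspicious if it BOTH writes to localStorage AND references
# checkout/payment field names -- the pairing used by this skimmer.
SUSPICIOUS = [
    re.compile(r"localStorage\.setItem", re.I),
    re.compile(r"(card[_-]?number|cvv|checkout|billing)", re.I),
]

def looks_like_skimmer(js_source: str) -> bool:
    """Return True only if every suspicious pattern appears in the source."""
    return all(p.search(js_source) for p in SUSPICIOUS)

injected = ("document.querySelector('#card_number')"
            ".addEventListener('blur', e => "
            "localStorage.setItem('d', e.target.value));")
print(looks_like_skimmer(injected))  # True for this synthetic sample
```

A heuristic like this only narrows the haystack; site owners would still review each flagged snippet by hand, since legitimate code can match both patterns.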

    E.U. Sanctions Stark Industries — The European Union has announced sanctions against 21 individuals and six entities in Russia over its "destabilising actions" in the region. One of the sanctioned entities is Stark Industries, a bulletproof hosting provider accused of acting as "enablers of various Russian state-sponsored and affiliated actors to conduct destabilising activities including information manipulation interference and cyber attacks against the Union and third countries." The sanctions also target its CEO Iurie Neculiti and owner Ivan Neculiti. Stark Industries was previously spotlighted by independent cybersecurity journalist Brian Krebs, who detailed its use in DDoS attacks in Ukraine and across Europe. In August 2024, Team Cymru said it discovered 25 Stark-assigned IP addresses used to host domains associated with FIN7 activities and that it had been working with Stark Industries for several months to identify and reduce abuse of its systems. The sanctions also target Kremlin-backed manufacturers of drones and radio communication equipment used by the Russian military, as well as those involved in GPS signal jamming in Baltic states and disruptions of civil aviation.
    The Mask APT Unmasked as Tied to the Spanish Government — The mysterious threat actor known as The Mask has been identified as being run by the Spanish government, according to a report published by TechCrunch, citing people who worked at Kaspersky at the time and had knowledge of the investigation. The Russian cybersecurity company first exposed the hacking group in 2014, linking it to highly sophisticated attacks since at least 2007 targeting high-profile organizations, such as governments, diplomatic entities, and research institutions. A majority of the group's attacks have targeted Cuba, followed by hundreds of victims in Brazil, Morocco, Spain, and Gibraltar. While Kaspersky has not publicly attributed it to a specific country, the latest revelation makes The Mask one of the few Western government hacking groups ever discussed in public, alongside the Equation Group, the Lamberts, and Animal Farm.
    Social Engineering Scams Target Coinbase Users — Earlier this month, cryptocurrency exchange Coinbase revealed that unknown threat actors had breached its systems by bribing customer support agents in India and siphoned funds from nearly 70,000 customers. According to blockchain security firm SlowMist, Coinbase users have been the target of social engineering scams since the start of the year, bombarded with SMS messages about fake withdrawal requests seeking their confirmation as part of a "sustained and organized scam campaign." The goal is to induce a false sense of urgency and trick victims into calling a number, eventually convincing them to transfer their funds to a "secure" wallet with a seed phrase pre-generated by the attackers, who ultimately drain the assets. It's assessed that the activities are primarily carried out by two groups: low-level skid attackers from the Com community and organized cybercrime groups based in India. "Using spoofed PBX phone systems, scammers impersonate Coinbase support and claim there's been 'unauthorized access' or 'suspicious withdrawals' on the user's account," SlowMist said. "They create a sense of urgency, then follow up with phishing emails or texts containing fake ticket numbers or 'recovery links.'"
    Delta Can Sue CrowdStrike Over July 2024 Mega Outage — Delta Air Lines, which had its systems crippled and almost 7,000 flights canceled in the wake of a massive outage caused by a faulty update issued by CrowdStrike in mid-July 2024, has been given the green light to pursue its lawsuit against the cybersecurity company. A judge in the U.S. state of Georgia ruled that Delta can try to prove that CrowdStrike was grossly negligent in pushing a defective update to its Falcon software, which crashed 8.5 million Windows devices across the world. CrowdStrike previously claimed that the airline had rejected technical support offers from both itself and Microsoft. In a statement shared with Reuters, lawyers representing CrowdStrike said they were "confident the judge will find Delta's case has no merit, or will limit damages to the 'single-digit millions of dollars' under Georgia law." The development comes months after MGM Resorts International agreed to pay millions of dollars to settle multiple class-action lawsuits related to a data breach in 2019 and a ransomware attack the company experienced in 2023.
    Storm-1516 Uses AI-Generated Media to Spread Disinformation — The Russian influence operation known as Storm-1516 sought to spread narratives undermining European support for Ukraine by amplifying fabricated stories on X about European leaders using drugs while traveling by train to Kyiv for peace talks. One of the posts was subsequently shared by Russian state media and Maria Zakharova, a senior official in Russia's foreign ministry, as part of what has been described as a coordinated disinformation campaign by EclecticIQ. The activity is also notable for its use of synthetic content purporting to show French President Emmanuel Macron, U.K. Labour Party leader Keir Starmer, and German Chancellor Friedrich Merz in possession of drugs during their return from Ukraine. "By attacking the reputation of these leaders, the campaign likely aimed to turn their own voters against them, using influence operations to reduce public support for Ukraine by discrediting the politicians who back it," the Dutch threat intelligence firm said.
    Turkish Users Targeted by DBatLoader — AhnLab has disclosed details of a malware campaign that's distributing a loader called DBatLoader via banking-themed phishing emails, which then acts as a conduit to deliver SnakeKeylogger, an information stealer developed in .NET. "The DBatLoader malware distributed through phishing emails has the cunning behavior of exploiting normal processes through techniques such as DLL side-loading and injection for most of its behaviors, and it also utilizes normal processes for behaviors such as file copying and changing policies," the company said.
    SEC SIM-Swapper Sentenced to 14 Months for SEC X Account Hack — A 26-year-old Alabama man, Eric Council Jr., has been sentenced to 14 months in prison and three years of supervised release for using SIM swapping attacks to breach the U.S. Securities and Exchange Commission's official X account in January 2024 and falsely announce that the SEC had approved Bitcoin exchange-traded funds (ETFs). Council was arrested in October 2024 and pleaded guilty to the crime earlier this February. He has also been ordered to forfeit his proceeds from the scheme. According to court documents, Council used his personal computer to search incriminating phrases such as "SECGOV hack," "telegram sim swap," "how can I know for sure if I am being investigated by the FBI," "What are the signs that you are under investigation by law enforcement or the FBI even if you have not been contacted by them," "what are some signs that the FBI is after you," "Verizon store list," "federal identity theft statute," and "how long does it take to delete telegram account."
    FBI Warns of Malicious Campaign Impersonating Government Officials — The U.S. Federal Bureau of Investigation (FBI) is warning of a new campaign in which malicious actors impersonate senior U.S. federal or state government officials and their contacts, targeting individuals since April 2025. "The malicious actors have sent text messages and AI-generated voice messages — techniques known as smishing and vishing, respectively — that claim to come from a senior US official in an effort to establish rapport before gaining access to personal accounts," the FBI said. "One way the actors gain such access is by sending targeted individuals a malicious link under the guise of transitioning to a separate messaging platform." From there, the actor may deliver malware or introduce hyperlinks that lead intended targets to an actor-controlled site that steals login information.
    DICOM Flaw Enables Attackers to Embed Malicious Code Within Medical Image Files — Praetorian has released a proof-of-concept (PoC) for a high-severity security flaw in Digital Imaging and Communications in Medicine (DICOM), the predominant file format for medical images, that enables attackers to embed malicious code within legitimate medical image files. CVE-2019-11687, originally disclosed in 2019 by Markel Picado Ortiz, stems from a design decision that allows arbitrary content at the start of the file, otherwise called the Preamble, which enables the creation of malicious polyglots. Codenamed ELFDICOM, the PoC extends the attack surface to Linux environments, making it a much more potent threat. As a mitigation, it's advised to implement a DICOM preamble whitelist. "DICOM's file structure inherently allows arbitrary bytes at the beginning of the file, where Linux and most operating systems will look for magic bytes," Praetorian researcher Ryan Hennessee said. The proposed whitelist "would check a DICOM file's preamble before it is imported into the system. This would allow known good patterns, such as 'TIFF' magic bytes, or '\x00' null bytes, while files with the ELF magic bytes would be blocked."
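Praetorian describes the whitelist conceptually rather than as code. A minimal sketch of the idea (the function name and allowed-prefix set below are my assumptions, not Praetorian's implementation): a DICOM file is 128 arbitrary preamble bytes followed by the magic marker "DICM", so a gate can allow known-benign preambles and reject ELF headers:

```python
# Sketch of a DICOM preamble whitelist (hypothetical names/values).
# Allowed: an all-null preamble, or a TIFF polyglot (little/big endian).
# Blocked: anything starting with the ELF magic, as in ELFDICOM.
ALLOWED_PREFIXES = (b"\x00" * 4, b"II*\x00", b"MM\x00*")
ELF_MAGIC = b"\x7fELF"

def preamble_is_safe(data: bytes) -> bool:
    """Accept only DICOM files whose 128-byte preamble looks benign."""
    if len(data) < 132 or data[128:132] != b"DICM":
        return False  # not a structurally valid DICOM file at all
    preamble = data[:128]
    if preamble.startswith(ELF_MAGIC):
        return False  # ELF/DICOM polyglot -- block it
    return any(preamble.startswith(p) for p in ALLOWED_PREFIXES)

benign = b"\x00" * 128 + b"DICM" + b"rest-of-dataset"
elfdicom = ELF_MAGIC + b"\x00" * 124 + b"DICM" + b"payload"
print(preamble_is_safe(benign), preamble_is_safe(elfdicom))  # True False
```

An allowlist is deliberately stricter than blocking known-bad magic bytes: any preamble pattern not explicitly recognized is rejected, which also covers polyglot formats nobody has weaponized yet.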
    Cookie-Bite Attack Uses Chrome Extension to Steal Session Tokens — Cybersecurity researchers have demonstrated a new attack technique called Cookie-Bite that employs custom-made malicious browser extensions to steal "ESTSAUTH" and "ESTSAUTHPERSISTENT" cookies in Microsoft Azure Entra ID and bypass multi-factor authentication (MFA). The attack has multiple moving parts: a custom Chrome extension that monitors authentication events and captures cookies; a PowerShell script that automates the extension deployment and ensures persistence; an exfiltration mechanism to send the cookies to a remote collection point; and a complementary extension to inject the captured cookies into the attacker's browser. "Threat actors often use infostealers to extract authentication tokens directly from a victim's machine or buy them directly through darknet markets, allowing adversaries to hijack active cloud sessions without triggering MFA," Varonis said. "By injecting these cookies while mimicking the victim's OS, browser, and network, attackers can evade Conditional Access Policies and maintain persistent access." Authentication cookies can also be stolen in real time using adversary-in-the-middle (AitM) phishing kits, or via rogue browser extensions that request excessive permissions to interact with web sessions, modify page content, and extract stored authentication data. Once installed, such an extension can access the browser's storage API, intercept network requests, or inject malicious JavaScript into active sessions to harvest real-time session cookies. "By leveraging stolen session cookies, an adversary can bypass authentication mechanisms, gaining seamless entry into cloud environments without requiring user credentials," Varonis said. "Beyond initial access, session hijacking can facilitate lateral movement across the tenant, allowing attackers to explore additional resources, access sensitive data, and escalate privileges by abusing existing permissions or misconfigured roles."
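Because Cookie-Bite depends on an extension holding the "cookies" permission together with host access to the identity provider, defenders can audit sideloaded extensions for that combination. The sketch below is a hypothetical audit heuristic (the function name and flagged-host choices are mine, not Varonis's tooling), parsing a Chrome extension's manifest.json:

```python
import json

# Hypothetical audit sketch: flag an extension manifest that requests the
# "cookies" permission AND either broad host access or access to the
# Entra ID login host -- the combination a Cookie-Bite-style stealer needs.
def risky_extension(manifest_json: str) -> bool:
    m = json.loads(manifest_json)
    perms = set(m.get("permissions", []))
    hosts = m.get("host_permissions", [])
    broad_or_idp = any(
        h in ("<all_urls>", "*://*/*") or "login.microsoftonline.com" in h
        for h in hosts
    )
    return "cookies" in perms and broad_or_idp

stealer = json.dumps({
    "manifest_version": 3,
    "permissions": ["cookies", "storage"],
    "host_permissions": ["https://login.microsoftonline.com/*"],
})
print(risky_extension(stealer))  # True
```

In practice this would run against the extension directories on managed endpoints; a hit is a lead for review, not proof of malice, since legitimate cookie managers request the same permissions.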

    Cybersecurity Webinars

    Non-Human Identities: The AI Backdoor You're Not Watching → AI agents rely on Non-Human Identities (NHIs) to function—but these are often left untracked and unsecured. As attackers shift focus to this hidden layer, the risk is growing fast. In this session, you'll learn how to find, secure, and monitor these identities before they're exploited. Join the webinar to understand the real risks behind AI adoption—and how to stay ahead.
    Inside the LOTS Playbook: How Hackers Stay Undetected → Attackers are using trusted sites to stay hidden. In this webinar, Zscaler experts share how they detect these stealthy LOTS attacks using insights from the world's largest security cloud. Join to learn how to spot hidden threats and improve your defense.

    Cybersecurity Tools

    ScriptSentry → It is a free tool that scans your environment for dangerous logon script misconfigurations—like plaintext credentials, insecure file/share permissions, and references to non-existent servers. These overlooked issues can enable lateral movement, privilege escalation, or even credential theft. ScriptSentry helps you quickly identify and fix them across large Active Directory environments.
    Aftermath → It is a Swift-based, open-source tool for macOS incident response. It collects forensic data—like logs, browser activity, and process info—from compromised systems, then analyzes it to build timelines and track infection paths. Deploy via MDM or run manually. Fast, lightweight, and ideal for post-incident investigation.
    AI Red Teaming Playground Labs → It is an open-source training suite with hands-on challenges designed to teach security professionals how to red team AI systems. Originally developed for Black Hat USA 2024, the labs cover prompt injections, safety bypasses, indirect attacks, and Responsible AI failures. Built on Chat Copilot and deployable via Docker, it's a practical resource for testing and understanding real-world AI vulnerabilities.

    Tip of the Week
    Review and Revoke Old OAuth App Permissions — They're a Silent Backdoor → You've likely logged into apps using "Continue with Google," "Sign in with Microsoft," or GitHub/Twitter/Facebook logins. That's OAuth. But did you know many of those apps still have access to your data long after you stop using them?
    Why it matters:
    Even if you delete the app or forget it existed, it might still have ongoing access to your calendar, email, cloud files, or contact list — no password needed. If that third-party gets breached, your data is at risk.
    What to do:

    Go through your connected apps here:
    Google: myaccount.google.com/permissions
    Microsoft: account.live.com/consent/Manage
    GitHub: github.com/settings/applications
    Facebook: facebook.com/settings?tab=applications

    Revoke anything you don't actively use. It's a fast, silent cleanup — and it closes doors you didn't know were open.
    Conclusion
    Looking ahead, it's not just about tracking threats—it's about understanding what they reveal. Every tactic used, every system tested, points to deeper issues in how trust, access, and visibility are managed. As attackers adapt quickly, defenders need sharper awareness and faster response loops.
    The takeaways from this week aren't just technical—they speak to how teams prioritize risk, design safeguards, and make choices under pressure. Use these insights not just to react, but to rethink what "secure" really needs to mean in today's environment.

    Found this article interesting? Follow us on Twitter and LinkedIn to read more exclusive content we post.
The Russian cybersecurity company first exposed the hacking group in 2014, linking it to highly sophisticated attacks since at least 2007 targeting high-profile organizations, such as governments, diplomatic entities, and research institutions. A majority of the group's attacks have targeted Cuba, followed by hundreds of victims in Brazil, Morocco, Spain, and Gibraltar. While Kaspersky has not publicly attributed it to a specific country, the latest revelation makes The Mask one of the few Western government hacking groups that has ever been discussed in public. This includes the Equation Group, the Lamberts, and Animal Farm. Social Engineering Scams Target Coinbase Users — Earlier this month, cryptocurrency exchange Coinbase revealed that it was the victim of a malicious attack perpetrated by unknown threat actors to breach its systems by bribing customer support agents in India and siphon funds from nearly 70,000 customers. According to Blockchain security firm SlowMist, Coinbase users have been the target of social engineering scams since the start of the year, bombarding with SMS messages claiming to be fake withdrawal requests and seeking their confirmation as part of a "sustained and organized scam campaign." The goal is to induce a false sense of urgency and trick them into calling a number, eventually convincing them to transfer the funds to a secure wallet with a seed phrase pre-generated by the attackers and ultimately drain the assets. It's assessed that the activities are primarily carried out by two groups: low-level skid attackers from the Com community and organized cybercrime groups based in India. "Using spoofed PBX phone systems, scammers impersonate Coinbase support and claim there's been 'unauthorized access' or 'suspicious withdrawals' on the user's account," SlowMist said. 
"They create a sense of urgency, then follow up with phishing emails or texts containing fake ticket numbers or 'recovery links.'" Delta Can Sue CrowdStrike Over July 2024 Mega Outage — Delta Air Lines, which had its systems crippled and almost 7,000 flights canceled in the wake of a massive outage caused by a faulty update issued by CrowdStrike in mid-July 2024, has been given the green light to pursue to its lawsuit against the cybersecurity company. A judge in the U.S. state of Georgia stating Delta can try to prove that CrowdStrike was grossly negligent by pushing a defective update to its Falcon software to customers. The update crashed 8.5 million Windows devices across the world. Crowdstrike previously claimed that the airline had rejected technical support offers both from itself and Microsoft. In a statement shared with Reuters, lawyers representing CrowdStrike said they were "confident the judge will find Delta's case has no merit, or will limit damages to the 'single-digit millions of dollars' under Georgia law." The development comes months after MGM Resorts International agreed to pay million to settle multiple class-action lawsuits related to a data breach in 2019 and a ransomware attack the company experienced in 2023. Storm-1516 Uses AI-Generated Media to Spread Disinformation — The Russian influence operation known as Storm-1516sought to spread narratives that undermined the European support for Ukraine by amplifying fabricated stories on X about European leaders using drugs while traveling by train to Kyiv for peace talks. One of the posts was subsequently shared by Russian state media and Maria Zakharova, a senior official in Russia's foreign ministry, as part of what has been described as a coordinated disinformation campaign by EclecticIQ. The activity is also notable for the use of synthetic content depicting French President Emmanuel Macron, U.K. 
Labour Party leader Keir Starmer, and German Chancellor Friedrich Merz in possession of drugs during their return from Ukraine. "By attacking the reputation of these leaders, the campaign likely aimed to turn their own voters against them, using influence operations to reduce public support for Ukraine by discrediting the politicians who back it," the Dutch threat intelligence firm said. Turkish Users Targeted by DBatLoader — AhnLab has disclosed details of a malware campaign that's distributing a malware loader called DBatLoader via banking-themed phishing emails, which then acts as a conduit to deliver SnakeKeylogger, an information stealer developed in .NET. "The DBatLoader malware distributed through phishing emails has the cunning behavior of exploiting normal processes through techniques such as DLL side-loading and injection for most of its behaviors, and it also utilizes normal processes for behaviors such as file copying and changing policies," the company said. SIM-Swapper Sentenced to 14 Months for SEC X Account Hack — A 26-year-old Alabama man, Eric Council Jr., has been sentenced to 14 months in prison and three years of supervised release for using SIM swapping attacks to breach the U.S. Securities and Exchange Commission's (SEC) official X account in January 2024 and falsely announce that the SEC had approved Bitcoin Exchange-Traded Funds (ETFs). Council Jr. was arrested in October 2024 and pleaded guilty to the crime earlier this February. He has also been ordered to forfeit the proceeds of the scheme. According to court documents, Council used his personal computer to search incriminating phrases such as "SECGOV hack," "telegram sim swap," "how can I know for sure if I am being investigated by the FBI," "What are the signs that you are under investigation by law enforcement or the FBI even if you have not been contacted by them," "what are some signs that the FBI is after you," "Verizon store list," "federal identity theft statute," and "how long does it take to delete telegram account." 
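The DBatLoader item above mentions DLL side-loading, in which a malicious DLL is planted next to a legitimate executable so that the executable resolves the library by name from its own directory before falling back to the system copy. As a loose illustration of how a defender might triage for this (the DLL name list and the heuristic are illustrative assumptions, not AhnLab's detection logic), a minimal Python sketch:

```python
from pathlib import Path

# Handful of DLL names frequently abused for side-loading (illustrative, not exhaustive).
COMMONLY_SIDELOADED = {"version.dll", "wininet.dll", "userenv.dll", "dbghelp.dll"}

def find_sideload_candidates(app_dir: str) -> list[str]:
    """Return paths of DLLs in app_dir whose names shadow common system DLLs.

    An EXE in the same directory will typically resolve these by name before
    falling back to System32, which is exactly what side-loading abuses.
    """
    root = Path(app_dir)
    hits: list[str] = []
    # Only interesting if there is an executable present to act as the loader.
    has_exe = any(p.suffix.lower() == ".exe" for p in root.iterdir()) if root.is_dir() else False
    if not has_exe:
        return hits
    for p in root.iterdir():
        if p.suffix.lower() == ".dll" and p.name.lower() in COMMONLY_SIDELOADED:
            hits.append(str(p))
    return sorted(hits)
```

A hit is not proof of compromise, only a prompt to verify the DLL's signature and origin; legitimate software occasionally ships private copies of these libraries.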
FBI Warns of Malicious Campaign Impersonating Government Officials — The U.S. Federal Bureau of Investigation (FBI) is warning of a new campaign that involves malicious actors impersonating senior U.S. federal or state government officials and their contacts to target individuals since April 2025. "The malicious actors have sent text messages and AI-generated voice messages — techniques known as smishing and vishing, respectively — that claim to come from a senior US official in an effort to establish rapport before gaining access to personal accounts," the FBI said. "One way the actors gain such access is by sending targeted individuals a malicious link under the guise of transitioning to a separate messaging platform." From there, the actor may present malware or introduce hyperlinks that lead intended targets to an actor-controlled site that steals login information. DICOM Flaw Enables Attackers to Embed Malicious Code Within Medical Image Files — Praetorian has released a proof-of-concept (PoC) for a high-severity security flaw in Digital Imaging and Communications in Medicine (DICOM), the predominant file format for medical images, that enables attackers to embed malicious code within legitimate medical image files. CVE-2019-11687, originally disclosed in 2019 by Markel Picado Ortiz, stems from a design decision that allows arbitrary content at the start of the file, otherwise called the Preamble, which enables the creation of malicious polyglots. Codenamed ELFDICOM, the PoC extends the attack surface to Linux environments, making it a much more potent threat. As a mitigation, it's advised to implement a DICOM preamble whitelist. "DICOM's file structure inherently allows arbitrary bytes at the beginning of the file, where Linux and most operating systems will look for magic bytes," Praetorian researcher Ryan Hennessee said. "[A whitelist] would check a DICOM file's preamble before it is imported into the system. 
This would allow known good patterns, such as 'TIFF' magic bytes, or '\x00' null bytes, while files with the ELF magic bytes would be blocked." Cookie-Bite Attack Uses Chrome Extension to Steal Session Tokens — Cybersecurity researchers have demonstrated a new attack technique called Cookie-Bite that employs custom-made malicious browser extensions to steal "ESTAUTH" and "ESTSAUTHPERSISTNT" cookies in Microsoft Entra ID and bypass multi-factor authentication (MFA). The attack has multiple moving parts to it: a custom Chrome extension that monitors authentication events and captures cookies; a PowerShell script that automates the extension deployment and ensures persistence; an exfiltration mechanism to send the cookies to a remote collection point; and a complementary extension to inject the captured cookies into the attacker's browser. "Threat actors often use infostealers to extract authentication tokens directly from a victim's machine or buy them directly through darknet markets, allowing adversaries to hijack active cloud sessions without triggering MFA," Varonis said. "By injecting these cookies while mimicking the victim's OS, browser, and network, attackers can evade Conditional Access Policies and maintain persistent access." Authentication cookies can also be stolen using adversary-in-the-middle (AitM) phishing kits in real time, or using rogue browser extensions that request excessive permissions to interact with web sessions, modify page content, and extract stored authentication data. Once installed, the extension can access the browser's storage API, intercept network requests, or inject malicious JavaScript into active sessions to harvest real-time session cookies. "By leveraging stolen session cookies, an adversary can bypass authentication mechanisms, gaining seamless entry into cloud environments without requiring user credentials," Varonis said. 
"Beyond initial access, session hijacking can facilitate lateral movement across the tenant, allowing attackers to explore additional resources, access sensitive data, and escalate privileges by abusing existing permissions or misconfigured roles." 🎥 Cybersecurity Webinars Non-Human Identities: The AI Backdoor You're Not Watching → AI agents rely on Non-Human Identities (NHIs) to function—but these are often left untracked and unsecured. As attackers shift focus to this hidden layer, the risk is growing fast. In this session, you'll learn how to find, secure, and monitor these identities before they're exploited. Join the webinar to understand the real risks behind AI adoption—and how to stay ahead. Inside the LOTS Playbook: How Hackers Stay Undetected → Attackers are using trusted sites to stay hidden. In this webinar, Zscaler experts share how they detect these stealthy LOTS (living-off-trusted-sites) attacks using insights from the world's largest security cloud. Join to learn how to spot hidden threats and improve your defense. 🔧 Cybersecurity Tools ScriptSentry → A free tool that scans your environment for dangerous logon script misconfigurations—like plaintext credentials, insecure file/share permissions, and references to non-existent servers. These overlooked issues can enable lateral movement, privilege escalation, or even credential theft. ScriptSentry helps you quickly identify and fix them across large Active Directory environments. Aftermath → A Swift-based, open-source tool for macOS incident response. It collects forensic data—like logs, browser activity, and process info—from compromised systems, then analyzes it to build timelines and track infection paths. Deploy via MDM or run manually. Fast, lightweight, and ideal for post-incident investigation. AI Red Teaming Playground Labs → An open-source training suite with hands-on challenges designed to teach security professionals how to red team AI systems. 
Originally developed for Black Hat USA 2024, the labs cover prompt injections, safety bypasses, indirect attacks, and Responsible AI failures. Built on Chat Copilot and deployable via Docker, it's a practical resource for testing and understanding real-world AI vulnerabilities. 🔒 Tip of the Week Review and Revoke Old OAuth App Permissions — They're a Silent Backdoor → You've likely logged into apps using "Continue with Google," "Sign in with Microsoft," or GitHub/Twitter/Facebook logins. That's OAuth. But did you know many of those apps still have access to your data long after you stop using them? Why it matters: Even if you delete the app or forget it existed, it might still have ongoing access to your calendar, email, cloud files, or contact list — no password needed. If that third party gets breached, your data is at risk. What to do: Go through your connected apps here: Google: myaccount.google.com/permissions Microsoft: account.live.com/consent/Manage GitHub: github.com/settings/applications Facebook: facebook.com/settings?tab=applications Revoke anything you don't actively use. It's a fast, silent cleanup — and it closes doors you didn't know were open. Conclusion Looking ahead, it's not just about tracking threats—it's about understanding what they reveal. Every tactic used, every system tested, points to deeper issues in how trust, access, and visibility are managed. As attackers adapt quickly, defenders need sharper awareness and faster response loops. The takeaways from this week aren't just technical—they speak to how teams prioritize risk, design safeguards, and make choices under pressure. Use these insights not just to react, but to rethink what "secure" really needs to mean in today's environment. Found this article interesting? Follow us on Twitter and LinkedIn to read more exclusive content we post.
    ⚡ Weekly Recap: APT Campaigns, Browser Hijacks, AI Malware, Cloud Breaches and Critical CVEs
    thehackernews.com
    Cyber threats don't show up one at a time anymore. They're layered, planned, and often stay hidden until it's too late. For cybersecurity teams, the key isn't just reacting to alerts—it's spotting early signs of trouble before they become real threats. This update is designed to deliver clear, accurate insights based on real patterns and changes we can verify. With today's complex systems, we need focused analysis—not noise. What you'll see here isn't just a list of incidents, but a clear look at where control is being gained, lost, or quietly tested. ⚡ Threat of the Week Lumma Stealer, DanaBot Operations Disrupted — A coalition of private sector companies and law enforcement agencies have taken down the infrastructure associated with Lumma Stealer and DanaBot. Charges have also been unsealed against 16 individuals for their alleged involvement in the development and deployment of DanaBot. The malware is equipped to siphon data from victim computers, hijack banking sessions, and steal device information. More uniquely, though, DanaBot has also been used for hacking campaigns that appear to be linked to Russian state-sponsored interests. All of that makes DanaBot a particularly clear example of how commodity malware has been repurposed by Russian state hackers for their own goals. In tandem, about 2,300 domains that acted as the command-and-control (C2) backbone for the Lumma information stealer have been seized, alongside taking down 300 servers and neutralizing 650 domains that were used to launch ransomware attacks. The actions against international cybercrime in the past few days constituted the latest phase of Operation Endgame. 
🔔 Top News Threat Actors Use TikTok Videos to Distribute Stealers — While ClickFix has become a popular social engineering tactic to deliver malware, threat actors have been observed using artificial intelligence (AI)-generated videos uploaded to TikTok to deceive users into running malicious commands on their systems and deploy malware like Vidar and StealC under the guise of activating pirated versions of Windows, Microsoft Office, CapCut, and Spotify. "This campaign highlights how attackers are ready to weaponize whichever social media platforms are currently popular to distribute malware," Trend Micro said. APT28 Hackers Target Western Logistics and Tech Firms — Several cybersecurity and intelligence agencies from Australia, Europe, and the United States issued a joint alert warning of a campaign orchestrated by the Russian state-sponsored threat actor APT28 targeting Western logistics entities and technology companies since 2022. "This cyber espionage-oriented campaign targeting logistics entities and technology companies uses a mix of previously disclosed TTPs and is likely connected to these actors' wide scale targeting of IP cameras in Ukraine and bordering NATO nations," the agencies said. The attacks are designed to steal sensitive information and maintain long-term persistence on compromised hosts. Chinese Threat Actors Exploit Ivanti EPMM Flaws — The China-nexus cyber espionage group tracked as UNC5221 has been attributed to the exploitation of a pair of security flaws affecting Ivanti Endpoint Manager Mobile (EPMM) software (CVE-2025-4427 and CVE-2025-4428) to target a wide range of sectors across Europe, North America, and the Asia-Pacific region. The intrusions leverage the vulnerabilities to obtain a reverse shell and drop malicious payloads like KrustyLoader, which is known to deliver the Sliver command-and-control (C2) framework. 
"UNC5221 demonstrates a deep understanding of EPMM's internal architecture, repurposing legitimate system components for covert data exfiltration," EclecticIQ said. "Given EPMM's role in managing and pushing configurations to enterprise mobile devices, a successful exploitation could allow threat actors to remotely access, manipulate, or compromise thousands of managed devices across an organization." Over 100 Google Chrome Extensions Mimic Popular Tools — An unknown threat actor has been linked to the creation of several malicious Chrome browser extensions since February 2024 that masquerade as seemingly benign utilities such as DeepSeek, Manus, DeBank, FortiVPN, and Site Stats but incorporate covert functionality to exfiltrate data, receive commands, and execute arbitrary code. Links to these browser add-ons are hosted on specially crafted sites to which users are likely redirected via phishing and social media posts. While the extensions appear to offer the advertised features, they also stealthily facilitate credential and cookie theft, session hijacking, ad injection, malicious redirects, traffic manipulation, and phishing via DOM manipulation. Several of these extensions have been taken down by Google. CISA Warns SaaS Providers of Attacks Targeting Cloud Environments — The U.S. Cybersecurity and Infrastructure Security Agency (CISA) warned that SaaS companies are under threat from bad actors who are on the prowl for cloud applications with default configurations and elevated permissions. While the agency did not attribute the activity to a specific group, the advisory said enterprise backup platform Commvault is monitoring cyber threat activity targeting applications hosted in their Microsoft Azure cloud environment. "Threat actors may have accessed client secrets for Commvault's (Metallic) Microsoft 365 (M365) backup software-as-a-service (SaaS) solution, hosted in Azure," CISA said. 
"This provided the threat actors with unauthorized access to Commvault's customers' M365 environments that have application secrets stored by Commvault." GitLab AI Coding Assistant Flaws Could Be Used to Inject Malicious Code — Cybersecurity researchers have discovered an indirect prompt injection flaw in GitLab's artificial intelligence (AI) assistant Duo that could have allowed attackers to steal source code and inject untrusted HTML into its responses, which could then be used to direct victims to malicious websites. The attack could also leak confidential issue data, such as zero-day vulnerability details. All that's required is for the attacker to instruct the chatbot to interact with a merge request (or commit, issue, or source code) by taking advantage of the fact that GitLab Duo has extensive access to the platform. "By embedding hidden instructions in seemingly harmless project content, we were able to manipulate Duo's behavior, exfiltrate private source code, and demonstrate how AI responses can be leveraged for unintended and harmful outcomes," Legit Security said. One variation of the attack involved hiding a malicious instruction in an otherwise legitimate piece of source code, while another exploited Duo's asynchronous, real-time parsing of markdown responses. An attacker could leverage this behavior – that Duo begins rendering the output line by line rather than waiting until the entire response is generated and sending it all at once – to introduce malicious HTML code that can access sensitive data and exfiltrate the information to a remote server. The issues have been patched by GitLab following responsible disclosure. 🔥 Trending CVEs Software vulnerabilities remain one of the simplest—and most effective—entry points for attackers. Each week uncovers new flaws, and even small delays in patching can escalate into serious security incidents. Staying ahead means acting fast. 
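The GitLab Duo finding above hinged on instructions hidden in otherwise normal project content, text a human reviewer never notices but the model still reads. A minimal sketch of the kind of pre-screen a team might run over merge-request or issue text (the regexes and phrase list are illustrative assumptions, not Legit Security's methodology or GitLab's fix):

```python
import re

# Illustrative red flags: invisible Unicode often used to smuggle text past humans,
# and HTML comments -- both render as "nothing" to a reviewer but reach the model.
ZERO_WIDTH = re.compile(r"[\u200b\u200c\u200d\u2060\ufeff]")
HTML_COMMENT = re.compile(r"<!--(.*?)-->", re.DOTALL)

def prompt_injection_flags(text: str) -> list[str]:
    """Return human-readable warnings for content that may hide LLM instructions."""
    flags = []
    if ZERO_WIDTH.search(text):
        flags.append("zero-width characters present (possible hidden text)")
    for m in HTML_COMMENT.finditer(text):
        body = m.group(1).strip().lower()
        # Imperative phrasing aimed at an assistant is suspicious inside a comment.
        if any(phrase in body for phrase in ("ignore", "you are", "reply with", "include this url")):
            flags.append(f"suspicious HTML comment: {body[:60]!r}")
    return flags
```

Heuristics like these only raise the cost of the simplest hiding spots; the more general lesson from the Duo case is to treat any repository content the assistant can read as untrusted input and to sanitize whatever it renders.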
Below is this week's list of high-risk vulnerabilities that demand attention. Review them carefully, apply updates without delay, and close the doors before they're forced open. This week's list includes — CVE-2025-34025, CVE-2025-34026, CVE-2025-34027 (Versa Concerto), CVE-2025-30911 (RomethemeKit For Elementor WordPress plugin), CVE-2024-57273, CVE-2024-54780, and CVE-2024-54779 (pfSense), CVE-2025-41229 (VMware Cloud Foundation), CVE-2025-4322 (Motors WordPress theme), CVE-2025-47934 (OpenPGP.js), CVE-2025-30193 (PowerDNS), CVE-2025-0993 (GitLab), CVE-2025-36535 (AutomationDirect MB-Gateway), CVE-2025-47949 (Samlify), CVE-2025-40775 (BIND DNS), CVE-2025-20152 (Cisco Identity Services Engine), CVE-2025-4123 (Grafana), CVE-2025-5063 (Google Chrome), CVE-2025-37899 (Linux Kernel), CVE-2025-26817 (Netwrix Password Secure), CVE-2025-47947 (ModSecurity), CVE-2025-3078, CVE-2025-3079 (Canon Printers), and CVE-2025-4978 (NETGEAR). 📰 Around the Cyber World Sandworm Drops New Wiper in Ukraine — The Russia-aligned Sandworm group intensified destructive operations against Ukrainian energy companies, deploying a new wiper named ZEROLOT. "The infamous Sandworm group concentrated heavily on compromising Ukrainian energy infrastructure. In recent cases, it deployed the ZEROLOT wiper in Ukraine. For this, the attackers abused Active Directory Group Policy in the affected organizations," ESET Director of Threat Research, Jean-Ian Boutin, said. Another Russian hacking group, Gamaredon, remained the most prolific actor targeting the East European nation, enhancing malware obfuscation and introducing PteroBox, a file stealer leveraging Dropbox. Signal Says No to Recall — Signal has released a new version of its messaging app for Windows that, by default, blocks the ability of Windows to use Recall to periodically take screenshots of the app. 
"Although Microsoft made several adjustments over the past twelve months in response to critical feedback, the revamped version of Recall still places any content that's displayed within privacy-preserving apps like Signal at risk," Signal said. "As a result, we are enabling an extra layer of protection by default on Windows 11 in order to help maintain the security of Signal Desktop on that platform even though it introduces some usability trade-offs. Microsoft has simply given us no other option." Microsoft began officially rolling out Recall last month. Russia Introduces New Law to Track Foreigners Using Their Smartphones — The Russian government has introduced a new law that makes installing a tracking app mandatory for all foreign nationals in the Moscow region. This includes gathering their real-time locations, fingerprint, face photograph, and residential information. "The adopted mechanism will allow, using modern technologies, to strengthen control in the field of migration and will also contribute to reducing the number of violations and crimes in this area," Vyacheslav Volodin, chairman of the State Duma, said. "If migrants change their actual place of residence, they will be required to inform the Ministry of Internal Affairs (MVD) within three working days." A proposed four-year trial period begins on September 1, 2025, and runs until September 1, 2029. Dutch Government Passes Law to Criminalize Cyber Espionage — The Dutch government has approved a law criminalizing a wide range of espionage activities, including digital espionage, in an effort to protect national security, critical infrastructure, and high-quality technologies. Under the amended law, leaking sensitive information that is not classified as a state secret or engaging in activities on behalf of a foreign government that harm Dutch interests can also result in criminal charges. 
"Foreign governments are also interested in non-state-secret, sensitive information about a particular economic sector or about political decision-making," the government said. "Such information can be used to influence political processes, weaken the Dutch economy or play allies against each other. Espionage can also involve actions other than sharing information." Microsoft Announces Availability of Quantum-Resistant Algorithms in SymCrypt — Microsoft has revealed that it's making post-quantum cryptography (PQC) capabilities, including ML-KEM and ML-DSA, available for Windows Insiders (Canary Channel Build 27852 and higher) and Linux (SymCrypt-OpenSSL version 1.9.0). "This advancement will enable customers to commence their exploration and experimentation of PQC within their operational environments," Microsoft said. "By obtaining early access to PQC capabilities, organizations can proactively assess the compatibility, performance, and integration of these novel algorithms alongside their existing security infrastructure." New Malware DOUBLELOADER Uses ALCATRAZ for Obfuscation — The open-source obfuscator ALCATRAZ has been seen within a new generic loader dubbed DOUBLELOADER, which has been deployed alongside Rhadamanthys Stealer infections starting in December 2024. The malware collects host information, requests an updated version of itself, and starts beaconing to a hardcoded IP address (185.147.125[.]81) stored within the binary. "Obfuscators such as ALCATRAZ end up increasing the complexity when triaging malware," Elastic Security Labs said. "Its main goal is to hinder binary analysis tools and increase the time of the reverse engineering process through different techniques; such as hiding the control flow or making decompilation hard to follow." New Formjacking Campaign Targets WooCommerce Sites — Cybersecurity researchers have detected a sophisticated formjacking campaign targeting WooCommerce sites. 
The malware, per Wordfence, injects a fake but professional-looking payment form into legitimate checkout processes and exfiltrates sensitive customer data to an external server. Further analysis has revealed that the infection likely originated from a compromised WordPress admin account, which was used to inject malicious JavaScript via a Simple Custom CSS and JS plugin (or something similar) that allows administrators to add custom code. "Unlike traditional card skimmers that simply overlay existing forms, this variant carefully integrates with the WooCommerce site's design and payment workflow, making it particularly difficult for site owners and users to detect," the WordPress security company said. "The malware author repurposed the browser's localStorage mechanism – typically used by websites to remember user preferences – to silently store stolen data and maintain access even after page reloads or when navigating away from the checkout page." E.U. Sanctions Stark Industries — The European Union (E.U.) has announced sanctions against 21 individuals and six entities in Russia over its "destabilising actions" in the region. One of the sanctioned entities is Stark Industries, a bulletproof hosting provider that has been accused of acting as "enablers of various Russian state-sponsored and affiliated actors to conduct destabilising activities including, information manipulation interference and cyber attacks against the Union and third countries." The sanctions also target its CEO Iurie Neculiti and owner Ivan Neculiti. Stark Industries was previously spotlighted by independent cybersecurity journalist Brian Krebs, detailing its use in DDoS attacks in Ukraine and across Europe. In August 2024, Team Cymru said it discovered 25 Stark-assigned IP addresses used to host domains associated with FIN7 activities and that it had been working with Stark Industries for several months to identify and reduce abuse of their systems. 
The sanctions have also targeted Kremlin-backed manufacturers of drones and radio communication equipment used by the Russian military, as well as those involved in jamming GPS signals in the Baltic states and disrupting civil aviation.

The Mask APT Unmasked as Tied to the Spanish Government — The mysterious threat actor known as The Mask (aka Careto) has been identified as being run by the Spanish government, according to a report published by TechCrunch, citing people who worked at Kaspersky at the time and had knowledge of the investigation. The Russian cybersecurity company first exposed the hacking group in 2014, linking it to highly sophisticated attacks since at least 2007 against high-profile organizations such as governments, diplomatic entities, and research institutions. A majority of the group's attacks have targeted Cuba, followed by hundreds of victims in Brazil, Morocco, Spain, and Gibraltar. While Kaspersky has not publicly attributed the group to a specific country, the latest revelation makes The Mask one of the few Western government hacking groups ever discussed in public, alongside the Equation Group and the Lamberts (both the U.S.) and Animal Farm (France).

Social Engineering Scams Target Coinbase Users — Earlier this month, cryptocurrency exchange Coinbase revealed that it was the victim of an attack in which unknown threat actors bribed customer support agents in India to breach its systems and siphon funds from nearly 70,000 customers. According to blockchain security firm SlowMist, Coinbase users have been the target of social engineering scams since the start of the year, bombarded with SMS messages about fake withdrawal requests seeking their confirmation, as part of a "sustained and organized scam campaign."
The goal is to induce a false sense of urgency and trick victims into calling a number, eventually convincing them to transfer their funds to a "secure" wallet with a seed phrase pre-generated by the attackers, who ultimately drain the assets. The activity is assessed to be carried out primarily by two groups: low-level skid attackers from the Com community and organized cybercrime groups based in India. "Using spoofed PBX phone systems, scammers impersonate Coinbase support and claim there's been 'unauthorized access' or 'suspicious withdrawals' on the user's account," SlowMist said. "They create a sense of urgency, then follow up with phishing emails or texts containing fake ticket numbers or 'recovery links.'"

Delta Can Sue CrowdStrike Over July 2024 Mega Outage — Delta Air Lines, which had its systems crippled and almost 7,000 flights canceled in the wake of the massive outage caused by a faulty update issued by CrowdStrike in mid-July 2024, has been given the green light to pursue its lawsuit against the cybersecurity company. A judge in the U.S. state of Georgia ruled that Delta can try to prove CrowdStrike was grossly negligent in pushing a defective update to its Falcon software, which crashed 8.5 million Windows devices across the world. CrowdStrike previously claimed that the airline had rejected technical support offers from both itself and Microsoft. In a statement shared with Reuters, lawyers representing CrowdStrike said they were "confident the judge will find Delta's case has no merit, or will limit damages to the 'single-digit millions of dollars' under Georgia law." The development comes months after MGM Resorts International agreed to pay $45 million to settle multiple class-action lawsuits related to a data breach in 2019 and a ransomware attack the company experienced in 2023.
Storm-1516 Uses AI-Generated Media to Spread Disinformation — The Russian influence operation known as Storm-1516 (aka CopyCop) sought to undermine European support for Ukraine by amplifying fabricated stories on X about European leaders using drugs while traveling by train to Kyiv for peace talks. One of the posts was subsequently shared by Russian state media and by Maria Zakharova, a senior official in Russia's foreign ministry, as part of what EclecticIQ has described as a coordinated disinformation campaign. The activity is also notable for its use of synthetic content depicting French President Emmanuel Macron, U.K. Labour Party leader Keir Starmer, and German Chancellor Friedrich Merz as possessing drugs during their return from Ukraine. "By attacking the reputation of these leaders, the campaign likely aimed to turn their own voters against them, using influence operations (IO) to reduce public support for Ukraine by discrediting the politicians who back it," the Dutch threat intelligence firm said.

Turkish Users Targeted by DBatLoader — AhnLab has disclosed details of a malware campaign distributing a loader called DBatLoader (aka ModiLoader) via banking-themed phishing emails, which then acts as a conduit to deliver SnakeKeylogger, an information stealer developed in .NET. "The DBatLoader malware distributed through phishing emails has the cunning behavior of exploiting normal processes (easinvoker.exe, loader.exe) through techniques such as DLL side-loading and injection for most of its behaviors, and it also utilizes normal processes (cmd.exe, powershell.exe, esentutl.exe, extrac32.exe) for behaviors such as file copying and changing policies," the company said.

SIM-Swapper Sentenced to 14 Months for SEC X Account Hack — A 26-year-old Alabama man, Eric Council Jr., has been sentenced to 14 months in prison and three years of supervised release for using SIM swapping attacks to breach the U.S.
Securities and Exchange Commission's (SEC) official X account in January 2024 and falsely announce that the SEC had approved Bitcoin (BTC) exchange-traded funds (ETFs). Council Jr. (aka Ronin, Agiantschnauzer, and @EasyMunny) was arrested in October 2024 and pleaded guilty to the crime earlier this February. He has also been ordered to forfeit $50,000. According to court documents, Council used his personal computer to search incriminating phrases such as "SECGOV hack," "telegram sim swap," "how can I know for sure if I am being investigated by the FBI," "What are the signs that you are under investigation by law enforcement or the FBI even if you have not been contacted by them," "what are some signs that the FBI is after you," "Verizon store list," "federal identity theft statute," and "how long does it take to delete telegram account."

FBI Warns of Malicious Campaign Impersonating Government Officials — The U.S. Federal Bureau of Investigation (FBI) is warning of a campaign, ongoing since April 2025, in which malicious actors impersonate senior U.S. federal or state government officials and their contacts to target individuals. "The malicious actors have sent text messages and AI-generated voice messages — techniques known as smishing and vishing, respectively — that claim to come from a senior US official in an effort to establish rapport before gaining access to personal accounts," the FBI said. "One way the actors gain such access is by sending targeted individuals a malicious link under the guise of transitioning to a separate messaging platform." From there, the actor may deliver malware or introduce hyperlinks that lead intended targets to an actor-controlled site that steals login information.
DICOM Flaw Enables Attackers to Embed Malicious Code Within Medical Image Files — Praetorian has released a proof-of-concept (PoC) for a high-severity security flaw in Digital Imaging and Communications in Medicine (DICOM), the predominant file format for medical images, that enables attackers to embed malicious code within legitimate medical image files. The flaw, CVE-2019-11687 (CVSS score: 7.8), originally disclosed in 2019 by Markel Picado Ortiz, stems from a design decision that allows arbitrary content at the start of the file, in a section called the preamble, which enables the creation of malicious polyglots. Codenamed ELFDICOM, the PoC extends the attack surface to Linux environments, making the flaw a much more potent threat. As a mitigation, Praetorian advises implementing a DICOM preamble whitelist. "DICOM's file structure inherently allows arbitrary bytes at the beginning of the file, where Linux and most operating systems will look for magic bytes," Praetorian researcher Ryan Hennessee said. "[The whitelist] would check a DICOM file's preamble before it is imported into the system. This would allow known good patterns, such as 'TIFF' magic bytes, or '\x00' null bytes, while files with the ELF magic bytes would be blocked."

Cookie-Bite Attack Uses Chrome Extension to Steal Session Tokens — Cybersecurity researchers have demonstrated a new attack technique called Cookie-Bite that employs custom-made malicious browser extensions to steal "ESTSAUTH" and "ESTSAUTHPERSISTENT" cookies in Microsoft Entra ID and bypass multi-factor authentication (MFA). The attack has multiple moving parts: a custom Chrome extension that monitors authentication events and captures cookies; a PowerShell script that automates the extension's deployment and ensures persistence; an exfiltration mechanism that sends the cookies to a remote collection point; and a complementary extension that injects the captured cookies into the attacker's browser.
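The preamble check Praetorian describes for the ELFDICOM flaw above can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions: the `screen_dicom` helper and the specific allow/block prefixes are hypothetical choices for this sketch, not Praetorian's published implementation.

```python
# DICOM layout: a 128-byte preamble of arbitrary content, then the
# four-byte "DICM" magic. CVE-2019-11687 / ELFDICOM abuses the preamble
# to hold an executable header, creating a valid-image/executable polyglot.
PREAMBLE_LEN = 128

# Illustrative allow-list: preambles known to be benign.
ALLOWED_PREFIXES = (
    b"\x00\x00\x00\x00",  # all-null preamble (the common case)
    b"II*\x00",           # little-endian TIFF magic (dual TIFF/DICOM files)
    b"MM\x00*",           # big-endian TIFF magic
)

# Illustrative block-list: executable magic bytes that have no business
# appearing at the start of a medical image.
BLOCKED_PREFIXES = (
    b"\x7fELF",           # Linux ELF executables (the ELFDICOM PoC)
    b"MZ",                # Windows PE executables
)

def screen_dicom(data: bytes) -> str:
    """Classify a candidate DICOM file by its preamble before import."""
    if len(data) < PREAMBLE_LEN + 4 or data[PREAMBLE_LEN:PREAMBLE_LEN + 4] != b"DICM":
        return "not-dicom"
    preamble = data[:PREAMBLE_LEN]
    if any(preamble.startswith(p) for p in BLOCKED_PREFIXES):
        return "blocked"
    if any(preamble.startswith(p) for p in ALLOWED_PREFIXES):
        return "allowed"
    return "quarantine"  # unknown preamble: hold for manual review
```

A file with an all-null preamble passes, while one whose first four bytes are the ELF magic is rejected before it ever reaches a DICOM parser; anything unrecognized is held rather than silently admitted.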
"Threat actors often use infostealers to extract authentication tokens directly from a victim's machine or buy them directly through darknet markets, allowing adversaries to hijack active cloud sessions without triggering MFA," Varonis said. "By injecting these cookies while mimicking the victim's OS, browser, and network, attackers can evade Conditional Access Policies (CAPs) and maintain persistent access." Authentication cookies can also be stolen in real time using adversary-in-the-middle (AitM) phishing kits, or via rogue browser extensions that request excessive permissions to interact with web sessions, modify page content, and extract stored authentication data. Once installed, such an extension can access the browser's storage API, intercept network requests, or inject malicious JavaScript into active sessions to harvest session cookies in real time. "By leveraging stolen session cookies, an adversary can bypass authentication mechanisms, gaining seamless entry into cloud environments without requiring user credentials," Varonis said. "Beyond initial access, session hijacking can facilitate lateral movement across the tenant, allowing attackers to explore additional resources, access sensitive data, and escalate privileges by abusing existing permissions or misconfigured roles."

🎥 Cybersecurity Webinars

Non-Human Identities: The AI Backdoor You're Not Watching → AI agents rely on Non-Human Identities (like service accounts and API keys) to function—but these are often left untracked and unsecured. As attackers shift focus to this hidden layer, the risk is growing fast. In this session, you'll learn how to find, secure, and monitor these identities before they're exploited. Join the webinar to understand the real risks behind AI adoption—and how to stay ahead.

Inside the LOTS Playbook: How Hackers Stay Undetected → Attackers are using trusted sites to stay hidden.
In this webinar, Zscaler experts share how they detect these stealthy LOTS attacks using insights from the world's largest security cloud. Join to learn how to spot hidden threats and improve your defense.

🔧 Cybersecurity Tools

ScriptSentry → A free tool that scans your environment for dangerous logon script misconfigurations—like plaintext credentials, insecure file/share permissions, and references to non-existent servers. These overlooked issues can enable lateral movement, privilege escalation, or even credential theft. ScriptSentry helps you quickly identify and fix them across large Active Directory environments.

Aftermath → A Swift-based, open-source tool for macOS incident response. It collects forensic data—like logs, browser activity, and process info—from compromised systems, then analyzes it to build timelines and track infection paths. Deploy via MDM or run manually. Fast, lightweight, and ideal for post-incident investigation.

AI Red Teaming Playground Labs → An open-source training suite with hands-on challenges designed to teach security professionals how to red team AI systems. Originally developed for Black Hat USA 2024, the labs cover prompt injections, safety bypasses, indirect attacks, and Responsible AI failures. Built on Chat Copilot and deployable via Docker, it's a practical resource for testing and understanding real-world AI vulnerabilities.

🔒 Tip of the Week

Review and Revoke Old OAuth App Permissions — They're a Silent Backdoor → You've likely logged into apps using "Continue with Google," "Sign in with Microsoft," or GitHub/Twitter/Facebook logins. That's OAuth. But did you know many of those apps still have access to your data long after you stop using them? Why it matters: Even if you delete the app or forget it existed, it might still have ongoing access to your calendar, email, cloud files, or contact list — no password needed. If that third party gets breached, your data is at risk.
What to do: Go through your connected apps here:

Google: myaccount.google.com/permissions
Microsoft: account.live.com/consent/Manage
GitHub: github.com/settings/applications
Facebook: facebook.com/settings?tab=applications

Revoke anything you don't actively use. It's a fast, silent cleanup — and it closes doors you didn't know were open.

Conclusion

Looking ahead, it's not just about tracking threats—it's about understanding what they reveal. Every tactic used, every system tested, points to deeper issues in how trust, access, and visibility are managed. As attackers adapt quickly, defenders need sharper awareness and faster response loops. The takeaways from this week aren't just technical—they speak to how teams prioritize risk, design safeguards, and make choices under pressure. Use these insights not just to react, but to rethink what "secure" really needs to mean in today's environment.
  • Why a new anti-revenge porn law has free speech experts alarmed 

    Privacy and digital rights advocates are raising alarms over a law that many would expect them to cheer: a federal crackdown on revenge porn and AI-generated deepfakes. 
    The newly signed Take It Down Act makes it illegal to publish nonconsensual explicit images — real or AI-generated — and gives platforms just 48 hours to comply with a victim’s takedown request or face liability. While widely praised as a long-overdue win for victims, experts have also warned its vague language, lax standards for verifying claims, and tight compliance window could pave the way for overreach, censorship of legitimate content, and even surveillance. 
    “Content moderation at scale is widely problematic and always ends up with important and necessary speech being censored,” India McKinney, director of federal affairs at Electronic Frontier Foundation, a digital rights organization, told TechCrunch.
    Online platforms have one year to establish a process for removing nonconsensual intimate imagery (NCII). While the law requires takedown requests come from victims or their representatives, it only asks for a physical or electronic signature — no photo ID or other form of verification is needed. That likely aims to reduce barriers for victims, but it could create an opportunity for abuse.
    “I really want to be wrong about this, but I think there are going to be more requests to take down images depicting queer and trans people in relationships, and even more than that, I think it’s gonna be consensual porn,” McKinney said. 
    Senator Marsha Blackburn (R-TN), a co-sponsor of the Take It Down Act, also sponsored the Kids Online Safety Act, which puts the onus on platforms to protect children from harmful content online. Blackburn has said she believes content related to transgender people is harmful to kids. Similarly, the Heritage Foundation — the conservative think tank behind Project 2025 — has also said that “keeping trans content away from children is protecting kids.” 
    Because of the liability that platforms face if they don’t take down an image within 48 hours of receiving a request, “the default is going to be that they just take it down without doing any investigation to see if this actually is NCII or if it’s another type of protected speech, or if it’s even relevant to the person who’s making the request,” said McKinney.


    Snapchat and Meta have both said they are supportive of the law, but neither responded to TechCrunch’s requests for more information about how they’ll verify whether the person requesting a takedown is a victim. 
    Mastodon, a decentralized platform that hosts its own flagship server that others can join, told TechCrunch it would lean towards removal if it was too difficult to verify the victim. 
    Mastodon and other decentralized platforms like Bluesky or Pixelfed may be especially vulnerable to the chilling effect of the 48-hour takedown rule. These networks rely on independently operated servers, often run by nonprofits or individuals. Under the law, the FTC can treat any platform that doesn’t “reasonably comply” with takedown demands as committing an “unfair or deceptive act or practice” – even if the host isn’t a commercial entity.
    “This is troubling on its face, but it is particularly so at a moment when the chair of the FTC has taken unprecedented steps to politicize the agency and has explicitly promised to use the power of the agency to punish platforms and services on an ideological, as opposed to principled, basis,” the Cyber Civil Rights Initiative, a nonprofit dedicated to ending revenge porn, said in a statement. 
    Proactive monitoring
    McKinney predicts that platforms will start moderating content before it’s disseminated so they have fewer problematic posts to take down in the future. 
    Platforms are already using AI to monitor for harmful content.
    Kevin Guo, CEO and co-founder of AI-generated content detection startup Hive, said his company works with online platforms to detect deepfakes and child sexual abuse material. Some of Hive’s customers include Reddit, Giphy, Vevo, Bluesky, and BeReal. 
    “We were actually one of the tech companies that endorsed that bill,” Guo told TechCrunch. “It’ll help solve some pretty important problems and compel these platforms to adopt solutions more proactively.” 
    Hive’s model is a software-as-a-service, so the startup doesn’t control how platforms use its product to flag or remove content. But Guo said many clients insert Hive’s API at the point of upload to monitor before anything is sent out to the community. 
    A Reddit spokesperson told TechCrunch the platform uses “sophisticated internal tools, processes, and teams to address and remove” NCII. Reddit also partners with nonprofit SWGfl to deploy its StopNCII tool, which scans live traffic for matches against a database of known NCII and removes accurate matches. The company did not share how it would ensure the person requesting the takedown is the victim. 
    McKinney warns this kind of monitoring could extend into encrypted messages in the future. While the law focuses on public or semi-public dissemination, it also requires platforms to “remove and make reasonable efforts to prevent the reupload” of nonconsensual intimate images. She argues this could incentivize proactive scanning of all content, even in encrypted spaces. The law doesn’t include any carve outs for end-to-end encrypted messaging services like WhatsApp, Signal, or iMessage. 
    Meta, Signal, and Apple have not responded to TechCrunch’s request for more information on their plans for encrypted messaging.
    Broader free speech implications
    On March 4, Trump delivered a joint address to Congress in which he praised the Take It Down Act and said he looked forward to signing it into law. 
    “And I’m going to use that bill for myself, too, if you don’t mind,” he added. “There’s nobody who gets treated worse than I do online.” 
    While the audience laughed at the comment, not everyone took it as a joke. Trump hasn’t been shy about suppressing or retaliating against unfavorable speech, whether that’s labeling mainstream media outlets “enemies of the people,” barring The Associated Press from the Oval Office despite a court order, or pulling funding from NPR and PBS.
    On Thursday, the Trump administration barred Harvard University from accepting foreign student admissions, escalating a conflict that began after Harvard refused to adhere to Trump’s demands that it make changes to its curriculum and eliminate DEI-related content, among other things. In retaliation, Trump has frozen federal funding to Harvard and threatened to revoke the university’s tax-exempt status. 
     “At a time when we’re already seeing school boards try to ban books and we’re seeing certain politicians be very explicitly about the types of content they don’t want people to ever see, whether it’s critical race theory or abortion information or information about climate change…it is deeply uncomfortable for us with our past work on content moderation to see members of both parties openly advocating for content moderation at this scale,” McKinney said.
    #why #new #antirevenge #porn #law
    Why a new anti-revenge porn law has free speech experts alarmed 
    Privacy and digital rights advocates are raising alarms over a law that many would expect them to cheer: a federal crackdown on revenge porn and AI-generated deepfakes.  The newly signed Take It Down Act makes it illegal to publish nonconsensual explicit images — real or AI-generated — and gives platforms just 48 hours to comply with a victim’s takedown request or face liability. While widely praised as a long-overdue win for victims, experts have also warned its vague language, lax standards for verifying claims, and tight compliance window could pave the way for overreach, censorship of legitimate content, and even surveillance.  “Content moderation at scale is widely problematic and always ends up with important and necessary speech being censored,” India McKinney, director of federal affairs at Electronic Frontier Foundation, a digital rights organization, told TechCrunch. Online platforms have one year to establish a process for removing nonconsensual intimate imagery. While the law requires takedown requests come from victims or their representatives, it only asks for a physical or electronic signature — no photo ID or other form of verification is needed. That likely aims to reduce barriers for victims, but it could create an opportunity for abuse. “I really want to be wrong about this, but I think there are going to be more requests to take down images depicting queer and trans people in relationships, and even more than that, I think it’s gonna be consensual porn,” McKinney said.  Senator Marsha Blackburn, a co-sponsor of the Take It Down Act, also sponsored the Kids Online Safety Act which puts the onus on platforms to protect children from harmful content online. Blackburn has said she believes content related to transgender people is harmful to kids. 
Similarly, the Heritage Foundation — the conservative think tank behind Project 2025 — has also said that “keeping trans content away from children is protecting kids.”  Because of the liability that platforms face if they don’t take down an image within 48 hours of receiving a request, “the default is going to be that they just take it down without doing any investigation to see if this actually is NCII or if it’s another type of protected speech, or if it’s even relevant to the person who’s making the request,” said McKinney. Techcrunch event Join us at TechCrunch Sessions: AI Secure your spot for our leading AI industry event with speakers from OpenAI, Anthropic, and Cohere. For a limited time, tickets are just for an entire day of expert talks, workshops, and potent networking. Exhibit at TechCrunch Sessions: AI Secure your spot at TC Sessions: AI and show 1,200+ decision-makers what you’ve built — without the big spend. Available through May 9 or while tables last. Berkeley, CA | June 5 REGISTER NOW Snapchat and Meta have both said they are supportive of the law, but neither responded to TechCrunch’s requests for more information about how they’ll verify whether the person requesting a takedown is a victim.  Mastodon, a decentralized platform that hosts its own flagship server that others can join, told TechCrunch it would lean towards removal if it was too difficult to verify the victim.  Mastodon and other decentralized platforms like Bluesky or Pixelfed may be especially vulnerable to the chilling effect of the 48-hour takedown rule. These networks rely on independently operated servers, often run by nonprofits or individuals. Under the law, the FTC can treat any platform that doesn’t “reasonably comply” with takedown demands as committing an “unfair or deceptive act or practice” – even if the host isn’t a commercial entity. 
“This is troubling on its face, but it is particularly so at a moment when the chair of the FTC has taken unprecedented steps to politicize the agency and has explicitly promised to use the power of the agency to punish platforms and services on an ideological, as opposed to principled, basis,” the Cyber Civil Rights Initiative, a nonprofit dedicated to ending revenge porn, said in a statement.  Proactive monitoring McKinney predicts that platforms will start moderating content before it’s disseminated so they have fewer problematic posts to take down in the future.  Platforms are already using AI to monitor for harmful content. Kevin Guo, CEO and co-founder of AI-generated content detection startup Hive, said his company works with online platforms to detect deepfakes and child sexual abuse material. Some of Hive’s customers include Reddit, Giphy, Vevo, Bluesky, and BeReal.  “We were actually one of the tech companies that endorsed that bill,” Guo told TechCrunch. “It’ll help solve some pretty important problems and compel these platforms to adopt solutions more proactively.”  Hive’s model is a software-as-a-service, so the startup doesn’t control how platforms use its product to flag or remove content. But Guo said many clients insert Hive’s API at the point of upload to monitor before anything is sent out to the community.  A Reddit spokesperson told TechCrunch the platform uses “sophisticated internal tools, processes, and teams to address and remove” NCII. Reddit also partners with nonprofit SWGfl to deploy its StopNCII tool, which scans live traffic for matches against a database of known NCII and removes accurate matches. The company did not share how it would ensure the person requesting the takedown is the victim.  McKinney warns this kind of monitoring could extend into encrypted messages in the future. 
While the law focuses on public or semi-public dissemination, it also requires platforms to “remove and make reasonable efforts to prevent the reupload” of nonconsensual intimate images. She argues this could incentivize proactive scanning of all content, even in encrypted spaces. The law doesn’t include any carve outs for end-to-end encrypted messaging services like WhatsApp, Signal, or iMessage.  Meta, Signal, and Apple have not responded to TechCrunch’s request for more information on their plans for encrypted messaging. Broader free speech implications On March 4, Trump delivered a joint address to Congress in which he praised the Take It Down Act and said he looked forward to signing it into law.  “And I’m going to use that bill for myself, too, if you don’t mind,” he added. “There’s nobody who gets treated worse than I do online.”  While the audience laughed at the comment, not everyone took it as a joke. Trump hasn’t been shy about suppressing or retaliating against unfavorable speech, whether that’s labeling mainstream media outlets “enemies of the people,” barring The Associated Press from the Oval Office despite a court order, or pulling funding from NPR and PBS. On Thursday, the Trump administration barred Harvard University from accepting foreign student admissions, escalating a conflict that began after Harvard refused to adhere to Trump’s demands that it make changes to its curriculum and eliminate DEI-related content, among other things. In retaliation, Trump has frozen federal funding to Harvard and threatened to revoke the university’s tax-exempt status.   
“At a time when we’re already seeing school boards try to ban books and we’re seeing certain politicians be very explicitly about the types of content they don’t want people to ever see, whether it’s critical race theory or abortion information or information about climate change…it is deeply uncomfortable for us with our past work on content moderation to see members of both parties openly advocating for content moderation at this scale,” McKinney said. #why #new #antirevenge #porn #law
    Why a new anti-revenge porn law has free speech experts alarmed 
    techcrunch.com
    Privacy and digital rights advocates are raising alarms over a law that many would expect them to cheer: a federal crackdown on revenge porn and AI-generated deepfakes.  The newly signed Take It Down Act makes it illegal to publish nonconsensual explicit images — real or AI-generated — and gives platforms just 48 hours to comply with a victim’s takedown request or face liability. While widely praised as a long-overdue win for victims, experts have also warned its vague language, lax standards for verifying claims, and tight compliance window could pave the way for overreach, censorship of legitimate content, and even surveillance.  “Content moderation at scale is widely problematic and always ends up with important and necessary speech being censored,” India McKinney, director of federal affairs at Electronic Frontier Foundation, a digital rights organization, told TechCrunch. Online platforms have one year to establish a process for removing nonconsensual intimate imagery (NCII). While the law requires takedown requests come from victims or their representatives, it only asks for a physical or electronic signature — no photo ID or other form of verification is needed. That likely aims to reduce barriers for victims, but it could create an opportunity for abuse. “I really want to be wrong about this, but I think there are going to be more requests to take down images depicting queer and trans people in relationships, and even more than that, I think it’s gonna be consensual porn,” McKinney said.  Senator Marsha Blackburn (R-TN), a co-sponsor of the Take It Down Act, also sponsored the Kids Online Safety Act which puts the onus on platforms to protect children from harmful content online. Blackburn has said she believes content related to transgender people is harmful to kids. 
Similarly, the Heritage Foundation — the conservative think tank behind Project 2025 — has also said that “keeping trans content away from children is protecting kids.”  Because of the liability that platforms face if they don’t take down an image within 48 hours of receiving a request, “the default is going to be that they just take it down without doing any investigation to see if this actually is NCII or if it’s another type of protected speech, or if it’s even relevant to the person who’s making the request,” said McKinney. Techcrunch event Join us at TechCrunch Sessions: AI Secure your spot for our leading AI industry event with speakers from OpenAI, Anthropic, and Cohere. For a limited time, tickets are just $292 for an entire day of expert talks, workshops, and potent networking. Exhibit at TechCrunch Sessions: AI Secure your spot at TC Sessions: AI and show 1,200+ decision-makers what you’ve built — without the big spend. Available through May 9 or while tables last. Berkeley, CA | June 5 REGISTER NOW Snapchat and Meta have both said they are supportive of the law, but neither responded to TechCrunch’s requests for more information about how they’ll verify whether the person requesting a takedown is a victim.  Mastodon, a decentralized platform that hosts its own flagship server that others can join, told TechCrunch it would lean towards removal if it was too difficult to verify the victim.  Mastodon and other decentralized platforms like Bluesky or Pixelfed may be especially vulnerable to the chilling effect of the 48-hour takedown rule. These networks rely on independently operated servers, often run by nonprofits or individuals. Under the law, the FTC can treat any platform that doesn’t “reasonably comply” with takedown demands as committing an “unfair or deceptive act or practice” – even if the host isn’t a commercial entity. 
“This is troubling on its face, but it is particularly so at a moment when the chair of the FTC has taken unprecedented steps to politicize the agency and has explicitly promised to use the power of the agency to punish platforms and services on an ideological, as opposed to principled, basis,” the Cyber Civil Rights Initiative, a nonprofit dedicated to ending revenge porn, said in a statement.

Proactive monitoring

McKinney predicts that platforms will start moderating content before it’s disseminated so they have fewer problematic posts to take down in the future.

Platforms are already using AI to monitor for harmful content. Kevin Guo, CEO and co-founder of AI-generated content detection startup Hive, said his company works with online platforms to detect deepfakes and child sexual abuse material (CSAM). Some of Hive’s customers include Reddit, Giphy, Vevo, Bluesky, and BeReal.

“We were actually one of the tech companies that endorsed that bill,” Guo told TechCrunch. “It’ll help solve some pretty important problems and compel these platforms to adopt solutions more proactively.”

Hive’s model is software-as-a-service, so the startup doesn’t control how platforms use its product to flag or remove content. But Guo said many clients insert Hive’s API at the point of upload to monitor content before anything is sent out to the community.

A Reddit spokesperson told TechCrunch the platform uses “sophisticated internal tools, processes, and teams to address and remove” NCII. Reddit also partners with the nonprofit SWGfl to deploy its StopNCII tool, which scans live traffic for matches against a database of known NCII and removes accurate matches. The company did not share how it would ensure the person requesting the takedown is the victim.

McKinney warns this kind of monitoring could extend into encrypted messages in the future.
While the law focuses on public or semi-public dissemination, it also requires platforms to “remove and make reasonable efforts to prevent the reupload” of nonconsensual intimate images. She argues this could incentivize proactive scanning of all content, even in encrypted spaces. The law doesn’t include any carve-outs for end-to-end encrypted messaging services like WhatsApp, Signal, or iMessage.

Meta, Signal, and Apple have not responded to TechCrunch’s request for more information on their plans for encrypted messaging.

Broader free speech implications

On March 4, Trump delivered a joint address to Congress in which he praised the Take It Down Act and said he looked forward to signing it into law.

“And I’m going to use that bill for myself, too, if you don’t mind,” he added. “There’s nobody who gets treated worse than I do online.”

While the audience laughed at the comment, not everyone took it as a joke. Trump hasn’t been shy about suppressing or retaliating against unfavorable speech, whether that’s labeling mainstream media outlets “enemies of the people,” barring The Associated Press from the Oval Office despite a court order, or pulling funding from NPR and PBS.

On Thursday, the Trump administration barred Harvard University from accepting foreign student admissions, escalating a conflict that began after Harvard refused to adhere to Trump’s demands that it make changes to its curriculum and eliminate DEI-related content, among other things. In retaliation, Trump has frozen federal funding to Harvard and threatened to revoke the university’s tax-exempt status.
“At a time when we’re already seeing school boards try to ban books and we’re seeing certain politicians be very explicitly about the types of content they don’t want people to ever see, whether it’s critical race theory or abortion information or information about climate change…it is deeply uncomfortable for us with our past work on content moderation to see members of both parties openly advocating for content moderation at this scale,” McKinney said.