• Jurassic World Evolution 3 Devs Remove AI-Generated Art After Fans Yell At Them A Lot

    When Jurassic World Evolution 3 was announced earlier this month, many fans were disappointed to learn that Frontier Developments was planning to include AI-generated artwork in the park sim. Now, after some “feedback” from fans, the studio is backing down and removing the AI slop. Read more...
  • BOUNCING FROM RUBBER DUCKIES AND FLYING SHEEP TO CLONES FOR THE BOYS SEASON 4

    By TREVOR HOGG
    Images courtesy of Prime Video.

    For those seeking an alternative to the MCU, Prime Video has two offerings of the live-action and animated variety that take the superhero genre into R-rated territory where the hands of the god-like figures get dirty, bloodied and severed. “The Boys is about the intersection of celebrity and politics using superheroes,” states Stephan Fleet, VFX Supervisor on The Boys. “Sometimes I see the news and I don’t even know we can write to catch up to it! But we try. Invincible is an intense look at an alternate DC Universe that has more grit to the superhero side of it all. On one hand, I was jealous watching Season 1 of Invincible because in animation you can do things that you can’t do in real life on a budget.” Season 4 does not tone down the blood, gore and body count. Fleet notes, “The writers almost have this dialogue with us. Sometimes, they’ll write in the script, ‘And Fleet will come up with a cool visual effect for how to kill this person.’ Or, ‘Chhiu, our fight coordinator, will make an awesome fight.’ It is a frequent topic of conversation. We’re constantly trying to be inventive and create new ways to kill people!”

    When Splinter (Rob Benedict) splits in two, the cloning effect was inspired by cellular mitosis.

    “The writers almost have this dialogue with us. Sometimes, they’ll write in the script, ‘And Fleet will come up with a cool visual effect for how to kill this person.’ Or, ‘Chhiu, our fight coordinator, will make an awesome fight.’ It is a frequent topic of conversation. We’re constantly trying to be inventive and create new ways to kill people!”
    —Stephan Fleet, VFX Supervisor

    A total of 1,600 visual effects shots were created for the eight episodes by ILM, Pixomondo, MPC Toronto, Spin VFX, DNEG, Untold Studios, Luma Pictures and Rocket Science VFX. Previs was a critical part of the process. “We have John Griffith [Previs Director], who owns a small company called CNCPT out of Texas, and he does wonderful Unreal Engine level previs,” Fleet remarks. “On set, we have a cartoon of what is going to be done, and you’ll be amazed, specifically for action and heavy visual effects stuff, how close those shots are to the previs when we finish.” Founding Director of the Federal Bureau of Superhuman Affairs, Victoria Neuman, literally gets ripped in half by two tendrils coming out of Compound V-enhanced Billy Butcher, the leader of superhero resistance group The Boys. “The word that we like to use on this show is ‘grounded,’ and I like to say ‘grounded’ with an asterisk in this day and age because we’re grounded until we get to killing people in the craziest ways. In this case, having someone floating in the air and being ripped in half by two tendrils was all CG.”

    Multiple plates were shot to enable Simon Pegg to phase through the actor lying in a hospital bed.

    Testing can get rather elaborate. “For that end scene with Butcher’s tendrils, the room was two stories, and we were able to put the camera up high along with a bunch of blood cannons,” Fleet recalls. “When the body rips in half and explodes, there is a practical component. We rained down a bunch of real blood and guts right in front of Huey. It’s a known joke that we like to douse Jack Quaid with blood as much as possible! In this case, the special effects team led by Hudson Kenny needed to test it the day before, and I said, ‘I’ll be the guinea pig for the test.’ They covered the whole place with plastic like it was a Dexter kill room because you don’t want to destroy the set. I’m standing there in a white hazmat suit with goggles on, covered from head to toe in plastic and waiting as they’re tweaking all of these things. It sounds like World War II going on. They’re on walkie talkies to each other, and then all of a sudden, it’s ‘Five, four, three, two, one…’ And I get exploded with blood. I wanted to see what it was like, and it’s intense.”

    “On set, we have a cartoon of what is going to be done, and you’ll be amazed, specifically for action and heavy visual effects stuff, how close those shots are to the previs when we finish.”
    —Stephan Fleet, VFX Supervisor

    The Deep has a love affair with an octopus called Ambrosius, voiced by Tilda Swinton. “It’s implied bestiality!” Fleet laughs. “I would call it more of a romance. What was fun from my perspective is that I knew what the look was going to be [from Season 3], so then it’s about putting in the details and the animation. One of the instincts that you always have when you’re making a sea creature that talks to a human [is] you tend to want to give it human gestures and eyebrows. Erik Kripke [Creator, Executive Producer, Showrunner, Director, Writer] said, ‘No. We have to find things that an octopus could do that conveys the same emotion.’ That’s when ideas came in, such as putting a little The Deep toy inside the water tank. When Ambrosius is trying to have an intimate moment or connect with him, she can wrap a tentacle around that. My favorite experience doing Ambrosius was when The Deep is reading poetry to her on a bed. CG creatures touching humans is one of the more complicated things to do and make look real. Ambrosius’ tentacles reach for his arm, and it becomes an intimate moment. More than touching the skin, displacing the bedsheet as Ambrosius moved ended up becoming a lot of CG, and we had to go back and forth a few times to get that looking right; that turned out to be tricky.”

    A building is replaced by a massive crowd attending a rally being held by Homelander.

    In a twisted form of sexual foreplay, Sister Sage has The Deep perform a transorbital lobotomy on her. “Thank you, Amazon, for selling lobotomy tools as novelty items!” Fleet chuckles. “We filmed it with a lobotomy tool on set. There is a lot of safety involved in doing something like that. Obviously, you don’t want to put any performer in any situation where they come close to putting anything real near their eye. We created this half lobotomy tool and did this complicated split screen with the lobotomy tool on a teeter totter. The Deep was [acting in a certain way] in one shot and Sister Sage reacted in the other shot. To marry the two ended up being a lot of CG work. Then there are these close-ups which are full CG. I always keep a dummy head that is painted gray that I use all of the time for reference. In macrophotography I filmed this lobotomy tool going right into the eye area. I did that because the tool is chrome, so it’s reflective and has ridges. It has an interesting reflective property. I was able to see how and what part of the human eye reflects onto the tool. A lot of that shot became about realistic reflections and lighting on the tool. Then heavy CG for displacing the eye and pushing the lobotomy tool into it. That was one of the more complicated sequences that we had to achieve.”

    In order to create an intimate moment between Ambrosius and The Deep, a toy version of the superhero was placed inside of the water tank that she could wrap a tentacle around.

    “The word that we like to use on this show is ‘grounded,’ and I like to say ‘grounded’ with an asterisk in this day and age because we’re grounded until we get to killing people in the craziest ways. In this case, having someone floating in the air and being ripped in half by two tendrils was all CG.”
    —Stephan Fleet, VFX Supervisor

    Sheep and chickens embark on a violent rampage courtesy of Compound V, with the latter piercing the chest of one of Victoria Neuman’s bodyguards. “Weirdly, that was one of our more traditional shots,” Fleet states. “What is fun about that one is I asked for real chickens as reference. The chicken flying through his chest is real. It’s our chicken wrangler in a green suit gently tossing a chicken. We blended two real plates together with some CG in the middle.” A connection was made with a sci-fi classic. “The sheep kill this bull, and we shot it in this narrow corridor of fencing. When they run, I always equated it to the Trench Run in Star Wars and looked at the sheep as TIE fighters or X-wings coming at them.” The scene was one of the scarier moments for the visual effects team. Fleet explains, “When I read the script, I thought this could be the moment where we jump the shark. For the shots where the sheep are still and scream to the camera, Untold Studios did a bunch of R&D and came up with baboon teeth. I tried to keep anything real as much as possible, but, obviously, when sheep are flying, they have to be CG. I call it the Battlestar Galactica theory, where I like to shake the camera, overshoot shots and make it sloppy when they’re in the air so you can add motion blur. Comedy also helps sell visual effects.”

    The sheep injected with Compound V develop the ability to fly and were shot in an imperfect manner to help ground the scenes.

    Once injected with Compound V, Hugh Campbell Sr. (Simon Pegg) develops the ability to phase through objects, including human beings. “We called it the Bro-nut because his name in the script is Wall Street Bro,” Fleet notes. “That was a complicated motion control shot, repeating the move over and over again. We had to shoot multiple plates of Simon Pegg and the guy in the bed. Special effects and prosthetics created a dummy guy with a hole in his chest with practical blood dripping down. It was meshing it together and getting the timing right in post. On top of that, there was the CG blood immediately around Simon Pegg.” The phasing effect had to avoid appearing as a dissolve. “I had this idea of doing high-frequency vibration on the X axis loosely based on how The Flash vibrates through walls. You want everything to have a loose motivation that then helps trigger the visuals. We tried not to overcomplicate that because, ultimately, you want something like that to be quick. If you spend too much time on phasing, it can look cheesy. In our case, it was a lot of false walls. Simon Pegg is running into a greenscreen hole which we plug in with a wall or coming out of one. I went off the actor’s action, and we added a light opacity mix with some X-axis shake.”

    Providing a different twist to the fights was the replacement of spurting blood with photoreal rubber duckies during a drug-induced hallucination.

    Homelander (Anthony Starr) breaks a mirror, which emphasizes his multiple personality disorder. “The original plan was that special effects was going to pre-break a mirror, and we were going to shoot Anthony Starr moving his head doing all of the performances in the different parts of the mirror,” Fleet reveals. “This was all based on a photo that my ex-brother-in-law sent me. He was walking down a street in Glendale, California, came across a broken mirror that someone had thrown out, and took a photo of himself where he had five heads in the mirror. We get there on the day, and I’m realizing that this is really complicated. Anthony has to do these five different performances, and we have to deal with infinite mirrors. At the last minute, I said, ‘We have to do this on a clean mirror.’ We did it on a clear mirror and gave Anthony different eyelines. The mirror break was all done in post, and we were able to cheat his head slightly and art-direct where the break crosses his chin. Editorial was able to do split screens for the timing of the dialogue.”

    “For the shots where the sheep are still and scream to the camera, Untold Studios did a bunch of R&D and came up with baboon teeth. I tried to keep anything real as much as possible, but, obviously, when sheep are flying, they have to be CG. I call it the Battlestar Galactica theory, where I like to shake the camera, overshoot shots and make it sloppy when they’re in the air so you can add motion blur. Comedy also helps sell visual effects.”
    —Stephan Fleet, VFX Supervisor

    Initially, the plan was to use a practical mirror, but creating a digital version proved to be the more effective solution.

    A different spin on the bloodbath occurs during a fight when a drugged Frenchie (Tomer Capone) hallucinates as Kimiko Miyashiro (Karen Fukuhara) goes on a killing spree. “We went back and forth with a lot of different concepts for what this hallucination would be,” Fleet remarks. “When we filmed it, we landed on Frenchie having a synesthesia moment where he’s seeing a lot of abstract colors flying in the air. We started getting into that in post and it wasn’t working. We went back to the rubber duckies, which goes back to the story of him in the bathtub. What’s in the bathtub? Rubber duckies, bubbles and water. There was a lot of physics and logic required to figure out how these rubber duckies could float out of someone’s neck. We decided on bubbles when Kimiko hits people’s heads. At one point, we had water when she got shot, but it wasn’t working, so we killed it. We probably did about 100 different versions. We got really detailed with our rubber duckie modeling because we didn’t want it to look cartoony. That took a long time.”

    Ambrosius, voiced by Tilda Swinton, gets a lot more screentime in Season 4.

    Splinter splitting in two was achieved heavily in CG. “Erik threw out the words ‘cellular mitosis’ early on as something he wanted to use,” Fleet states. “We shot Rob Benedict on a greenscreen doing all of the different performances for the clones that pop out. It was a crazy amount of CG work with Houdini and particle and skin effects. We previs’d the sequence so we had specific actions. One clone comes out to the right and the other pulls backwards.” What tends to go unnoticed by many is Splinter’s clones setting up for a press conference being held by Firecracker (Valorie Curry). “It’s funny how no one brings up the 22-hour motion control shot that we had to do with Splinter on the stage, which was the most complicated shot!” Fleet observes. “We have this sweeping long shot that brings you into the room and follows Splinter as he carries a container to the stage and hands it off to a clone, and then you reveal five more of them interweaving each other and interacting with all of these objects. It’s like a minute-long dance. First off, you have to choreograph it. We previs’d it, but then you need to get people to do it. We hired dancers and put different colored armbands on them. The camera is like another performer, and a metronome is going, which enables you to find a pace. That took about eight hours of rehearsal. Then Rob has to watch each one of their performances and mimic it to the beat. When he is handing off a box of cables, it’s to a double who is going to have to be erased and be him on the other side. They have to be almost perfect in their timing and lineup in order to take it over in visual effects and make it work.”
  • Malicious PyPI Package Masquerades as Chimera Module to Steal AWS, CI/CD, and macOS Data

    Jun 16, 2025Ravie LakshmananMalware / DevOps

    Cybersecurity researchers have discovered a malicious package on the Python Package Index (PyPI) repository that's capable of harvesting sensitive developer-related information, such as credentials, configuration data, and environment variables, among others.
    The package, named chimera-sandbox-extensions, attracted 143 downloads and likely targets users of a service called Chimera Sandbox, which was released by Singaporean tech company Grab last August to facilitate "experimentation and development of solutions."
    The package masquerades as a helper module for Chimera Sandbox, but "aims to steal credentials and other sensitive information such as Jamf configuration, CI/CD environment variables, AWS tokens, and more," JFrog security researcher Guy Korolevski said in a report published last week.
    Once installed, it attempts to connect to an external domain whose domain name is generated using a domain generation algorithm (DGA) in order to download and execute a next-stage payload.
    Specifically, the malware acquires from the domain an authentication token, which is then used to send a request to the same domain and retrieve the Python-based information stealer.
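
    A DGA, in this context, derives a stream of pseudo-random domain names from a shared seed (often the current date), so the operator only needs to register one of them while defenders have to anticipate them all. The sketch below is a generic, defender-side illustration of the idea, pre-computing candidate domains for a blocklist; its seed, hash, and TLD are assumptions for illustration only, not the actual algorithm used by chimera-sandbox-extensions.

        import hashlib
        from datetime import date, timedelta

        def candidate_domains(days_ahead: int = 7, tld: str = ".com") -> list[str]:
            """Pre-compute the next few date-seeded domains for a DNS blocklist."""
            domains = []
            for offset in range(days_ahead):
                seed = (date.today() + timedelta(days=offset)).isoformat()  # e.g. "2025-06-16"
                digest = hashlib.sha256(seed.encode()).hexdigest()
                domains.append(digest[:16] + tld)  # first 16 hex chars become the label
            return domains

        if __name__ == "__main__":
            print("\n".join(candidate_domains()))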

    The stealer malware is equipped to siphon a wide range of data from infected machines. This includes -

    JAMF receipts, which are records of software packages installed by Jamf Pro on managed computers
    Pod sandbox environment authentication tokens and git information
    CI/CD information from environment variables
    Zscaler host configuration
    Amazon Web Services account information and tokens
    Public IP address
    General platform, user, and host information

    The kind of data gathered by the malware shows that it's mainly geared towards corporate and cloud infrastructure. In addition, the extraction of JAMF receipts indicates that it's also capable of targeting Apple macOS systems.
    The collected information is sent via a POST request back to the same domain, after which the server assesses if the machine is a worthy target for further exploitation. However, JFrog said it was unable to obtain the payload at the time of analysis.
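
    Given that the package sat on PyPI long enough to attract 143 downloads, a quick environment audit is a sensible first response. A minimal sketch, assuming only the package name reported above (any further indicators of compromise would have to come from the JFrog write-up):

        from importlib.metadata import version, PackageNotFoundError

        SUSPECT_PACKAGES = ["chimera-sandbox-extensions"]  # name taken from the report

        def audit() -> list[str]:
            """Return any suspect packages installed in the current environment."""
            hits = []
            for name in SUSPECT_PACKAGES:
                try:
                    hits.append(f"{name}=={version(name)}")
                except PackageNotFoundError:
                    pass  # not installed here
            return hits

        if __name__ == "__main__":
            print("Suspect packages found:", audit() or "none")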
    "The targeted approach employed by this malware, along with the complexity of its multi-stage targeted payload, distinguishes it from the more generic open-source malware threats we have encountered thus far, highlighting the advancements that malicious packages have made recently," Jonathan Sar Shalom, director of threat research at JFrog Security Research team, said.

    "This new sophistication of malware underscores why development teams remain vigilant with updates—alongside proactive security research – to defend against emerging threats and maintain software integrity."
    The disclosure comes as SafeDep and Veracode detailed a number of malware-laced npm packages that are designed to execute remote code and download additional payloads. The packages in question are listed below -

    eslint-config-airbnb-compat
    ts-runtime-compat-check
    solders
    @mediawave/lib

    All the identified npm packages have since been taken down from npm, but not before they were downloaded hundreds of times from the package registry.
    SafeDep's analysis of eslint-config-airbnb-compat found that the JavaScript library has ts-runtime-compat-check listed as a dependency, which, in turn, contacts an external server defined in the former package to retrieve and execute a Base64-encoded string. The exact nature of the payload is unknown.
    "It implements a multi-stage remote code execution attack using a transitive dependency to hide the malicious code," SafeDep researcher Kunal Singh said.
    Solders, on the other hand, has been found to incorporate a post-install script in its package.json, causing the malicious code to be automatically executed as soon as the package is installed.
    "At first glance, it's hard to believe that this is actually valid JavaScript," the Veracode Threat Research team said. "It looks like a seemingly random collection of Japanese symbols. It turns out that this particular obfuscation scheme uses the Unicode characters as variable names and a sophisticated chain of dynamic code generation to work."
    Decoding the script reveals an extra layer of obfuscation, unpacking which reveals its main function: Check if the compromised machine is Windows, and if so, run a PowerShell command to retrieve a next-stage payload from a remote server.
    This second-stage PowerShell script, also obscured, is designed to fetch a Windows batch script from another domain and configures a Windows Defender Antivirus exclusion list to avoid detection. The batch script then paves the way for the execution of a .NET DLL that reaches out to a PNG image hosted on ImgBB.
    "is grabbing the last two pixels from this image and then looping through some data contained elsewhere in it," Veracode said. "It ultimately builds up in memory YET ANOTHER .NET DLL."

    Furthermore, the DLL is equipped to create task scheduler entries and features the ability to bypass user account control (UAC) using a combination of FodHelper.exe and programmatic identifiers (ProgIDs) to evade defenses and avoid triggering any security alerts to the user.
    The newly-downloaded DLL is Pulsar RAT, a "free, open-source Remote Administration Tool for Windows" and a variant of the Quasar RAT.
    "From a wall of Japanese characters to a RAT hidden within the pixels of a PNG file, the attacker went to extraordinary lengths to conceal their payload, nesting it a dozen layers deep to evade detection," Veracode said. "While the attacker's ultimate objective for deploying the Pulsar RAT remains unclear, the sheer complexity of this delivery mechanism is a powerful indicator of malicious intent."
    Crypto Malware in the Open-Source Supply Chain
    The findings also coincide with a report from Socket that identified credential stealers, cryptocurrency drainers, cryptojackers, and clippers as the main types of threats targeting the cryptocurrency and blockchain development ecosystem.

    Some of the examples of these packages include -

    express-dompurify and pumptoolforvolumeandcomment, which are capable of harvesting browser credentials and cryptocurrency wallet keys
    bs58js, which drains a victim's wallet and uses multi-hop transfers to obscure theft and frustrate forensic tracing.
    lsjglsjdv, asyncaiosignal, and raydium-sdk-liquidity-init, which function as clippers that monitor the system clipboard for cryptocurrency wallet strings and replace them with threat actor-controlled addresses to reroute transactions to the attackers

    "As Web3 development converges with mainstream software engineering, the attack surface for blockchain-focused projects is expanding in both scale and complexity," Socket security researcher Kirill Boychenko said.
    "Financially motivated threat actors and state-sponsored groups are rapidly evolving their tactics to exploit systemic weaknesses in the software supply chain. These campaigns are iterative, persistent, and increasingly tailored to high-value targets."
    AI and Slopsquatting
    The rise of artificial intelligence-assisted coding, also called vibe coding, has unleashed another novel threat in the form of slopsquatting, where large language models (LLMs) can hallucinate non-existent but plausible package names that bad actors can weaponize to conduct supply chain attacks.
    Trend Micro, in a report last week, said it observed an unnamed advanced agent "confidently" cooking up a phantom Python package named starlette-reverse-proxy, only for the build process to crash with the error "module not found." However, should an adversary upload a package with the same name on the repository, it can have serious security consequences.
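
    One practical guardrail is to confirm that an agent-suggested dependency actually exists on the registry before it ever reaches pip. A minimal sketch using PyPI's public JSON API (the helper name and the second test package are arbitrary choices); note that a hit is not a safety signal, since a slopsquatter may already have registered the hallucinated name:

        import urllib.error
        import urllib.request

        def exists_on_pypi(name: str) -> bool:
            """Return True if PyPI knows the package, False on a 404."""
            url = f"https://pypi.org/pypi/{name}/json"
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    return resp.status == 200
            except urllib.error.HTTPError:
                return False  # 404 etc.: no such package (or it was removed)

        if __name__ == "__main__":
            for pkg in ("starlette-reverse-proxy", "requests"):
                print(pkg, "->", "found on PyPI" if exists_on_pypi(pkg) else "not on PyPI")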

    Furthermore, the cybersecurity company noted that advanced coding agents and workflows such as Claude Code CLI, OpenAI Codex CLI, and Cursor AI with Model Context Protocol (MCP)-backed validation can help reduce, but not completely eliminate, the risk of slopsquatting.
    "When agents hallucinate dependencies or install unverified packages, they create an opportunity for slopsquatting attacks, in which malicious actors pre-register those same hallucinated names on public registries," security researcher Sean Park said.
    "While reasoning-enhanced agents can reduce the rate of phantom suggestions by approximately half, they do not eliminate them entirely. Even the vibe-coding workflow augmented with live MCP validations achieves the lowest rates of slip-through, but still misses edge cases."

    Found this article interesting? Follow us on Twitter  and LinkedIn to read more exclusive content we post.

    SHARE




    #malicious #pypi #package #masquerades #chimera
    Malicious PyPI Package Masquerades as Chimera Module to Steal AWS, CI/CD, and macOS Data
    Jun 16, 2025Ravie LakshmananMalware / DevOps Cybersecurity researchers have discovered a malicious package on the Python Package Indexrepository that's capable of harvesting sensitive developer-related information, such as credentials, configuration data, and environment variables, among others. The package, named chimera-sandbox-extensions, attracted 143 downloads and likely targets users of a service called Chimera Sandbox, which was released by Singaporean tech company Grab last August to facilitate "experimentation and development ofsolutions." The package masquerades as a helper module for Chimera Sandbox, but "aims to steal credentials and other sensitive information such as Jamf configuration, CI/CD environment variables, AWS tokens, and more," JFrog security researcher Guy Korolevski said in a report published last week. Once installed, it attempts to connect to an external domain whose domain name is generated using a domain generation algorithmin order to download and execute a next-stage payload. Specifically, the malware acquires from the domain an authentication token, which is then used to send a request to the same domain and retrieve the Python-based information stealer. The stealer malware is equipped to siphon a wide range of data from infected machines. This includes - JAMF receipts, which are records of software packages installed by Jamf Pro on managed computers Pod sandbox environment authentication tokens and git information CI/CD information from environment variables Zscaler host configuration Amazon Web Services account information and tokens Public IP address General platform, user, and host information The kind of data gathered by the malware shows that it's mainly geared towards corporate and cloud infrastructure. In addition, the extraction of JAMF receipts indicates that it's also capable of targeting Apple macOS systems. The collected information is sent via a POST request back to the same domain, after which the server assesses if the machine is a worthy target for further exploitation. However, JFrog said it was unable to obtain the payload at the time of analysis. "The targeted approach employed by this malware, along with the complexity of its multi-stage targeted payload, distinguishes it from the more generic open-source malware threats we have encountered thus far, highlighting the advancements that malicious packages have made recently," Jonathan Sar Shalom, director of threat research at JFrog Security Research team, said. "This new sophistication of malware underscores why development teams remain vigilant with updates—alongside proactive security research – to defend against emerging threats and maintain software integrity." The disclosure comes as SafeDep and Veracode detailed a number of malware-laced npm packages that are designed to execute remote code and download additional payloads. The packages in question are listed below - eslint-config-airbnb-compatts-runtime-compat-checksolders@mediawave/libAll the identified npm packages have since been taken down from npm, but not before they were downloaded hundreds of times from the package registry. SafeDep's analysis of eslint-config-airbnb-compat found that the JavaScript library has ts-runtime-compat-check listed as a dependency, which, in turn, contacts an external server defined in the former packageto retrieve and execute a Base64-encoded string. The exact nature of the payload is unknown. 
"It implements a multi-stage remote code execution attack using a transitive dependency to hide the malicious code," SafeDep researcher Kunal Singh said. Solders, on the other hand, has been found to incorporate a post-install script in its package.json, causing the malicious code to be automatically executed as soon as the package is installed. "At first glance, it's hard to believe that this is actually valid JavaScript," the Veracode Threat Research team said. "It looks like a seemingly random collection of Japanese symbols. It turns out that this particular obfuscation scheme uses the Unicode characters as variable names and a sophisticated chain of dynamic code generation to work." Decoding the script reveals an extra layer of obfuscation, unpacking which reveals its main function: Check if the compromised machine is Windows, and if so, run a PowerShell command to retrieve a next-stage payload from a remote server. This second-stage PowerShell script, also obscured, is designed to fetch a Windows batch script from another domainand configures a Windows Defender Antivirus exclusion list to avoid detection. The batch script then paves the way for the execution of a .NET DLL that reaches out to a PNG image hosted on ImgBB. "is grabbing the last two pixels from this image and then looping through some data contained elsewhere in it," Veracode said. "It ultimately builds up in memory YET ANOTHER .NET DLL." Furthermore, the DLL is equipped to create task scheduler entries and features the ability to bypass user account controlusing a combination of FodHelper.exe and programmatic identifiersto evade defenses and avoid triggering any security alerts to the user. The newly-downloaded DLL is Pulsar RAT, a "free, open-source Remote Administration Tool for Windows" and a variant of the Quasar RAT. "From a wall of Japanese characters to a RAT hidden within the pixels of a PNG file, the attacker went to extraordinary lengths to conceal their payload, nesting it a dozen layers deep to evade detection," Veracode said. "While the attacker's ultimate objective for deploying the Pulsar RAT remains unclear, the sheer complexity of this delivery mechanism is a powerful indicator of malicious intent." Crypto Malware in the Open-Source Supply Chain The findings also coincide with a report from Socket that identified credential stealers, cryptocurrency drainers, cryptojackers, and clippers as the main types of threats targeting the cryptocurrency and blockchain development ecosystem. Some of the examples of these packages include - express-dompurify and pumptoolforvolumeandcomment, which are capable of harvesting browser credentials and cryptocurrency wallet keys bs58js, which drains a victim's wallet and uses multi-hop transfers to obscure theft and frustrate forensic tracing. lsjglsjdv, asyncaiosignal, and raydium-sdk-liquidity-init, which functions as a clipper to monitor the system clipboard for cryptocurrency wallet strings and replace them with threat actor‑controlled addresses to reroute transactions to the attackers "As Web3 development converges with mainstream software engineering, the attack surface for blockchain-focused projects is expanding in both scale and complexity," Socket security researcher Kirill Boychenko said. "Financially motivated threat actors and state-sponsored groups are rapidly evolving their tactics to exploit systemic weaknesses in the software supply chain. These campaigns are iterative, persistent, and increasingly tailored to high-value targets." 
    AI and Slopsquatting

    The rise of artificial intelligence (AI)-assisted coding, also called vibe coding, has unleashed another novel threat in the form of slopsquatting, where large language models (LLMs) can hallucinate non-existent but plausible package names that bad actors can weaponize to conduct supply chain attacks.
    Trend Micro, in a report last week, said it observed an unnamed advanced agent "confidently" cooking up a phantom Python package named starlette-reverse-proxy, only for the build process to crash with the error "module not found." However, should an adversary upload a package with the same name to the repository, it could have serious security consequences.
    Furthermore, the cybersecurity company noted that advanced coding agents and workflows such as Claude Code CLI, OpenAI Codex CLI, and Cursor AI with Model Context Protocol (MCP)-backed validation can help reduce, but not completely eliminate, the risk of slopsquatting.
    "When agents hallucinate dependencies or install unverified packages, they create an opportunity for slopsquatting attacks, in which malicious actors pre-register those same hallucinated names on public registries," security researcher Sean Park said. "While reasoning-enhanced agents can reduce the rate of phantom suggestions by approximately half, they do not eliminate them entirely. Even the vibe-coding workflow augmented with live MCP validations achieves the lowest rates of slip-through, but still misses edge cases."
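    A simple guard against slopsquatting is to confirm that a suggested dependency actually exists on the registry before installing it. The sketch below is illustrative only and not from the Trend Micro report: it queries PyPI's public JSON endpoint for a candidate name, and the release-count threshold is an arbitrary placeholder. A 404 only tells you the name is unregistered; an existing project still needs the usual vetting.

        # Illustrative pre-install check: does this package name actually exist on
        # PyPI, and does it have a non-trivial release history? Thresholds are placeholders.
        import json
        import urllib.error
        import urllib.request

        def pypi_project_info(name: str):
            """Return PyPI metadata for `name`, or None if the project does not exist."""
            url = f"https://pypi.org/pypi/{name}/json"
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    return json.load(resp)
            except urllib.error.HTTPError as err:
                if err.code == 404:  # unknown project: possibly hallucinated or unregistered
                    return None
                raise

        def looks_plausible(name: str, min_releases: int = 3) -> bool:
            info = pypi_project_info(name)
            if info is None:
                return False
            # Require a minimal release history; the threshold is arbitrary.
            return len(info.get("releases", {})) >= min_releases

        if __name__ == "__main__":
            for candidate in ["requests", "starlette-reverse-proxy"]:
                verdict = "ok to review further" if looks_plausible(candidate) else "do not install blindly"
                print(candidate, "->", verdict)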
    THEHACKERNEWS.COM
  • Why Designers Get Stuck In The Details And How To Stop

    You’ve drawn fifty versions of the same screen — and you still hate every one of them. Begrudgingly, you pick three, show them to your product manager, and hear: “Looks cool, but the idea doesn’t work.” Sound familiar?
    In this article, I’ll unpack why designers fall into detail work at the wrong moment, examining both process pitfalls and the underlying psychological reasons, as understanding these traps is the first step to overcoming them. I’ll also share tactics I use to climb out of that trap.
    Reason #1: You’re Afraid To Show Rough Work
    We designers worship detail. We’re taught that true craft equals razor‑sharp typography, perfect grids, and pixel precision. So the minute a task arrives, we pop open Figma and start polishing long before polish is needed.
    I’ve skipped the sketch phase more times than I care to admit. I told myself it would be faster, yet I always ended up spending hours producing a tidy mock‑up when a scribbled thumbnail would have sparked a five‑minute chat with my product manager. Rough sketches felt “unprofessional,” so I hid them.
    The cost? Lost time, wasted energy — and, by the third redo, teammates were quietly wondering if I even understood the brief.
    The real problem here is the habit: we open Figma and start perfecting the UI before we’ve even solved the problem.
    So why do we hide these rough sketches? It’s not just a bad habit or plain silly. There are solid psychological reasons behind it. We often just call it perfectionism, but it’s deeper than wanting things neat. Digging into the psychology (like the research by Hewitt and Flett) shows there are a couple of flavors driving this:

    Socially prescribed perfectionism: It’s that nagging feeling that everyone else expects perfect work from you, which makes showing anything rough feel like walking into the lion’s den.
    Self-oriented perfectionism: Where you’re the one setting impossibly high standards for yourself, leading to brutal self-criticism if anything looks slightly off.

    Either way, the result’s the same: showing unfinished work feels wrong, and you miss out on that vital early feedback.
    Back to the design side, remember that clients rarely see architects’ first pencil sketches, but these sketches still exist; they guide structural choices before the 3D render. Treat your thumbnails the same way — artifacts meant to collapse uncertainty, not portfolio pieces. Once stakeholders see the upside, roughness becomes a badge of speed, not sloppiness. So, the key is to consciously make that shift:
    Treat early sketches as disposable tools for thinking and actively share them to get feedback faster.

    Reason #2: You Fix The Symptom, Not The Cause
    Before tackling any task, we need to understand what business outcome we’re aiming for. Product managers might come to us asking to enlarge the payment button in the shopping cart because users aren’t noticing it. The suggested solution itself isn’t necessarily bad, but before redesigning the button, we should ask, “What data suggests they aren’t noticing it?” Don’t get me wrong, I’m not saying you shouldn’t trust your product manager. On the contrary, these questions help ensure you’re on the same page and working with the same data.
    From my experience, here are several reasons why users might not be clicking that coveted button:

    Users don’t understand that this step is for payment.
    They understand it’s about payment but expect order confirmation first.
    Due to incorrect translation, users don’t understand what the button means.
    Lack of trust signals (no security icons, unclear seller information).
    Unexpected additional costs (hidden fees, shipping) that appear at this stage.
    Technical issues (inactive button, page freezing).

    Now, imagine you simply did what the manager suggested. Would you have solved the problem? Hardly.
    Moreover, the responsibility for the unresolved issue would fall on you, as the interface solution lies within the design domain. The product manager actually did their job correctly by identifying a problem: suspiciously few users are clicking the button.
    Psychologically, taking on this bigger role isn’t easy. It means overcoming the fear of making mistakes and the discomfort of exploring unclear problems rather than just doing tasks. This shift means seeing ourselves as partners who create value — even if it means fighting a hesitation to question product managers (which might come from a fear of speaking up or a desire to avoid challenging authority) — and understanding that using our product logic expertise proactively is crucial for modern designers.
    There’s another critical reason why we, designers, need to be a bit like product managers: the rise of AI. I deliberately used a simple example about enlarging a button, but I’m confident that in the near future, AI will easily handle routine design tasks. This worries me, but at the same time, I’m already gladly stepping into the product manager’s territory: understanding product and business metrics, formulating hypotheses, conducting research, and so on. It might sound like I’m taking work away from PMs, but believe me, they undoubtedly have enough on their plates and are usually more than happy to delegate some responsibilities to designers.
    Reason #3: You’re Solving The Wrong Problem
    Before solving anything, ask whether the problem even deserves your attention.
    During a major home‑screen redesign, our goal was to drive more users into paid services. The initial hypothesis — making service buttons bigger and brighter might help returning users — seemed reasonable enough to test. However, even when A/B tests (a method of comparing two versions of a design to determine which performs better) showed minimal impact, we continued to tweak those buttons.
    Only later did it click: the home screen isn’t the place to sell; visitors open the app to start, not to buy. We removed that promo block, and nothing broke. Contextual entry points deeper into the journey performed brilliantly. Lesson learned:
    Without the right context, any visual tweak is lipstick on a pig.

    Why did we get stuck polishing buttons instead of stopping sooner? It’s easy to get tunnel vision. Psychologically, it’s likely the good old sunk cost fallacy kicking in: we’d already invested time in the buttons, so stopping felt like wasting that effort, even though the data wasn’t promising.
    It’s just easier to keep fiddling with something familiar than to admit we need a new plan. Perhaps the simple question I should have asked myself when results stalled was: “Are we optimizing the right thing or just polishing something that fundamentally doesn’t fit the user’s primary goal here?” That alone might have saved hours.
    Reason #4: You’re Drowning In Unactionable Feedback
    We all discuss our work with colleagues. But here’s a crucial point: what kind of question do you pose to kick off that discussion? If your go-to is “What do you think?” well, that question might lead you down a rabbit hole of personal opinions rather than actionable insights. While experienced colleagues will cut through the noise, others, unsure what to evaluate, might comment on anything and everything — fonts, button colors, even when you desperately need to discuss a user flow.
    What matters here are two things:

    The question you ask,
    The context you give.

    That means clearly stating the problem, what you’ve learned, and how your idea aims to fix it.
    For instance:
    “The problem is our payment conversion rate has dropped by X%. I’ve interviewed users and found they abandon payment because they don’t understand how the total amount is calculated. My solution is to show a detailed cost breakdown. Do you think this actually solves the problem for them?”

    Here, you’ve stated the problem (conversion drop), shared your insight (user confusion), explained your solution (cost breakdown), and asked a direct question. It’s even better if you prepare a list of specific sub-questions. For instance: “Are all items in the cost breakdown clear?” or “Does the placement of this breakdown feel intuitive within the payment flow?”
    Another good habit is to keep your rough sketches and previous iterations handy. Some of your colleagues’ suggestions might be things you’ve already tried. It’s great if you can discuss them immediately to either revisit those ideas or definitively set them aside.
    I’m not a psychologist, but experience tells me that, psychologically, the reluctance to be this specific often stems from a fear of our solution being rejected. We tend to internalize feedback: a seemingly innocent comment like, “Have you considered other ways to organize this section?” or “Perhaps explore a different structure for this part?” can instantly morph in our minds into “You completely messed up the structure. You’re a bad designer.” Imposter syndrome, in all its glory.
    So, to wrap up this point, here are two recommendations:

    Prepare for every design discussion. A couple of focused questions will yield far more valuable input than a vague “So, what do you think?”
    Actively work on separating feedback on your design from your self-worth. If a mistake is pointed out, acknowledge it, learn from it, and you’ll be less likely to repeat it. This is often easier said than done. For me, it took years of working with a psychotherapist. If you struggle with this, I sincerely wish you strength in overcoming it.

    Reason #5: You’re Just Tired
    Sometimes, the issue isn’t strategic at all — it’s fatigue. Fussing over icon corners can feel like a cozy bunker when your brain is fried. There’s a name for this: decision fatigue. Basically, your brain’s battery for hard thinking is low, so it hides out in the easy, comfy zone of pixel-pushing.
    A striking example comes from a New York Times article titled “Do You Suffer From Decision Fatigue?” It described how judges deciding on release requests were far more likely to grant release early in the day (about 70% of cases) compared to late in the day (less than 10%) simply because their decision-making energy was depleted. Luckily, designers rarely hold someone’s freedom in their hands, but the example dramatically shows how fatigue can impact our judgment and productivity.
    What helps here:

    Swap tasks. Trade tickets with another designer; novelty resets your focus.
    Talk to another designer. If NDA permits, ask peers outside the team for a sanity check.
    Step away. Even a ten‑minute walk can do more than a double‑shot espresso.

    By the way, I came up with these ideas while walking around my office. I was lucky to work near a river, and those short walks quickly turned into a helpful habit.

    And one more trick that helps me snap out of detail mode early: if I catch myself making around 20 little tweaks — changing font weight, color, border radius — I just stop. Over time, it turned into a habit. I have a similar one with Instagram: by the third reel, my brain quietly asks, “Wait, weren’t we working?” Funny how that kind of nudge saves a ton of time.
    Four Steps I Use to Avoid Drowning In Detail
    Knowing these potential traps, here’s the practical process I use to stay on track:
    1. Define the Core Problem & Business Goal
    Before anything, dig deep: what’s the actual problem we’re solving, not just the requested task or a surface-level symptom? Ask ‘why’ repeatedly. What user pain or business need are we addressing? Then, state the clear business goal: “What metric am I moving, and do we have data to prove this is the right lever?” If retention is the goal, decide whether push reminders, gamification, or personalised content is the best route. The wrong lever, or tackling a symptom instead of the cause, dooms everything downstream.
    2. Choose the Mechanic (Solution Principle)
    Once the core problem and goal are clear, lock the solution principle or ‘mechanic’ first. Going with a game layer? Decide if it’s leaderboards, streaks, or badges. Write it down. Then move on. No UI yet. This keeps the focus high-level before diving into pixels.
    3. Wireframe the Flow & Get Focused Feedback
    Now open Figma. Map screens, layout, and transitions. Boxes and arrows are enough. Keep the fidelity low so the discussion stays on the flow, not colour. Crucially, when you share these early wires, ask specific questions and provide clear context (as discussed in ‘Reason #4’) to get actionable feedback, not just vague opinions.
    4. Polish the Visuals (Mindfully)
    I only let myself tweak grids, type scales, and shadows after the flow is validated. If progress stalls, or before a major polish effort, I surface the work in a design critique — again using targeted questions and clear context — instead of hiding in version 47. This ensures detailing serves the now-validated solution.
    Even for something as small as a single button, running these four checkpoints takes about ten minutes and saves hours of decorative dithering.
    Wrapping Up
    Next time you feel the pull to vanish into mock‑ups before the problem is nailed down, pause and ask what you might be avoiding. Yes, that can expose an uncomfortable truth, whether it’s the fuzzy core problem or the need to ask for tough feedback. But naming it gives you the power to face the real issue head-on, and it keeps the project focused on solving the right problem, not just perfecting a flawed solution.
    Attention to detail is a superpower when used at the right moment. Obsessing over pixels too soon, though, is a bad habit and a warning light telling us the process needs a rethink.
    SMASHINGMAGAZINE.COM
  • YouTube might slow down your videos if you block ads

    It’s fairly easy to block the constant, incessant advertising that appears on YouTube. Google would prefer that you don’t, or that you pay up to make the ads go away. Last weekend, the company started its latest campaign to try to badger ad-block users into disabling their extensions. Since then, it looks like YouTube has escalated things and is now intentionally slowing down videos.
    Posters on Reddit and the Brave browser forum have observed videos being blacked out on first load for approximately the length of a pre-roll ad, with a pop-up link that directs users to the ad-blocking section of this technical support page. “Check whether your browser extensions that block ads are affecting video playback,” suggests Google. “As another option, try opening YouTube in an incognito window with all extensions disabled and check if the issue continues.” PCWorld staff has seen this in action, using uBlock Origin Lite.
    Ad-block extension developers quickly got around the pop-up issue earlier this week, with one AdGuard representative calling the process “a classic cat-and-mouse game.” But if Google wanted to instigate a more serious crackdown on users blocking ads without paying up, it could do so easily—and we’ve seen it pull this same move before. Posters on the latest issue speculate that the slowdowns might be tagged to specific Google or YouTube user accounts that were detected blocking ads previously, which would bypass any kind of interaction with a specific browser or extension.
    I can’t independently confirm that’s happening, but it wouldn’t surprise me. It also wouldn’t shock me if Google is seeing a larger percentage of YouTube users blocking advertising, as is the case all across the web, as the quantity of advertising rises while quality takes a nosedive. YouTube video creators are having to get, well, creative to seek alternate revenue beyond basic AdSense accounts, as sponsored videos are now constant across the platform and more channels put new videos behind paywalls on YouTube itself or via other platforms like Patreon.

    YouTube is attacking the issue from other angles as well. Tech-focused creators that show how to use third-party tools to block ads or download videos from the site are getting their videos taken down and their accounts flagged for violating the extremely vague policy around “harmful and dangerous content.”
    If I may editorialize a bit: Google, if you want more people to subscribe to YouTube Premium and remove advertising, you need to make it cheaper. What YouTube charges per month just to get rid of ads is about the same as a premium subscription from other services where users can watch full movies and series. YouTube as a platform is a much lower bar and just doesn’t compete at that level. I’m not going to pay that much to get rid of ads, not when it doesn’t actually get rid of all the ads—those sponsored and subscriber-only videos are still all over the place—and the site is filling up with AI slop. “Premium Lite,” which neuters the offerings for mobile and music-focused users, doesn’t make the cut either.
    And to be clear, I have no problem paying for the stuff I watch. I already pay more than $15 a month to support the individual YouTube channels I enjoy, like Second Wind, Drawfee, and several tech podcasts. But I do it via Patreon because sending that money through YouTube feels gross. If Google wants people to pay up, it needs to lower the price enough so that it’s no longer worth the hassle of blocking them.
    It’s a lesson that the music, movie, and game industries learned a long time ago as they fought the initial wave of internet piracy… and now seem to be forgetting again.
    WWW.PCWORLD.COM
  • Will Gamble Architects restores and extends Hertfordshire farmhouse

    The farmhouse, Flint Farm, in North Hertfordshire, was in poor condition with a number of unsympathetic additions that had altered its character over the years.
    Will Gamble Architects was appointed to restore and extend it for a young couple who wanted to transform it into their long-term family home and improve the house’s relationship with its garden and wider farmyard setting.
    While the original brief had been to replace an existing conservatory with a new extension, the practice encouraged the client to extend by integrating an adjacent barn into the envelope of the reworked house, changing the way the property was used.

    Existing unsympathetic extensions were removed and the internal layout was reconfigured, with a new linking element added between the barn and farmhouse.
    The series of internal spaces that has been created is designed to retain the character of the historic listed property.
    Architect’s view
    The barn was sensitively restored and converted into an informal living space. Its timber-framed structure was refurbished and left exposed to celebrate the historic fabric of the barn and the craftsmanship of its original construction. A contemporary picture window with parts of the historic timber frame exposed within its reveals frames a view of the garden, as well as the barn’s unique structure.
    The extension, which links both barn and farmhouse, is deliberately contemporary in appearance to ensure that the historic buildings remain legible. It’s low-rise, built into the sloping garden and particularly lightweight in appearance. Floor-to-ceiling glass sits on a plinth of semi-knapped flint, rooting the intervention into the garden. A ribbon of black steel, with shallow peaks and troughs, hovers above. The form of this ribbon draws inspiration from the distinctive black timber-clad gables that characterise the farmhouse and the surrounding outbuildings of the old farmstead.
    Internally, the addition’s structure is exposed, much like the historic timber-framed structure of the farmhouse and the barn. The interiors are tactile, defined by texture and pattern and inspired by the characteristics of the old farmstead.
    Miles Kelsey, associate, Will Gamble Architects

    Client’s view
    We bought the farmhouse as a family home to move out of our two-bed flat in north London.
    Will visited the farmhouse with us whilst we were working through the purchase to understand what we were looking to do and went on to support us through each stage.
    The farmhouse combined the original 16th-century timber-framed building with unattractive, unusable and poorly planned later extensions, which left the house completely disconnected from the garden.
    Will and Miles transformed the whole house including moving the front door, converting an adjacent barn and building the modern extension as our kitchen and dining room that makes the best of the garden and views.
    The process that Will and Miles ran was a perfect balance of what we wanted, Sophie’s specific tastes and creativity, combined with the benefit of the architects’ views and what they have done before.
    What really stood out to us was the way they worked with the council during the planning process, so we got consent for almost everything we wanted, expressing their own views but ensuring we were always leading the process, and their attention to detail during the build stage.
    Overall we are incredibly happy with what Will and Miles helped us create and the way they led us through the whole process.

    Source: Will Gamble Architects

    Project data
    Location North Hertfordshire
    Start on site April 2023
    Completion February 2025
    Gross internal floor area 320m²
    Form of contract or procurement route JCT MW Building Contract. Design-Bid-Build
    Architect Will Gamble Architects
    Client Private
    Structural engineer Axiom Structures
    Principal designer Will Gamble Architects
    Main contractor Elite Construction
    WWW.ARCHITECTSJOURNAL.CO.UK
  • As AI faces court challenges from Disney and Universal, legal battles are shaping the industry's future | Opinion

    As AI faces court challenges from Disney and Universal, legal battles are shaping the industry's future | Opinion
    Silicon advances and design innovations do still push us forward – but the future landscape of the industry is also being sculpted in courtrooms and parliaments

    Image credit: Disney / Epic Games

    Opinion

    by Rob Fahey
    Contributing Editor

    Published on June 13, 2025

    In some regards, the past couple of weeks have felt rather reassuring.
    We've just seen a hugely successful launch for a new Nintendo console, replete with long queues for midnight sales events. Over the next few days, the various summer events and showcases that have sprouted amongst the scattered bones of E3 generated waves of interest and hype for a host of new games.
    It all feels like old times. It's enough to make you imagine that while change is the only constant, at least we're facing change that's fairly well understood, change in the form of faster, cheaper silicon, or bigger, more ambitious games.
    If only the winds that blow through this industry all came from such well-defined points on the compass. Nestled in amongst the week's headlines, though, was something that's likely to have profound but much harder to understand impacts on this industry and many others over the coming years – a lawsuit being brought by Disney and NBC Universal against Midjourney, operators of the eponymous generative AI image creation tool.
    In some regards, the lawsuit looks fairly straightforward; the arguments made and considered in reaching its outcome, though, may have a profound impact on both the ability of creatives and media companies (including game studios and publishers) to protect their IP rights from a very new kind of threat, and the ways in which a promising but highly controversial and risky new set of development and creative tools can be used commercially.
    A more likely tack on Midjourney's side will be the argument that they are not responsible for what their customers create with the tool
    I say the lawsuit looks straightforward from some angles, but honestly overall it looks fairly open and shut – the media giants accuse Midjourney of replicating their copyrighted characters and material, and of essentially building a machine for churning out limitless copyright violations.
    The evidence submitted includes screenshot after screenshot of Midjourney generating pages of images of famous copyrighted and trademarked characters ranging from Yoda to Homer Simpson, so "no we didn't" isn't going to be much of a defence strategy here.
    A more likely tack on Midjourney's side will be the argument that they are not responsible for what their customers create with the tool – you don't sue the manufacturers of oil paints or canvases when artists use them to paint something copyright-infringing, nor does Microsoft get sued when someone writes something libellous in Word, and Midjourney may try to argue that their software belongs in that tool category, with users alone being ultimately responsible for how they use them.

    If that argument prevails and survives appeals and challenges, it would be a major triumph for the nascent generative AI industry and a hugely damaging blow to IP holders and creatives, since it would seriously undermine their argument that AI companies shouldn't be able to include copyrighted material into training data sets without licensing or compensation.
    The reason Disney and NBCU are going after Midjourney specifically seems to be partially down to Midjourney being especially reticent to negotiate with them about licensing fees and prompt restrictions; other generative AI firms have started talking, at least, about paying for content licenses for training data, and have imposed various limitations on their software to prevent the most egregious and obvious forms of copyright violation (at least for famous characters belonging to rich companies; if you're an individual or a smaller company, it's entirely the Wild West out there as regards your IP rights).
    In the process, though, they're essentially risking a court showdown over a set of not-quite-clear legal questions at the heart of this dispute, and if Midjourney were to prevail in that argument, other AI companies would likely back off from engaging with IP holders on this topic.
    To be clear, though, it seems highly unlikely that Midjourney will win that argument, at least not in the medium to long term. Yet depending on how this case moves forward, losing the argument could have equally dramatic consequences – especially if the courts find themselves compelled to consider the question of how, exactly, a generative AI system reproduces a copyrighted character with such precision without storing copyright-infringing data in some manner.
    The 2020s are turning out to be the decade in which many key regulatory issues come to a head all at once
    AI advocates have been trying to handwave around this notion from the outset, but at some point a court is going to have to sit down and confront the fact that the precision with which these systems can replicate copyrighted characters, scenes, and other materials requires that they must have stored that infringing material in some form.
    That it's stored as a scattered mesh of probabilities across the vertices of a high-dimensional vector array, rather than a straightforward, monolithic media file, is clearly important but may ultimately be considered moot. If the data is in the system and can be replicated on request, how that differs from Napster or The Pirate Bay is arguably just a matter of technical obfuscation.
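    To make that point concrete, here is a deliberately tiny, hypothetical sketch (nothing to do with Midjourney's actual architecture): a toy "model" that stores nothing but next-character probability tables, yet reproduces its one training passage verbatim, because every distribution it learned is a point mass.

```typescript
// Toy illustration: a model stored purely as probabilities can still act as a copy.
// This is a character-level n-gram table, not a diffusion model.

type Dist = Map<string, number>; // next character -> probability

function train(text: string, order = 8): Map<string, Dist> {
  const counts = new Map<string, Map<string, number>>();
  for (let i = order; i < text.length; i++) {
    const ctx = text.slice(i - order, i);
    const row = counts.get(ctx) ?? new Map<string, number>();
    row.set(text[i], (row.get(text[i]) ?? 0) + 1);
    counts.set(ctx, row);
  }
  // Normalise raw counts into probability distributions.
  const model = new Map<string, Dist>();
  for (const [ctx, row] of counts) {
    const total = [...row.values()].reduce((a, b) => a + b, 0);
    model.set(ctx, new Map([...row].map(([c, n]) => [c, n / total] as [string, number])));
  }
  return model;
}

function generate(model: Map<string, Dist>, seed: string, length: number, order = 8): string {
  let out = seed;
  while (out.length < length) {
    const dist = model.get(out.slice(-order));
    if (!dist) break;
    // Greedy decoding: take the most probable next character.
    out += [...dist.entries()].sort((a, b) => b[1] - a[1])[0][0];
  }
  return out;
}

// Trained on a single passage whose 8-character contexts are all unique, the
// "model" regurgitates that passage exactly from its opening characters.
const passage = "An unmistakably recognisable excerpt from someone's copyrighted work.";
const model = train(passage);
console.log(generate(model, passage.slice(0, 8), passage.length) === passage); // true
```

    A real image generator is vastly more compressed and lossy than a lookup table like this, which is precisely the nuance a court would have to weigh; the sketch only shows why "it's just probabilities" does not, on its own, settle whether a copy is being stored.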
    Not having to defend that technical argument in court thus far has been a huge boon to the generative AI field; if it is knocked over in that venue, it will have knock-on effects on every company in the sector and on every business that uses their products.
    Nobody can be quite sure which of the various rocks and pebbles being kicked on this slope is going to set off the landslide, but there seems to be an increasing consensus that a legal and regulatory reckoning is coming for generative AI.
    Consequently, a lot of what's happening in that market right now has the feel of companies desperately trying to establish products and lock in revenue streams before that happens, because it'll be harder to regulate a technology that's genuinely integrated into the world's economic systems than it is to impose limits on one that's currently only clocking up relatively paltry sales and revenues.

    Keeping an eye on this is crucial for any industry that's started experimenting with AI in its workflows – none more than a creative industry like video games, where various forms of AI usage have been posited, although the enthusiasm and buzz so far massively outweighs any tangible benefits from the technology.
    Regardless of what happens in legal and regulatory contexts, AI is already a double-edged sword for any creative industry.
    Used judiciously, it might help to speed up development processes and reduce overheads. Applied in a slapdash or thoughtless manner, it can and will end up wreaking havoc on development timelines, filling up storefronts with endless waves of vaguely-copyright-infringing slop, and potentially make creative firms, from the industry's biggest companies to its smallest indie developers, into victims of impossibly large-scale copyright infringement rather than beneficiaries of a new wave of technology-fuelled productivity.
    The legal threat now hanging over the sector isn't new, merely amplified. We've known for a long time that AI-generated artwork, code, and text has significant problems from the perspective of intellectual property rights (you can infringe someone else's copyright with it, but generally can't impose your own copyright on its creations – opening careless companies up to a risk of having key assets in their game being technically public domain and impossible to protect).
    Even if you're not using AI yourself, however – even if you're vehemently opposed to it on moral and ethical grounds (which is entirely valid given the highly dubious land-grab these companies have done for their training data) – the Midjourney judgement and its fallout may well impact the creative work you produce yourself and how it ends up being used and abused by these products in future.
    This all has huge ramifications for the games business and will shape everything from how games are created to how IP can be protected for many years to come – a wind of change that's very different and vastly more unpredictable than those we're accustomed to. It's a reminder of just how much of the industry's future is currently being shaped not in development studios and semiconductor labs, but rather in courtrooms and parliamentary committees.
    The ways in which generative AI can be used and how copyright can persist in the face of it will be fundamentally shaped in courts and parliaments, but it's far from the only crucially important topic being hashed out in those venues.
    The ongoing legal turmoil over the opening up of mobile app ecosystems, too, will have huge impacts on the games industry. Meanwhile, the debates over loot boxes, gambling, and various consumer protection aspects related to free-to-play models continue to rumble on in the background.
    Because the industry moves fast while governments move slow, it's easy to forget that that's still an active topic as far as governments are concerned, and hammers may come down at any time.
    Regulation by governments, whether through the passage of new legislation or the interpretation of existing laws in the courts, has always loomed in the background of any major industry, especially one with strong cultural relevance. The games industry is no stranger to that being part of the background heartbeat of the business.
    The 2020s, however, are turning out to be the decade in which many key regulatory issues come to a head all at once, whether it's AI and copyright, app stores and walled gardens, or loot boxes and IAP-based business models.
    Rulings on those topics in various different global markets will create a complex new landscape that will shape the winds that blow through the business, and how things look in the 2030s and beyond will be fundamentally impacted by those decisions.
    WWW.GAMESINDUSTRY.BIZ