• BOUNCING FROM RUBBER DUCKIES AND FLYING SHEEP TO CLONES FOR THE BOYS SEASON 4

    By TREVOR HOGG
    Images courtesy of Prime Video.

    For those seeking an alternative to the MCU, Prime Video has two offerings of the live-action and animated variety that take the superhero genre into R-rated territory where the hands of the god-like figures get dirty, bloodied and severed. “The Boys is about the intersection of celebrity and politics using superheroes,” states Stephan Fleet, VFX Supervisor on The Boys. “Sometimes I see the news and I don’t even know if we can write to catch up to it! But we try. Invincible is an intense look at an alternate DC Universe that has more grit to the superhero side of it all. On one hand, I was jealous watching Season 1 of Invincible because in animation you can do things that you can’t do in real life on a budget.” Season 4 does not tone down the blood, gore and body count. Fleet notes, “The writers almost have this dialogue with us. Sometimes, they’ll write in the script, ‘And Fleet will come up with a cool visual effect for how to kill this person.’ Or, ‘Chhiu, our fight coordinator, will make an awesome fight.’ It is a frequent topic of conversation. We’re constantly trying to be inventive and create new ways to kill people!”

    When Splinter (Rob Benedict) splits in two, the cloning effect was inspired by cellular mitosis.

    “The writers almost have this dialogue with us. Sometimes, they’ll write in the script, ‘And Fleet will come up with a cool visual effect for how to kill this person.’ Or, ‘Chhiu, our fight coordinator, will make an awesome fight.’ It is a frequent topic of conversation. We’re constantly trying to be inventive and create new ways to kill people!”
    —Stephan Fleet, VFX Supervisor

    A total of 1,600 visual effects shots were created for the eight episodes by ILM, Pixomondo, MPC Toronto, Spin VFX, DNEG, Untold Studios, Luma Pictures and Rocket Science VFX. Previs was a critical part of the process. “We have John Griffith [Previs Director], who owns a small company called CNCPT out of Texas, and he does wonderful Unreal Engine level previs,” Fleet remarks. “On set, we have a cartoon of what is going to be done, and you’ll be amazed, specifically for action and heavy visual effects stuff, how close those shots are to the previs when we finish.” Founding Director of the Federal Bureau of Superhuman Affairs, Victoria Neuman, literally gets ripped in half by two tendrils coming out of Compound V-enhanced Billy Butcher, the leader of superhero resistance group The Boys. “The word that we like to use on this show is ‘grounded,’ and I like to say ‘grounded’ with an asterisk in this day and age because we’re grounded until we get to killing people in the craziest ways. In this case, having someone floating in the air and being ripped in half by two tendrils was all CG.”

    Multiple plates were shot to enable Simon Pegg to phase through the actor lying in a hospital bed.

    Testing can get rather elaborate. “For that end scene with Butcher’s tendrils, the room was two stories, and we were able to put the camera up high along with a bunch of blood cannons,” Fleet recalls. “When the body rips in half and explodes, there is a practical component. We rained down a bunch of real blood and guts right in front of Hughie. It’s a known joke that we like to douse Jack Quaid with blood as much as possible! In this case, the special effects team led by Hudson Kenny needed to test it the day before, and I said, ‘I’ll be the guinea pig for the test.’ They covered the whole place with plastic like it was a Dexter kill room because you don’t want to destroy the set. I’m standing there in a white hazmat suit with goggles on, covered from head to toe in plastic and waiting as they’re tweaking all of these things. It sounds like World War II going on. They’re on walkie talkies to each other, and then all of a sudden, it’s ‘Five, four, three, two, one…’ And I get exploded with blood. I wanted to see what it was like, and it’s intense.”

    “On set, we have a cartoon of what is going to be done, and you’ll be amazed, specifically for action and heavy visual effects stuff, how close those shots are to the previs when we finish.”
    —Stephan Fleet, VFX Supervisor

    The Deep has a love affair with an octopus called Ambrosius, voiced by Tilda Swinton. “It’s implied bestiality!” Fleet laughs. “I would call it more of a romance. What was fun from my perspective is that I knew what the look was going to be [from Season 3], so then it’s about putting in the details and the animation. One of the instincts that you always have when you’re making a sea creature that talks to a human [is] you tend to want to give it human gestures and eyebrows. Erik Kripke [Creator, Executive Producer, Showrunner, Director, Writer] said, ‘No. We have to find things that an octopus could do that conveys the same emotion.’ That’s when ideas came in, such as putting a little The Deep toy inside the water tank. When Ambrosius is trying to have an intimate moment or connect with him, she can wrap a tentacle around that. My favorite experience doing Ambrosius was when The Deep is reading poetry to her on a bed. CG creatures touching humans is one of the more complicated things to do and make look real. Ambrosius’ tentacles reach for his arm, and it becomes an intimate moment. More than touching the skin, displacing the bedsheet as Ambrosius moved ended up becoming a lot of CG, and we had to go back and forth a few times to get that looking right; that turned out to be tricky.”

    A building is replaced by a massive crowd attending a rally being held by Homelander.

    In a twisted form of sexual foreplay, Sister Sage has The Deep perform a transorbital lobotomy on her. “Thank you, Amazon for selling lobotomy tools as novelty items!” Fleet chuckles. “We filmed it with a lobotomy tool on set. There is a lot of safety involved in doing something like that. Obviously, you don’t want to put any performer in any situation where they come close to putting anything real near their eye. We created this half lobotomy tool and did this complicated split screen with the lobotomy tool on a teeter-totter. The Deep was [acting in a certain way] in one shot and Sister Sage reacted in the other shot. To marry the two ended up being a lot of CG work. Then there are these close-ups which are full CG. I always keep a dummy head that is painted gray that I use all of the time for reference. In macrophotography, I filmed this lobotomy tool going right into the eye area. I did that because the tool is chrome, so it’s reflective and has ridges. It has an interesting reflective property. I was able to see how and what part of the human eye reflects onto the tool. A lot of that shot became about realistic reflections and lighting on the tool. Then heavy CG for displacing the eye and pushing the lobotomy tool into it. That was one of the more complicated sequences that we had to achieve.”

    In order to create an intimate moment between Ambrosius and The Deep, a toy version of the superhero was placed inside of the water tank that she could wrap a tentacle around.

    “The word that we like to use on this show is ‘grounded,’ and I like to say ‘grounded’ with an asterisk in this day and age because we’re grounded until we get to killing people in the craziest ways. In this case, having someone floating in the air and being ripped in half by two tendrils was all CG.”
    —Stephan Fleet, VFX Supervisor

    Sheep and chickens embark on a violent rampage courtesy of Compound V, with the latter piercing the chest of a bodyguard belonging to Victoria Neuman. “Weirdly, that was one of our more traditional shots,” Fleet states. “What is fun about that one is I asked for real chickens as reference. The chicken flying through his chest is real. It’s our chicken wrangler in a green suit gently tossing a chicken. We blended two real plates together with some CG in the middle.” A connection was made with a sci-fi classic. “The sheep kill this bull, and we shot it in this narrow corridor of fencing. When they run, I always equated it to the Trench Run in Star Wars and looked at the sheep as TIE fighters or X-wings coming at them.” The scene was one of the scarier moments for the visual effects team. Fleet explains, “When I read the script, I thought this could be the moment where we jump the shark. For the shots where the sheep are still and scream to the camera, Untold Studios did a bunch of R&D and came up with baboon teeth. I tried to keep anything real as much as possible, but, obviously, when sheep are flying, they have to be CG. I call it the Battlestar Galactica theory, where I like to shake the camera, overshoot shots and make it sloppy when they’re in the air so you can add motion blur. Comedy also helps sell visual effects.”

    The sheep injected with Compound V develop the ability to fly and were shot in an imperfect manner to help ground the scenes.

    Once injected with Compound V, Hugh Campbell Sr. develops the ability to phase through objects, including human beings. “We called it the Bro-nut because his name in the script is Wall Street Bro,” Fleet notes. “That was a complicated motion control shot, repeating the move over and over again. We had to shoot multiple plates of Simon Pegg and the guy in the bed. Special effects and prosthetics created a dummy guy with a hole in his chest with practical blood dripping down. It was meshing it together and getting the timing right in post. On top of that, there was the CG blood immediately around Simon Pegg.” The phasing effect had to avoid appearing as a dissolve. “I had this idea of doing high-frequency vibration on the X axis loosely based on how The Flash vibrates through walls. You want everything to have a loose motivation that then helps trigger the visuals. We tried not to overcomplicate that because, ultimately, you want something like that to be quick. If you spend too much time on phasing, it can look cheesy. In our case, it was a lot of false walls. Simon Pegg is running into a greenscreen hole which we plug in with a wall or coming out of one. I went off the actor’s action, and we added a light opacity mix with some X-axis shake.”

    Providing a different twist to the fights was the replacement of spurting blood with photoreal rubber duckies during a drug-induced hallucination.

    Homelander breaks a mirror, which emphasizes his multiple personality disorder. “The original plan was that special effects was going to pre-break a mirror, and we were going to shoot Antony Starr moving his head doing all of the performances in the different parts of the mirror,” Fleet reveals. “This was all based on a photo that my ex-brother-in-law sent me. He was walking down a street in Glendale, California, came across a broken mirror that someone had thrown out, and took a photo of himself where he had five heads in the mirror. We get there on the day, and I’m realizing that this is really complicated. Antony has to do these five different performances, and we have to deal with infinite mirrors. At the last minute, I said, ‘We have to do this on a clean mirror.’ We did it on a clean mirror and gave Antony different eyelines. The mirror break was all done in post, and we were able to cheat his head slightly and art-direct where the break crosses his chin. Editorial was able to do split screens for the timing of the dialogue.”

    “For the shots where the sheep are still and scream to the camera, Untold Studios did a bunch of R&D and came up with baboon teeth. I tried to keep anything real as much as possible, but, obviously, when sheep are flying, they have to be CG. I call it the Battlestar Galactica theory, where I like to shake the camera, overshoot shots and make it sloppy when they’re in the air so you can add motion blur. Comedy also helps sell visual effects.”
    —Stephan Fleet, VFX Supervisor

    Initially, the plan was to use a practical mirror, but creating a digital version proved to be the more effective solution.

    A different spin on the bloodbath occurs during a fight when a drugged Frenchie hallucinates as Kimiko Miyashiro goes on a killing spree. “We went back and forth with a lot of different concepts for what this hallucination would be,” Fleet remarks. “When we filmed it, we landed on Frenchie having a synesthesia moment where he’s seeing a lot of abstract colors flying in the air. We started getting into that in post and it wasn’t working. We went back to the rubber duckies, which goes back to the story of him in the bathtub. What’s in the bathtub? Rubber duckies, bubbles and water. There was a lot of physics and logic required to figure out how these rubber duckies could float out of someone’s neck. We decided on bubbles when Kimiko hits people’s heads. At one point, we had water when she got shot, but it wasn’t working, so we killed it. We probably did about 100 different versions. We got really detailed with our rubber duckie modeling because we didn’t want it to look cartoony. That took a long time.”

    Ambrosius, voiced by Tilda Swinton, gets a lot more screentime in Season 4.

    The moment when Splinter splits in two was achieved heavily in CG. “Erik threw out the words ‘cellular mitosis’ early on as something he wanted to use,” Fleet states. “We shot Rob Benedict on a greenscreen doing all of the different performances for the clones that pop out. It was a crazy amount of CG work with Houdini and particle and skin effects. We previs’d the sequence so we had specific actions. One clone comes out to the right and the other pulls backwards.” What tends to go unnoticed by many is Splinter’s clones setting up for a press conference being held by Firecracker. “It’s funny how no one brings up the 22-hour motion control shot that we had to do with Splinter on the stage, which was the most complicated shot!” Fleet observes. “We have this sweeping long shot that brings you into the room and follows Splinter as he carries a container to the stage and hands it off to a clone, and then you reveal five more of them interweaving each other and interacting with all of these objects. It’s like a minute-long dance. First off, you have to choreograph it. We previs’d it, but then you need to get people to do it. We hired dancers and put different colored armbands on them. The camera is like another performer, and a metronome is going, which enables you to find a pace. That took about eight hours of rehearsal. Then Rob has to watch each one of their performances and mimic it to the beat. When he is handing off a box of cables, it’s to a double who is going to have to be erased and be him on the other side. They have to be almost perfect in their timing and lineup in order to take it over in visual effects and make it work.”
    #bouncing #rubber #duckies #flying #sheep
    BOUNCING FROM RUBBER DUCKIES AND FLYING SHEEP TO CLONES FOR THE BOYS SEASON 4
    By TREVOR HOGG Images courtesy of Prime Video. For those seeking an alternative to the MCU, Prime Video has two offerings of the live-action and animated variety that take the superhero genre into R-rated territory where the hands of the god-like figures get dirty, bloodied and severed. “The Boys is about the intersection of celebrity and politics using superheroes,” states Stephan Fleet, VFX Supervisor on The Boys. “Sometimes I see the news and I don’t even know we can write to catch up to it! But we try. Invincible is an intense look at an alternate DC Universe that has more grit to the superhero side of it all. On one hand, I was jealous watching Season 1 of Invincible because in animation you can do things that you can’t do in real life on a budget.” Season 4 does not tone down the blood, gore and body count. Fleet notes, “The writers almost have this dialogue with us. Sometimes, they’ll write in the script, ‘And Fleet will come up with a cool visual effect for how to kill this person.’ Or, ‘Chhiu, our fight coordinator, will make an awesome fight.’ It is a frequent topic of conversation. We’re constantly trying to be inventive and create new ways to kill people!” When Splintersplits in two, the cloning effect was inspired by cellular mitosis. “The writers almost have this dialogue with us. Sometimes, they’ll write in the script, ‘And Fleet will come up with a cool visual effect for how to kill this person.’ Or, ‘Chhiu, our fight coordinator, will make an awesome fight.’ It is a frequent topic of conversation. We’re constantly trying to be inventive and create new ways to kill people!” —Stephan Fleet, VFX Supervisor A total of 1,600 visual effects shots were created for the eight episodes by ILM, Pixomondo, MPC Toronto, Spin VFX, DNEG, Untold Studios, Luma Pictures and Rocket Science VFX. Previs was a critical part of the process. 
“We have John Griffith, who owns a small company called CNCPT out of Texas, and he does wonderful Unreal Engine level previs,” Fleet remarks. “On set, we have a cartoon of what is going to be done, and you’ll be amazed, specifically for action and heavy visual effects stuff, how close those shots are to the previs when we finish.” Founding Director of Federal Bureau of Superhuman Affairs, Victoria Neuman, literally gets ripped in half by two tendrils coming out of Compound V-enhanced Billy Butcher, the leader of superhero resistance group The Boys. “The word that we like to use on this show is ‘grounded,’ and I like to say ‘grounded’ with an asterisk in this day and age because we’re grounded until we get to killing people in the craziest ways. In this case, having someone floating in the air and being ripped in half by two tendrils was all CG.” Multiple plates were shot to enable Simon Pegg to phase through the actor laying in a hospital bed. Testing can get rather elaborate. “For that end scene with Butcher’s tendrils, the room was two stories, and we were able to put the camera up high along with a bunch of blood cannons,” Fleet recalls. “When the body rips in half and explodes, there is a practical component. We rained down a bunch of real blood and guts right in front of Huey. It’s a known joke that we like to douse Jack Quaid with blood as much as possible! In this case, the special effects team led by Hudson Kenny needed to test it the day before, and I said, “I’ll be the guinea pig for the test.’ They covered the whole place with plastic like it was a Dexter kill room because you don’t want to destroy the set. I’m standing there in a white hazmat suit with goggles on, covered from head to toe in plastic and waiting as they’re tweaking all of these things. It sounds like World War II going on. They’re on walkie talkies to each other, and then all of a sudden, it’s ‘Five, four, three, two, one…’  And I get exploded with blood. 
I wanted to see what it was like, and it’s intense.” “On set, we have a cartoon of what is going to be done, and you’ll be amazed, specifically for action and heavy visual effects stuff, how close those shots are to the previs when we finish.” —Stephan Fleet, VFX Supervisor The Deep has a love affair with an octopus called Ambrosius, voiced by Tilda Swinton. “It’s implied bestiality!” Fleet laughs. “I would call it more of a romance. What was fun from my perspective is that I knew what the look was going to be, so then it’s about putting in the details and the animation. One of the instincts that you always have when you’re making a sea creature that talks to a humanyou tend to want to give it human gestures and eyebrows. Erik Kripkesaid, ‘No. We have to find things that an octopus could do that conveys the same emotion.’ That’s when ideas came in, such as putting a little The Deep toy inside the water tank. When Ambrosius is trying to have an intimate moment or connect with him, she can wrap a tentacle around that. My favorite experience doing Ambrosius was when The Deep is reading poetry to her on a bed. CG creatures touching humans is one of the more complicated things to do and make look real. Ambrosius’ tentacles reach for his arm, and it becomes an intimate moment. More than touching the skin, displacing the bedsheet as Ambrosius moved ended up becoming a lot of CG, and we had to go back and forth a few times to get that looking right; that turned out to be tricky.” A building is replaced by a massive crowd attending a rally being held by Homelander. In a twisted form of sexual foreplay, Sister Sage has The Deep perform a transorbital lobotomy on her. “Thank you, Amazon for selling lobotomy tools as novelty items!” Fleet chuckles. “We filmed it with a lobotomy tool on set. There is a lot of safety involved in doing something like that. 
Obviously, you don’t want to put any performer in any situation where they come close to putting anything real near their eye. We created this half lobotomy tool and did this complicated split screen with the lobotomy tool on a teeter totter. The Deep wasin one shot and Sister Sage reacted in the other shot. To marry the two ended up being a lot of CG work. Then there are these close-ups which are full CG. I always keep a dummy head that is painted gray that I use all of the time for reference. In macrophotography I filmed this lobotomy tool going right into the eye area. I did that because the tool is chrome, so it’s reflective and has ridges. It has an interesting reflective property. I was able to see how and what part of the human eye reflects onto the tool. A lot of that shot became about realistic reflections and lighting on the tool. Then heavy CG for displacing the eye and pushing the lobotomy tool into it. That was one of the more complicated sequences that we had to achieve.” In order to create an intimate moment between Ambrosius and The Deep, a toy version of the superhero was placed inside of the water tank that she could wrap a tentacle around. “The word that we like to use on this show is ‘grounded,’ and I like to say ‘grounded’ with an asterisk in this day and age because we’re grounded until we get to killing people in the craziest ways. In this case, having someone floating in the air and being ripped in half by two tendrils was all CG.” —Stephan Fleet, VFX Supervisor Sheep and chickens embark on a violent rampage courtesy of Compound V with the latter piercing the chest of a bodyguard belonging to Victoria Neuman. “Weirdly, that was one of our more traditional shots,’ Fleet states. “What is fun about that one is I asked for real chickens as reference. The chicken flying through his chest is real. It’s our chicken wrangler in green suit gently tossing a chicken. 
We blended two real plates together with some CG in the middle.” A connection was made with a sci-fi classic. “The sheep kill this bull, and we shot it is in this narrow corridor of fencing. When they run, I always equated it as the Trench Run in Star Wars and looked at the sheep as TIE fighters or X-wings coming at them.” The scene was one of the scarier moments for the visual effects team. Fleet explains, “When I read the script, I thought this could be the moment where we jump the shark. For the shots where the sheep are still and scream to the camera, Untold Studios did a bunch of R&D and came up with baboon teeth. I tried to keep anything real as much as possible, but, obviously, when sheep are flying, they have to be CG. I call it the Battlestar Galactica theory, where I like to shake the camera, overshoot shots and make it sloppy when they’re in the air so you can add motion blur. Comedy also helps sell visual effects.” The sheep injected with Compound V develop the ability to fly and were shot in an imperfect manner to help ground the scenes. Once injected with Compound V, Hugh Campbell Sr.develops the ability to phase through objects, including human beings. “We called it the Bro-nut because his name in the script is Wall Street Bro,” Fleet notes. “That was a complicated motion control shot, repeating the move over and over again. We had to shoot multiple plates of Simon Pegg and the guy in the bed. Special effects and prosthetics created a dummy guy with a hole in his chest with practical blood dripping down. It was meshing it together and getting the timing right in post. On top of that, there was the CG blood immediately around Simon Pegg.” The phasing effect had to avoid appearing as a dissolve. “I had this idea of doing high-frequency vibration on the X axis loosely based on how The Flash vibrates through walls. You want everything to have a loose motivation that then helps trigger the visuals. 
We tried not to overcomplicate that because, ultimately, you want something like that to be quick. If you spend too much time on phasing, it can look cheesy. In our case, it was a lot of false walls. Simon Pegg is running into a greenscreen hole which we plug in with a wall or coming out of one. I went off the actor’s action, and we added a light opacity mix with some X-axis shake.” Providing a different twist to the fights was the replacement of spurting blood with photoreal rubber duckies during a drug-induced hallucination. Homelanderbreaks a mirror which emphasizes his multiple personality disorder. “The original plan was that special effects was going to pre-break a mirror, and we were going to shoot Anthony Starr moving his head doing all of the performances in the different parts of the mirror,” Fleet reveals. “This was all based on a photo that my ex-brother-in-law sent me. He was walking down a street in Glendale, California, came across a broken mirror that someone had thrown out, and took a photo of himself where he had five heads in the mirror. We get there on the day, and I’m realizing that this is really complicated. Anthony has to do these five different performances, and we have to deal with infinite mirrors. At the last minute, I said, ‘We have to do this on a clean mirror.’ We did it on a clear mirror and gave Anthony different eyelines. The mirror break was all done in post, and we were able to cheat his head slightly and art-direct where the break crosses his chin. Editorial was able to do split screens for the timing of the dialogue.” “For the shots where the sheep are still and scream to the camera, Untold Studios did a bunch of R&D and came up with baboon teeth. I tried to keep anything real as much as possible, but, obviously, when sheep are flying, they have to be CG. I call it the Battlestar Galactica theory, where I like to shake the camera, overshoot shots and make it sloppy when they’re in the air so you can add motion blur. 
Comedy also helps sell visual effects.” —Stephan Fleet, VFX Supervisor Initially, the plan was to use a practical mirror, but creating a digital version proved to be the more effective solution. A different spin on the bloodbath occurs during a fight when a drugged Frenchiehallucinates as Kimiko Miyashirogoes on a killing spree. “We went back and forth with a lot of different concepts for what this hallucination would be,” Fleet remarks. “When we filmed it, we landed on Frenchie having a synesthesia moment where he’s seeing a lot of abstract colors flying in the air. We started getting into that in post and it wasn’t working. We went back to the rubber duckies, which goes back to the story of him in the bathtub. What’s in the bathtub? Rubber duckies, bubbles and water. There was a lot of physics and logic required to figure out how these rubber duckies could float out of someone’s neck. We decided on bubbles when Kimiko hits people’s heads. At one point, we had water when she got shot, but it wasn’t working, so we killed it. We probably did about 100 different versions. We got really detailed with our rubber duckie modeling because we didn’t want it to look cartoony. That took a long time.” Ambrosius, voiced by Tilda Swinton, gets a lot more screentime in Season 4. When Splintersplits in two was achieved heavily in CG. “Erik threw out the words ‘cellular mitosis’ early on as something he wanted to use,” Fleet states. “We shot Rob Benedict on a greenscreen doing all of the different performances for the clones that pop out. It was a crazy amount of CG work with Houdini and particle and skin effects. We previs’d the sequence so we had specific actions. One clone comes out to the right and the other pulls backwards.” What tends to go unnoticed by many is Splinter’s clones setting up for a press conference being held by Firecracker. 
“It’s funny how no one brings up the 22-hour motion control shot that we had to do with Splinter on the stage, which was the most complicated shot!” Fleet observes. “We have this sweeping long shot that brings you into the room and follows Splinter as he carries a container to the stage and hands it off to a clone, and then you reveal five more of them interweaving each other and interacting with all of these objects. It’s like a minute-long dance. First off, you have to choreograph it. We previs’d it, but then you need to get people to do it. We hired dancers and put different colored armbands on them. The camera is like another performer, and a metronome is going, which enables you to find a pace. That took about eight hours of rehearsal. Then Rob has to watch each one of their performances and mimic it to the beat. When he is handing off a box of cables, it’s to a double who is going to have to be erased and be him on the other side. They have to be almost perfect in their timing and lineup in order to take it over in visual effects and make it work.” #bouncing #rubber #duckies #flying #sheep
    WWW.VFXVOICE.COM
    BOUNCING FROM RUBBER DUCKIES AND FLYING SHEEP TO CLONES FOR THE BOYS SEASON 4
    By TREVOR HOGG Images courtesy of Prime Video. For those seeking an alternative to the MCU, Prime Video has two offerings of the live-action and animated variety that take the superhero genre into R-rated territory where the hands of the god-like figures get dirty, bloodied and severed. “The Boys is about the intersection of celebrity and politics using superheroes,” states Stephan Fleet, VFX Supervisor on The Boys. “Sometimes I see the news and I don’t even know we can write to catch up to it! But we try. Invincible is an intense look at an alternate DC Universe that has more grit to the superhero side of it all. On one hand, I was jealous watching Season 1 of Invincible because in animation you can do things that you can’t do in real life on a budget.” Season 4 does not tone down the blood, gore and body count. Fleet notes, “The writers almost have this dialogue with us. Sometimes, they’ll write in the script, ‘And Fleet will come up with a cool visual effect for how to kill this person.’ Or, ‘Chhiu, our fight coordinator, will make an awesome fight.’ It is a frequent topic of conversation. We’re constantly trying to be inventive and create new ways to kill people!” When Splinter (Rob Benedict) splits in two, the cloning effect was inspired by cellular mitosis. “The writers almost have this dialogue with us. Sometimes, they’ll write in the script, ‘And Fleet will come up with a cool visual effect for how to kill this person.’ Or, ‘Chhiu, our fight coordinator, will make an awesome fight.’ It is a frequent topic of conversation. We’re constantly trying to be inventive and create new ways to kill people!” —Stephan Fleet, VFX Supervisor A total of 1,600 visual effects shots were created for the eight episodes by ILM, Pixomondo, MPC Toronto, Spin VFX, DNEG, Untold Studios, Luma Pictures and Rocket Science VFX. Previs was a critical part of the process. 
“We have John Griffith [Previs Director], who owns a small company called CNCPT out of Texas, and he does wonderful Unreal Engine level previs,” Fleet remarks. “On set, we have a cartoon of what is going to be done, and you’ll be amazed, specifically for action and heavy visual effects stuff, how close those shots are to the previs when we finish.” Founding Director of Federal Bureau of Superhuman Affairs, Victoria Neuman, literally gets ripped in half by two tendrils coming out of Compound V-enhanced Billy Butcher, the leader of superhero resistance group The Boys. “The word that we like to use on this show is ‘grounded,’ and I like to say ‘grounded’ with an asterisk in this day and age because we’re grounded until we get to killing people in the craziest ways. In this case, having someone floating in the air and being ripped in half by two tendrils was all CG.” Multiple plates were shot to enable Simon Pegg to phase through the actor laying in a hospital bed. Testing can get rather elaborate. “For that end scene with Butcher’s tendrils, the room was two stories, and we were able to put the camera up high along with a bunch of blood cannons,” Fleet recalls. “When the body rips in half and explodes, there is a practical component. We rained down a bunch of real blood and guts right in front of Huey. It’s a known joke that we like to douse Jack Quaid with blood as much as possible! In this case, the special effects team led by Hudson Kenny needed to test it the day before, and I said, “I’ll be the guinea pig for the test.’ They covered the whole place with plastic like it was a Dexter kill room because you don’t want to destroy the set. I’m standing there in a white hazmat suit with goggles on, covered from head to toe in plastic and waiting as they’re tweaking all of these things. It sounds like World War II going on. They’re on walkie talkies to each other, and then all of a sudden, it’s ‘Five, four, three, two, one…’  And I get exploded with blood. 
    I wanted to see what it was like, and it’s intense.”

    “On set, we have a cartoon of what is going to be done, and you’ll be amazed, specifically for action and heavy visual effects stuff, how close those shots are to the previs when we finish.”
    —Stephan Fleet, VFX Supervisor

    The Deep has a love affair with an octopus called Ambrosius, voiced by Tilda Swinton. “It’s implied bestiality!” Fleet laughs. “I would call it more of a romance. What was fun from my perspective is that I knew what the look was going to be [from Season 3], so then it’s about putting in the details and the animation. One of the instincts that you always have when you’re making a sea creature that talks to a human [is] you tend to want to give it human gestures and eyebrows. Eric Kripke [Creator, Executive Producer, Showrunner, Director, Writer] said, ‘No. We have to find things that an octopus could do that convey the same emotion.’ That’s when ideas came in, such as putting a little The Deep toy inside the water tank. When Ambrosius is trying to have an intimate moment or connect with him, she can wrap a tentacle around that. My favorite experience doing Ambrosius was when The Deep is reading poetry to her on a bed. CG creatures touching humans is one of the more complicated things to do and make look real. Ambrosius’ tentacles reach for his arm, and it becomes an intimate moment. More than touching the skin, displacing the bedsheet as Ambrosius moved ended up becoming a lot of CG, and we had to go back and forth a few times to get that looking right; that turned out to be tricky.”

    A building is replaced by a massive crowd attending a rally being held by Homelander.

    In a twisted form of sexual foreplay, Sister Sage has The Deep perform a transorbital lobotomy on her. “Thank you, Amazon, for selling lobotomy tools as novelty items!” Fleet chuckles. “We filmed it with a lobotomy tool on set. There is a lot of safety involved in doing something like that.
    Obviously, you don’t want to put any performer in any situation where they come close to putting anything real near their eye. We created this half lobotomy tool and did this complicated split screen with the lobotomy tool on a teeter-totter. The Deep was [acting in a certain way] in one shot and Sister Sage reacted in the other shot. To marry the two ended up being a lot of CG work. Then there are these close-ups which are full CG. I always keep a dummy head that is painted gray that I use all of the time for reference. In macrophotography, I filmed this lobotomy tool going right into the eye area. I did that because the tool is chrome, so it’s reflective and has ridges. It has an interesting reflective property. I was able to see how and what part of the human eye reflects onto the tool. A lot of that shot became about realistic reflections and lighting on the tool. Then heavy CG for displacing the eye and pushing the lobotomy tool into it. That was one of the more complicated sequences that we had to achieve.”

    In order to create an intimate moment between Ambrosius and The Deep, a toy version of the superhero was placed inside of the water tank that she could wrap a tentacle around.

    “The word that we like to use on this show is ‘grounded,’ and I like to say ‘grounded’ with an asterisk in this day and age because we’re grounded until we get to killing people in the craziest ways. In this case, having someone floating in the air and being ripped in half by two tendrils was all CG.”
    —Stephan Fleet, VFX Supervisor

    Sheep and chickens embark on a violent rampage courtesy of Compound V, with the latter piercing the chest of a bodyguard belonging to Victoria Neuman. “Weirdly, that was one of our more traditional shots,” Fleet states. “What is fun about that one is I asked for real chickens as reference. The chicken flying through his chest is real. It’s our chicken wrangler in a green suit gently tossing a chicken.
    We blended two real plates together with some CG in the middle.” A connection was made with a sci-fi classic. “The sheep kill this bull, and we shot it in this narrow corridor of fencing. When they run, I always equated it to the Trench Run in Star Wars and looked at the sheep as TIE fighters or X-wings coming at them.” The scene was one of the scarier moments for the visual effects team. Fleet explains, “When I read the script, I thought this could be the moment where we jump the shark. For the shots where the sheep are still and scream to the camera, Untold Studios did a bunch of R&D and came up with baboon teeth. I tried to keep anything real as much as possible, but, obviously, when sheep are flying, they have to be CG. I call it the Battlestar Galactica theory, where I like to shake the camera, overshoot shots and make it sloppy when they’re in the air so you can add motion blur. Comedy also helps sell visual effects.”

    The sheep injected with Compound V develop the ability to fly and were shot in an imperfect manner to help ground the scenes.

    Once injected with Compound V, Hugh Campbell Sr. (Simon Pegg) develops the ability to phase through objects, including human beings. “We called it the Bro-nut because his name in the script is Wall Street Bro,” Fleet notes. “That was a complicated motion control shot, repeating the move over and over again. We had to shoot multiple plates of Simon Pegg and the guy in the bed. Special effects and prosthetics created a dummy guy with a hole in his chest with practical blood dripping down. It was meshing it together and getting the timing right in post. On top of that, there was the CG blood immediately around Simon Pegg.” The phasing effect had to avoid appearing as a dissolve. “I had this idea of doing high-frequency vibration on the X axis, loosely based on how The Flash vibrates through walls. You want everything to have a loose motivation that then helps trigger the visuals.
    We tried not to overcomplicate that because, ultimately, you want something like that to be quick. If you spend too much time on phasing, it can look cheesy. In our case, it was a lot of false walls. Simon Pegg is running into a greenscreen hole which we plug in with a wall or coming out of one. I went off the actor’s action, and we added a light opacity mix with some X-axis shake.”

    Providing a different twist to the fights was the replacement of spurting blood with photoreal rubber duckies during a drug-induced hallucination.

    Homelander (Antony Starr) breaks a mirror, which emphasizes his multiple personality disorder. “The original plan was that special effects was going to pre-break a mirror, and we were going to shoot Antony Starr moving his head doing all of the performances in the different parts of the mirror,” Fleet reveals. “This was all based on a photo that my ex-brother-in-law sent me. He was walking down a street in Glendale, California, came across a broken mirror that someone had thrown out, and took a photo of himself where he had five heads in the mirror. We get there on the day, and I’m realizing that this is really complicated. Antony has to do these five different performances, and we have to deal with infinite mirrors. At the last minute, I said, ‘We have to do this on a clean mirror.’ We did it on a clean mirror and gave Antony different eyelines. The mirror break was all done in post, and we were able to cheat his head slightly and art-direct where the break crosses his chin. Editorial was able to do split screens for the timing of the dialogue.”

    “For the shots where the sheep are still and scream to the camera, Untold Studios did a bunch of R&D and came up with baboon teeth. I tried to keep anything real as much as possible, but, obviously, when sheep are flying, they have to be CG.
    I call it the Battlestar Galactica theory, where I like to shake the camera, overshoot shots and make it sloppy when they’re in the air so you can add motion blur. Comedy also helps sell visual effects.”
    —Stephan Fleet, VFX Supervisor

    Initially, the plan was to use a practical mirror, but creating a digital version proved to be the more effective solution.

    A different spin on the bloodbath occurs during a fight when a drugged Frenchie (Tomer Capone) hallucinates as Kimiko Miyashiro (Karen Fukuhara) goes on a killing spree. “We went back and forth with a lot of different concepts for what this hallucination would be,” Fleet remarks. “When we filmed it, we landed on Frenchie having a synesthesia moment where he’s seeing a lot of abstract colors flying in the air. We started getting into that in post, and it wasn’t working. We went back to the rubber duckies, which goes back to the story of him in the bathtub. What’s in the bathtub? Rubber duckies, bubbles and water. There was a lot of physics and logic required to figure out how these rubber duckies could float out of someone’s neck. We decided on bubbles when Kimiko hits people’s heads. At one point, we had water when she got shot, but it wasn’t working, so we killed it. We probably did about 100 different versions. We got really detailed with our rubber duckie modeling because we didn’t want it to look cartoony. That took a long time.”

    Ambrosius, voiced by Tilda Swinton, gets a lot more screentime in Season 4.

    The moment when Splinter (Rob Benedict) splits in two was achieved heavily in CG. “Eric threw out the words ‘cellular mitosis’ early on as something he wanted to use,” Fleet states. “We shot Rob Benedict on a greenscreen doing all of the different performances for the clones that pop out. It was a crazy amount of CG work with Houdini and particle and skin effects. We previs’d the sequence so we had specific actions.
    One clone comes out to the right and the other pulls backwards.” What tends to go unnoticed by many is Splinter’s clones setting up for a press conference being held by Firecracker (Valorie Curry). “It’s funny how no one brings up the 22-hour motion control shot that we had to do with Splinter on the stage, which was the most complicated shot!” Fleet observes. “We have this sweeping long shot that brings you into the room and follows Splinter as he carries a container to the stage and hands it off to a clone, and then you reveal five more of them interweaving with each other and interacting with all of these objects. It’s like a minute-long dance. First off, you have to choreograph it. We previs’d it, but then you need to get people to do it. We hired dancers and put different colored armbands on them. The camera is like another performer, and a metronome is going, which enables you to find a pace. That took about eight hours of rehearsal. Then Rob has to watch each one of their performances and mimic it to the beat. When he is handing off a box of cables, it’s to a double who is going to have to be erased and be him on the other side. They have to be almost perfect in their timing and lineup in order to take it over in visual effects and make it work.”
  • HOW DISGUISE BUILT OUT THE VIRTUAL ENVIRONMENTS FOR A MINECRAFT MOVIE

    By TREVOR HOGG

    Images courtesy of Warner Bros. Pictures.

    Rather than building its world from photorealistic pixels, the video game created by Markus Persson took the boxier 3D voxel route, which became its signature aesthetic and sparked an international phenomenon that finally gets adapted into a feature with the release of A Minecraft Movie. Brought onboard to help filmmaker Jared Hess create the environments that the cast of Jason Momoa, Jack Black, Sebastian Hansen, Emma Myers and Danielle Brooks find themselves inhabiting was Disguise, under the direction of Production VFX Supervisor Dan Lemmon.

    “As the Senior Unreal Artist within the Virtual Art Department on Minecraft, I experienced the full creative workflow. What stood out most was how deeply the VAD was embedded across every stage of production. We weren’t working in isolation. From the production designer and director to the VFX supervisor and DP, the VAD became a hub for collaboration.”
    —Talia Finlayson, Creative Technologist, Disguise

    Interior and exterior environments had to be created, such as the shop owned by Steve.

    “Prior to working on A Minecraft Movie, I held more technical roles, like serving as the Virtual Production LED Volume Operator on a project for Apple TV+ and Paramount Pictures,” notes Talia Finlayson, Creative Technologist for Disguise. “But as the Senior Unreal Artist within the Virtual Art Department on Minecraft, I experienced the full creative workflow. What stood out most was how deeply the VAD was embedded across every stage of production. We weren’t working in isolation. From the production designer and director to the VFX supervisor and DP, the VAD became a hub for collaboration.” The project provided new opportunities. “I’ve always loved the physicality of working with an LED volume, both for the immersion it provides and the way that seeing the environment helps shape an actor’s performance,” notes Laura Bell, Creative Technologist for Disguise. “But for A Minecraft Movie, we used Simulcam instead, and it was an incredible experience to live-composite an entire Minecraft world in real-time, especially with nothing on set but blue curtains.”

    Set designs originally created by the art department in Rhinoceros 3D were transformed into fully navigable 3D environments within Unreal Engine. “These scenes were far more than visualizations,” Finlayson remarks. “They were interactive tools used throughout the production pipeline. We would ingest 3D models and concept art, clean and optimize geometry using tools like Blender, Cinema 4D or Maya, then build out the world in Unreal Engine. This included applying materials, lighting and extending environments. These Unreal scenes we created were vital tools across the production and were used for a variety of purposes such as enabling the director to explore shot compositions, block scenes and experiment with camera movement in a virtual space, as well as passing along Unreal Engine scenes to the visual effects vendors so they could align their digital environments and set extensions with the approved production layouts.”

    A virtual exploration of Steve’s shop in Midport Village.

    Certain elements have to be kept in mind when constructing virtual environments. “When building virtual environments, you need to consider what can actually be built, how actors and cameras will move through the space, and what’s safe and practical on set,” Bell observes. “Outside the areas where strict accuracy is required, you want the environments to blend naturally with the original designs from the art department and support the story, creating a space that feels right for the scene, guides the audience’s eye and sets the right tone. Things like composition, lighting and small environmental details can be really fun to work on, but also serve as beautiful additions to help enrich a story.”

    “I’ve always loved the physicality of working with an LED volume, both for the immersion it provides and the way that seeing the environment helps shape an actor’s performance. But for A Minecraft Movie, we used Simulcam instead, and it was an incredible experience to live-composite an entire Minecraft world in real-time, especially with nothing on set but blue curtains.”
    —Laura Bell, Creative Technologist, Disguise

    Among the buildings that had to be created for Midport Village was Steve’s Lava Chicken Shack.

    Concept art was provided that served as visual touchstones. “We received concept art provided by the amazing team of concept artists,” Finlayson states. “Not only did they send us 2D artwork, but they often shared the 3D models they used to create those visuals. These models were incredibly helpful as starting points when building out the virtual environments in Unreal Engine; they gave us a clear sense of composition and design intent. Storyboards were also a key part of the process and were constantly being updated as the project evolved. Having access to the latest versions allowed us to tailor the virtual environments to match camera angles, story beats and staging. Sometimes we would also help the storyboard artists by sending through images of the Unreal Engine worlds to help them geographically position themselves in the worlds and aid in their storyboarding.” At times, the video game assets came in handy. “Exteriors often involved large-scale landscapes and stylized architectural elements, which had to feel true to the Minecraft world,” Finlayson explains. “In some cases, we brought in geometry from the game itself to help quickly block out areas. For example, we did this for the Elytra Flight Chase sequence, which takes place through a large canyon.”

    Flexibility was critical. “A key technical challenge we faced was ensuring that the Unreal levels were built in a way that allowed for fast and flexible iteration,” Finlayson remarks. “Since our environments were constantly being reviewed by the director, production designer, DP and VFX supervisor, we needed to be able to respond quickly to feedback, sometimes live during a review session. To support this, we had to keep our scenes modular and well-organized; that meant breaking environments down into manageable components and maintaining clean naming conventions. By setting up the levels this way, we could make layout changes, swap assets or adjust lighting on the fly without breaking the scene or slowing down the process.” Production schedules influence the workflows, pipelines and techniques. “No two projects will ever feel exactly the same,” Bell notes. “For example, Pat Younis adapted his typical VR setup to allow scene reviews using a PS5 controller, which made it much more comfortable and accessible for the director. On a more technical side, because everything was cubes and voxels, my Blender workflow ended up being way heavier on the re-mesh modifier than usual, definitely not something I’ll run into again anytime soon!”

    A virtual study and final still of the cast members standing outside of the Lava Chicken Shack.

    “We received concept art provided by the amazing team of concept artists. Not only did they send us 2D artwork, but they often shared the 3D models they used to create those visuals. These models were incredibly helpful as starting points when building out the virtual environments in Unreal Engine; they gave us a clear sense of composition and design intent. Storyboards were also a key part of the process and were constantly being updated as the project evolved. Having access to the latest versions allowed us to tailor the virtual environments to match camera angles, story beats and staging.”
    —Talia Finlayson, Creative Technologist, Disguise

    The design and composition of virtual environments tended to remain consistent throughout principal photography. “The only major design change I can recall was the removal of a second story from a building in Midport Village to allow the camera crane to get a clear shot of the chicken perched above Steve’s lava chicken shack,” Finlayson remarks. “I would agree that Midport Village likely went through the most iterations,” Bell responds. “The archway, in particular, became a visual anchor across different levels. We often placed it off in the distance to help orient both ourselves and the audience and show how far the characters had traveled. I remember rebuilding the stairs leading up to the rampart five or six times, using different configurations based on the physically constructed stairs. This was because there were storyboarded sequences of the film’s characters, Henry, Steve and Garrett, being chased by piglins, and the action needed to match what could be achieved practically on set.”

    Virtually conceptualizing the layout of Midport Village.

    Complex virtual environments were constructed for the final battle and the various forest scenes throughout the movie. “What made these particularly challenging was the way physical set pieces were repurposed and repositioned to serve multiple scenes and locations within the story,” Finlayson reveals. “The same built elements had to appear in different parts of the world, so we had to carefully adjust the virtual environments to accommodate those different positions.” Bell is in agreement with her colleague. “The forest scenes were some of the more complex environments to manage. It could get tricky, particularly when the filming schedule shifted. There was one day on set where the order of shots changed unexpectedly, and because the physical sets looked so similar, I initially loaded a different perspective than planned. Fortunately, thanks to our workflow, Lindsay George and I were able to quickly open the recorded sequence in Unreal Engine and swap out the correct virtual environment for the live composite without any disruption to the shoot.”

    An example of the virtual and final version of the Woodland Mansion.

    “Midport Village likely went through the most iterations. The archway, in particular, became a visual anchor across different levels. We often placed it off in the distance to help orient both ourselves and the audience and show how far the characters had traveled.”
    —Laura Bell, Creative Technologist, Disguise

    Extensive detail was given to the center of the sets where the main action unfolds. “For these areas, we received prop layouts from the prop department to ensure accurate placement and alignment with the physical builds,” Finlayson explains. “These central environments were used heavily for storyboarding, blocking and department reviews, so precision was essential. As we moved further out from the practical set, the environments became more about blocking and spatial context rather than fine detail. We worked closely with Production Designer Grant Major to get approval on these extended environments, making sure they aligned with the overall visual direction. We also used creatures and crowd stand-ins provided by the visual effects team. These gave a great sense of scale and placement during early planning stages and allowed other departments to better understand how these elements would be integrated into the scenes.”

    Cast members Sebastian Hansen, Danielle Brooks and Emma Myers stand in front of the Earth Portal Plateau environment.

    Doing a virtual scale study of the Mountainside.

    Practical requirements like camera moves, stunt choreography and crane setups had an impact on the creation of virtual environments. “Sometimes we would adjust layouts slightly to open up areas for tracking shots or rework spaces to accommodate key action beats, all while keeping the environment feeling cohesive and true to the Minecraft world,” Bell states. “Simulcam bridged the physical and virtual worlds on set, overlaying Unreal Engine environments onto live-action scenes in real-time, giving the director, DP and other department heads a fully-realized preview of shots and enabling precise, informed decisions during production. It also recorded critical production data like camera movement paths, which was handed over to the post-production team to give them the exact tracks they needed, streamlining the visual effects pipeline.”

    Piglins cause mayhem during the Wingsuit Chase.

    Virtual versions of the exterior and interior of the Safe House located in the Enchanted Woods.

    “One of the biggest challenges for me was managing constant iteration while keeping our environments clean, organized and easy to update,” Finlayson notes. “Because the virtual sets were reviewed regularly by the director and other heads of departments, feedback was often implemented live in the room. This meant the environments had to be flexible. But overall, this was an amazing project to work on, and I am so grateful for the incredible VAD team I was a part of – Heide Nichols, Pat Younis, Jake Tuck and Laura. Everyone on this team worked so collaboratively, seamlessly and in such a supportive way that I never felt like I was out of my depth.” There was another challenge that is more to do with familiarity. “Having a VAD on a film is still a relatively new process in production,” Bell states. “There were moments where other departments were still learning what we did and how to best work with us. That said, the response was overwhelmingly positive. I remember being on set at the Simulcam station and seeing how excited people were to look at the virtual environments as they walked by, often stopping for a chat and a virtual tour. Instead of seeing just a huge blue curtain, they were stoked to see something Minecraft and could get a better sense of what they were actually shooting.”
Practical requirements like camera moves, stunt choreography and crane setups had an impact on the creation of virtual environments. “Sometimes we would adjust layouts slightly to open up areas for tracking shots or rework spaces to accommodate key action beats, all while keeping the environment feeling cohesive and true to the Minecraft world,” Bell states. “Simulcam bridged the physical and virtual worlds on set, overlaying Unreal Engine environments onto live-action scenes in real-time, giving the director, DP and other department heads a fully-realized preview of shots and enabling precise, informed decisions during production. It also recorded critical production data like camera movement paths, which was handed over to the post-production team to give them the exact tracks they needed, streamlining the visual effects pipeline.” Piglots cause mayhem during the Wingsuit Chase. Virtual versions of the exterior and interior of the Safe House located in the Enchanted Woods. “One of the biggest challenges for me was managing constant iteration while keeping our environments clean, organized and easy to update,” Finlayson notes. “Because the virtual sets were reviewed regularly by the director and other heads of departments, feedback was often implemented live in the room. This meant the environments had to be flexible. But overall, this was an amazing project to work on, and I am so grateful for the incredible VAD team I was a part of – Heide Nichols, Pat Younis, Jake Tuckand Laura. Everyone on this team worked so collaboratively, seamlessly and in such a supportive way that I never felt like I was out of my depth.” There was another challenge that is more to do with familiarity. “Having a VAD on a film is still a relatively new process in production,” Bell states. “There were moments where other departments were still learning what we did and how to best work with us. That said, the response was overwhelmingly positive. 
I remember being on set at the Simulcam station and seeing how excited people were to look at the virtual environments as they walked by, often stopping for a chat and a virtual tour. Instead of seeing just a huge blue curtain, they were stoked to see something Minecraft and could get a better sense of what they were actually shooting.” #how #disguise #built #out #virtual
    WWW.VFXVOICE.COM
    HOW DISGUISE BUILT OUT THE VIRTUAL ENVIRONMENTS FOR A MINECRAFT MOVIE
    By TREVOR HOGG. Images courtesy of Warner Bros. Pictures. Rather than constructing its world from photorealistic pixels, the video game created by Markus Persson took the boxier 3D voxel route, which became its signature aesthetic and sparked an international phenomenon, one that finally gets adapted into a feature with the release of A Minecraft Movie. Brought onboard to help filmmaker Jared Hess create the environments that the cast of Jason Momoa, Jack Black, Sebastian Hansen, Emma Myers and Danielle Brooks find themselves inhabiting was Disguise, working under the direction of Production VFX Supervisor Dan Lemmon. “[A]s the Senior Unreal Artist within the Virtual Art Department (VAD) on Minecraft, I experienced the full creative workflow. What stood out most was how deeply the VAD was embedded across every stage of production. We weren’t working in isolation. From the production designer and director to the VFX supervisor and DP, the VAD became a hub for collaboration.” —Talia Finlayson, Creative Technologist, Disguise Interior and exterior environments had to be created, such as the shop owned by Steve (Jack Black). “Prior to working on A Minecraft Movie, I held more technical roles, like serving as the Virtual Production LED Volume Operator on a project for Apple TV+ and Paramount Pictures,” notes Talia Finlayson, Creative Technologist for Disguise. “But as the Senior Unreal Artist within the Virtual Art Department (VAD) on Minecraft, I experienced the full creative workflow. What stood out most was how deeply the VAD was embedded across every stage of production. We weren’t working in isolation. From the production designer and director to the VFX supervisor and DP, the VAD became a hub for collaboration.” The project provided new opportunities. 
“I’ve always loved the physicality of working with an LED volume, both for the immersion it provides and the way that seeing the environment helps shape an actor’s performance,” notes Laura Bell, Creative Technologist for Disguise. “But for A Minecraft Movie, we used Simulcam instead, and it was an incredible experience to live-composite an entire Minecraft world in real-time, especially with nothing on set but blue curtains.” Set designs originally created by the art department in Rhinoceros 3D were transformed into fully navigable 3D environments within Unreal Engine. “These scenes were far more than visualizations,” Finlayson remarks. “They were interactive tools used throughout the production pipeline. We would ingest 3D models and concept art, clean and optimize geometry using tools like Blender, Cinema 4D or Maya, then build out the world in Unreal Engine. This included applying materials, lighting and extending environments. These Unreal scenes we created were vital tools across the production and were used for a variety of purposes such as enabling the director to explore shot compositions, block scenes and experiment with camera movement in a virtual space, as well as passing along Unreal Engine scenes to the visual effects vendors so they could align their digital environments and set extensions with the approved production layouts.” A virtual exploration of Steve’s shop in Midport Village. Certain elements have to be kept in mind when constructing virtual environments. “When building virtual environments, you need to consider what can actually be built, how actors and cameras will move through the space, and what’s safe and practical on set,” Bell observes. “Outside the areas where strict accuracy is required, you want the environments to blend naturally with the original designs from the art department and support the story, creating a space that feels right for the scene, guides the audience’s eye and sets the right tone. 
Things like composition, lighting and small environmental details can be really fun to work on, but also serve as beautiful additions to help enrich a story.” “I’ve always loved the physicality of working with an LED volume, both for the immersion it provides and the way that seeing the environment helps shape an actor’s performance. But for A Minecraft Movie, we used Simulcam instead, and it was an incredible experience to live-composite an entire Minecraft world in real-time, especially with nothing on set but blue curtains.” —Laura Bell, Creative Technologist, Disguise Among the buildings that had to be created for Midport Village was Steve’s (Jack Black) Lava Chicken Shack. Concept art was provided that served as visual touchstones. “We received concept art provided by the amazing team of concept artists,” Finlayson states. “Not only did they send us 2D artwork, but they often shared the 3D models they used to create those visuals. These models were incredibly helpful as starting points when building out the virtual environments in Unreal Engine; they gave us a clear sense of composition and design intent. Storyboards were also a key part of the process and were constantly being updated as the project evolved. Having access to the latest versions allowed us to tailor the virtual environments to match camera angles, story beats and staging. Sometimes we would also help the storyboard artists by sending through images of the Unreal Engine worlds to help them geographically position themselves in the worlds and aid in their storyboarding.” At times, the video game assets came in handy. “Exteriors often involved large-scale landscapes and stylized architectural elements, which had to feel true to the Minecraft world,” Finlayson explains. “In some cases, we brought in geometry from the game itself to help quickly block out areas. For example, we did this for the Elytra Flight Chase sequence, which takes place through a large canyon.” Flexibility was critical. 
“A key technical challenge we faced was ensuring that the Unreal levels were built in a way that allowed for fast and flexible iteration,” Finlayson remarks. “Since our environments were constantly being reviewed by the director, production designer, DP and VFX supervisor, we needed to be able to respond quickly to feedback, sometimes live during a review session. To support this, we had to keep our scenes modular and well-organized; that meant breaking environments down into manageable components and maintaining clean naming conventions. By setting up the levels this way, we could make layout changes, swap assets or adjust lighting on the fly without breaking the scene or slowing down the process.” Production schedules influence the workflows, pipelines and techniques. “No two projects will ever feel exactly the same,” Bell notes. “For example, Pat Younis [VAD Art Director] adapted his typical VR setup to allow scene reviews using a PS5 controller, which made it much more comfortable and accessible for the director. On a more technical side, because everything was cubes and voxels, my Blender workflow ended up being way heavier on the re-mesh modifier than usual, definitely not something I’ll run into again anytime soon!” A virtual study and final still of the cast members standing outside of the Lava Chicken Shack. “We received concept art provided by the amazing team of concept artists. Not only did they send us 2D artwork, but they often shared the 3D models they used to create those visuals. These models were incredibly helpful as starting points when building out the virtual environments in Unreal Engine; they gave us a clear sense of composition and design intent. Storyboards were also a key part of the process and were constantly being updated as the project evolved. 
Having access to the latest versions allowed us to tailor the virtual environments to match camera angles, story beats and staging.” —Talia Finlayson, Creative Technologist, Disguise The design and composition of virtual environments tended to remain consistent throughout principal photography. “The only major design change I can recall was the removal of a second story from a building in Midport Village to allow the camera crane to get a clear shot of the chicken perched above Steve’s lava chicken shack,” Finlayson remarks. “I would agree that Midport Village likely went through the most iterations,” Bell responds. “The archway, in particular, became a visual anchor across different levels. We often placed it off in the distance to help orient both ourselves and the audience and show how far the characters had traveled. I remember rebuilding the stairs leading up to the rampart five or six times, using different configurations based on the physically constructed stairs. This was because there were storyboarded sequences of the film’s characters, Henry, Steve and Garrett, being chased by piglins, and the action needed to match what could be achieved practically on set.” Virtually conceptualizing the layout of Midport Village. Complex virtual environments were constructed for the final battle and the various forest scenes throughout the movie. “What made these particularly challenging was the way physical set pieces were repurposed and repositioned to serve multiple scenes and locations within the story,” Finlayson reveals. “The same built elements had to appear in different parts of the world, so we had to carefully adjust the virtual environments to accommodate those different positions.” Bell is in agreement with her colleague. “The forest scenes were some of the more complex environments to manage. It could get tricky, particularly when the filming schedule shifted. 
There was one day on set where the order of shots changed unexpectedly, and because the physical sets looked so similar, I initially loaded a different perspective than planned. Fortunately, thanks to our workflow, Lindsay George [VP Tech] and I were able to quickly open the recorded sequence in Unreal Engine and swap out the correct virtual environment for the live composite without any disruption to the shoot.” An example of the virtual and final version of the Woodland Mansion. “Midport Village likely went through the most iterations. The archway, in particular, became a visual anchor across different levels. We often placed it off in the distance to help orient both ourselves and the audience and show how far the characters had traveled.” —Laura Bell, Creative Technologist, Disguise Extensive detail was given to the center of the sets where the main action unfolds. “For these areas, we received prop layouts from the prop department to ensure accurate placement and alignment with the physical builds,” Finlayson explains. “These central environments were used heavily for storyboarding, blocking and department reviews, so precision was essential. As we moved further out from the practical set, the environments became more about blocking and spatial context rather than fine detail. We worked closely with Production Designer Grant Major to get approval on these extended environments, making sure they aligned with the overall visual direction. We also used creatures and crowd stand-ins provided by the visual effects team. These gave a great sense of scale and placement during early planning stages and allowed other departments to better understand how these elements would be integrated into the scenes.” Cast members Sebastian Hansen, Danielle Brooks and Emma Myers stand in front of the Earth Portal Plateau environment. Doing a virtual scale study of the Mountainside. 
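Bell's note about leaning unusually hard on Blender's re-mesh modifier reflects the core geometric operation behind the film's look: turning arbitrary shapes into axis-aligned blocks. As a rough, hypothetical illustration in plain Python (not the production Blender/Unreal pipeline), the sketch below quantizes points onto a voxel grid and counts only the cube faces that would actually need polygons:

```python
def voxelize(points, voxel_size=1.0):
    """Quantize 3D points onto a voxel grid; return the set of occupied cells."""
    return {tuple(int(coord // voxel_size) for coord in p) for p in points}

def exposed_faces(cells):
    """Count cube faces not shared with a neighboring occupied cell.

    Only exposed faces need geometry, which is what keeps blocky
    remeshing cheap even for large voxel worlds."""
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    count = 0
    for cell in cells:
        for off in offsets:
            neighbor = tuple(c + o for c, o in zip(cell, off))
            if neighbor not in cells:
                count += 1
    return count

# Two nearby points collapse into a single voxel; the third lands two cells away.
cells = voxelize([(0.2, 0.3, 0.9), (0.8, 0.1, 0.4), (2.5, 0.0, 0.0)])
print(sorted(cells))         # [(0, 0, 0), (2, 0, 0)]
print(exposed_faces(cells))  # 12: two isolated cubes, 6 exposed faces each
```

Hiding interior faces this way is also why two adjacent cubes expose 10 faces rather than 12, a saving that compounds quickly across a city-sized block world.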
Practical requirements like camera moves, stunt choreography and crane setups had an impact on the creation of virtual environments. “Sometimes we would adjust layouts slightly to open up areas for tracking shots or rework spaces to accommodate key action beats, all while keeping the environment feeling cohesive and true to the Minecraft world,” Bell states. “Simulcam bridged the physical and virtual worlds on set, overlaying Unreal Engine environments onto live-action scenes in real-time, giving the director, DP and other department heads a fully-realized preview of shots and enabling precise, informed decisions during production. It also recorded critical production data like camera movement paths, which was handed over to the post-production team to give them the exact tracks they needed, streamlining the visual effects pipeline.” Piglots cause mayhem during the Wingsuit Chase. Virtual versions of the exterior and interior of the Safe House located in the Enchanted Woods. “One of the biggest challenges for me was managing constant iteration while keeping our environments clean, organized and easy to update,” Finlayson notes. “Because the virtual sets were reviewed regularly by the director and other heads of departments, feedback was often implemented live in the room. This meant the environments had to be flexible. But overall, this was an amazing project to work on, and I am so grateful for the incredible VAD team I was a part of – Heide Nichols [VAD Supervisor], Pat Younis, Jake Tuck [Unreal Artist] and Laura. Everyone on this team worked so collaboratively, seamlessly and in such a supportive way that I never felt like I was out of my depth.” There was another challenge that is more to do with familiarity. “Having a VAD on a film is still a relatively new process in production,” Bell states. “There were moments where other departments were still learning what we did and how to best work with us. That said, the response was overwhelmingly positive. 
I remember being on set at the Simulcam station and seeing how excited people were to look at the virtual environments as they walked by, often stopping for a chat and a virtual tour. Instead of seeing just a huge blue curtain, they were stoked to see something Minecraft and could get a better sense of what they were actually shooting.”
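Bell describes Simulcam recording camera movement paths that were handed to the post team so vendors had exact tracks. The article does not specify the handoff format, so the following is a hypothetical Python sketch of what a per-frame camera-track package could look like; the `CameraSample` fields and the shot name `midport_village_0040` are invented for illustration:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class CameraSample:
    frame: int              # frame number on the shooting timeline
    position: tuple         # world-space (x, y, z), meters
    rotation: tuple         # Euler angles (pitch, yaw, roll), degrees
    focal_length_mm: float  # lens focal length at this sample

def export_camera_track(samples, shot_name):
    """Bundle per-frame camera samples into a JSON document for vendor handoff."""
    payload = {
        "shot": shot_name,
        "sample_count": len(samples),
        "samples": [asdict(s) for s in samples],
    }
    return json.dumps(payload, indent=2)

track = [
    CameraSample(1001, (0.0, 1.7, 0.0), (0.0, 90.0, 0.0), 35.0),
    CameraSample(1002, (0.1, 1.7, 0.0), (0.0, 91.5, 0.0), 35.0),
]
print(export_camera_track(track, "midport_village_0040"))
```

Capturing position, rotation and lens data per frame is what lets a vendor reproduce the live-action camera inside their own scene without re-solving the track from scratch.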
  • Hey, wonderful creators!

    Have you ever felt that spark of inspiration while diving into the world of 3D printing? Well, buckle up, because the future has just gotten even brighter! Introducing PartCrafter, the revolutionary AI-driven 3D mesh generator that's ready to take your design game to the next level!

    In a world where creativity knows no bounds, it's fascinating to see how artificial intelligence is revolutionizing the realm of 3D printing, especially in the design phase. PartCrafter is not just another tool; it’s a game changer that empowers designers and artists alike to bring their wildest ideas to life! Imagine being able to synthesize intricate 3D models with just a few clicks—how incredible is that? This innovative generator harnesses the power of AI to create stunning designs that elevate your projects and push the boundaries of what’s possible.

    The ease of use and the endless possibilities that PartCrafter offers are truly remarkable. Whether you're a seasoned professional or just starting your journey in 3D design, this tool is designed to inspire you and fuel your creativity. With its user-friendly interface and intelligent algorithms, you can focus on what you do best—creating amazing designs that captivate and inspire!

    Remember, every great invention starts with a spark of imagination! So, don't hold back! Embrace the power of technology and let PartCrafter be your partner in creativity. Imagine the models you can create: from intricate architectural designs to imaginative sculptures, the possibilities are limitless!

    And guess what? The best part is that you’re not alone on this journey! Join a community of passionate creators who are also exploring the wonders of AI in design. Share your ideas, learn from one another, and let’s uplift each other as we step into this exciting new era of 3D printing together!

    So, what are you waiting for? Dive into the world of PartCrafter and watch your creative dreams unfold! The future is now, and it’s time to create something incredible! Let’s embrace innovation and let our imaginations soar!

    #3DPrinting #ArtificialIntelligence #PartCrafter #CreativeDesign #Innovation
    PartCrafter, the AI-powered 3D mesh generator
    It seems artificial intelligence has once again proven its effectiveness in the 3D printing sector, specifically in the design phase. A team has used AI to develop a 3D model generator capable of synthesizing…
  • In a world where the line between reality and digital wizardry is blurrier than ever, the recent revelations from the VFX wizards of "Emilia Pérez" are nothing short of a masterclass in illusion. Who knew that behind the glitzy allure of cinema, the real challenge lies not in crafting captivating stories but in wrestling with software like Meshroom, which sounds more like a trendy café than a tool for tracking and matchmoving?

    Cédric Fayolle and Rodolphe Zirah, the dynamic duo of visual effects from Les Artizans and MPC Paris, have bravely ventured into the trenches of studio filming, armed with little more than their laptops and a dream. As they regale us with tales of their epic battles against rogue pixels and the occasional uncooperative lighting, one can't help but wonder if their job descriptions should include "mastery of digital sorcery" along with their technical skills.

    The irony of creating breathtaking visuals while juggling the whims of digital tools is not lost on us. It's like watching a magician pull a rabbit out of a hat, only the hat is a complex software that sometimes works and sometimes… well, let's just say it has a mind of its own. Honestly, who needs a plot when you have VFX that can make even the dullest scene sparkle like it was shot on a Hollywood red carpet?

    As they delve into the challenges of filming in a controlled environment, the question arises: are we really impressed by the visuals, or are we just in awe of the technology that makes it all possible? Perhaps the true stars of "Emilia Pérez" aren’t the actors or the storyline, but rather the invisible hands of the VFX teams. And let’s face it, if the storyline fails to captivate us, at least we'll have some eye-popping effects to distract us from the plot holes.

    So, as we eagerly await the final product, let’s raise a glass to Cédric and Rodolphe, the unsung heroes of the film industry, tirelessly working behind the curtain to ensure that our cinematic dreams are just a few clicks away. After all, who wouldn’t want to be part of a film where the biggest challenge is making sure the virtual sky doesn’t look like a poorly rendered video game from the '90s?

    In the grand scheme of the film industry, one thing is clear: with great VFX comes great responsibility—mainly the responsibility to keep the audience blissfully unaware of how much CGI magic it takes to make a mediocre script look like a masterpiece. Cheers to that!

    #EmiliaPérez #VFX #FilmMagic #DigitalSorcery #Cinema
    Emilia Pérez: Les Artizans and MPC reveal the secrets of the VFX!
    We take a video look back at the visual effects of Jacques Audiard's film Emilia Pérez, with Cédric Fayolle (Overall VFX Supervisor, Les Artizans) and Rodolphe Zirah (VFX Supervisor, MPC Paris). The duo revisits the challenges of a
  • Q&A: How anacondas, chickens, and locals may be able to coexist in the Amazon

    A coiled giant anaconda. They are the largest snake species in Brazil and play a major role in legends including the ‘Boiuna’ and the ‘Cobra Grande.’ CREDIT: Beatriz Cosendey.


    South America’s lush Amazon region is a biodiversity hotspot, which means that every living thing must find a way to coexist, even some of the most feared snakes on the planet: anacondas. In a paper published June 16 in the journal Frontiers in Amphibian and Reptile Science, conservation biologists Beatriz Cosendey and Juarez Carlos Brito Pezzuti from the Federal University of Pará’s Center for Amazonian Studies in Brazil analyze the key points behind the interactions between humans and the local anaconda populations.
    Ahead of the paper’s publication, the team at Frontiers conducted this wide-ranging Q&A with Cosendey. It has not been altered.
    Frontiers: What inspired you to become a researcher?
    Beatriz Cosendey: As a child, I was fascinated by reports and documentaries about field research and often wondered what it took to be there and what kind of knowledge was being produced. Later, as an ecologist, I felt the need for approaches that better connected scientific research with real-world contexts. I became especially interested in perspectives that viewed humans not as separate from nature, but as part of ecological systems. This led me to explore integrative methods that incorporate local and traditional knowledge, aiming to make research more relevant and accessible to the communities involved.
    F: Can you tell us about the research you’re currently working on?
    BC: My research focuses on ethnobiology, an interdisciplinary field intersecting ecology, conservation, and traditional knowledge. We investigate not only the biodiversity of an area but also the relationship local communities have with surrounding species, providing a better understanding of local dynamics and areas needing special attention for conservation. After all, no one knows a place better than those who have lived there for generations. This deep familiarity allows for early detection of changes or environmental shifts. Additionally, developing a collaborative project with residents generates greater engagement, as they recognize themselves as active contributors; and collective participation is essential for effective conservation.
    A local boating on the Amazon River. CREDIT: Beatriz Cosendey.
    F: Could you tell us about one of the legends surrounding anacondas?
    BC: One of the greatest myths is about the Great Snake—a huge snake that is said to inhabit the Amazon River and sleep beneath the town. According to the dwellers, the Great Snake is an anaconda that has grown too large; its movements can shake the river’s waters, and its eyes look like fire in the darkness of night. People say anacondas can grow so big that they can swallow large animals—including humans or cattle—without difficulty.
    F: What could be the reasons why the traditional role of anacondas as a spiritual and mythological entity has changed? Do you think the fact that fewer anacondas have been seen in recent years contributes to their diminished importance as a mythological entity?
    BC: Not exactly. I believe the two are related, but not in a direct way. The mythology still exists, but among Aritapera dwellers, there’s a more practical, everyday concern—mainly the fear of losing their chickens. As a result, anacondas have come to be seen as stealthy thieves. These traits are mostly associated with smaller individuals (up to around 2–2.5 meters), while the larger ones—which may still carry the symbolic weight of the ‘Great Snake’—tend to retreat to more sheltered areas; because of the presence of houses, motorized boats, and general noise, they are now seen much less frequently.
    A giant anaconda being measured. CREDIT: Pedro Calazans.
    F: Can you share some of the quotes you’ve collected in interviews that show the attitude of community members towards anacondas? How do chickens come into play?
    BC: When talking about anacondas, one thing always comes up: chickens. “Chicken is her [the anaconda’s] favorite dish. If one clucks, she comes,” said one dweller. This kind of remark helps explain why the conflict is often framed in economic terms. During the interviews and conversations with local dwellers, many emphasized the financial impact of losing their animals: “The biggest loss is that they keep taking chicks and chickens…” or “You raise the chicken—you can’t just let it be eaten for free, right?”
    For them, it’s a loss of investment, especially since corn, which is used as chicken feed, is expensive. As one person put it: “We spend time feeding and raising the birds, and then the snake comes and takes them.” One dweller shared that, in an attempt to prevent another loss, he killed the anaconda and removed the last chicken it had swallowed from its belly—“it was still fresh,” he said—and used it for his meal, cooking the chicken for lunch so it wouldn’t go to waste.
    One of the Amazonian communities where the researchers conducted their research. CREDIT: Beatriz Cosendey.
    Some interviewees reported that they had to rebuild their chicken coops and pigsties because too many anacondas were getting in. Participants would point out where the anaconda had entered and explained that they came in through gaps or cracks but couldn’t get out afterwards because they ‘tufavam’ — a local term referring to the snake’s body swelling after ingesting prey.
    We saw chicken coops made with wire mesh and with nylon, some that worked and some that didn’t. Guided by the locals’ insights, we concluded that the best solution to compensate for the gaps between the wooden slats is to line the coop with a fine nylon mesh (to block smaller animals) and, on the outside, a layer of wire mesh, which protects the inner mesh and prevents the entry of larger animals.
    F: Are there any common misconceptions about this area of research? How would you address them?
    BC: Yes, very much. Although ethnobiology is an old science, it’s still underexplored and often misunderstood. In some fields, there are ongoing debates about the robustness and scientific validity of the field and related areas. This is largely because the findings don’t always rely only on hard statistical data.
    However, like any other scientific field, it follows standardized methodologies, and no result is accepted without proper grounding. What happens is that ethnobiology leans more toward the human sciences, placing human beings and traditional knowledge as key variables within its framework.
    To address these misconceptions, I believe it’s important to emphasize that ethnobiology produces solid and relevant knowledge—especially in the context of conservation and sustainable development. It offers insights that purely biological approaches might overlook and helps build bridges between science and society.
    The study focused on the várzea regions of the Lower Amazon River. CREDIT: Beatriz Cosendey.
    F: What are some of the areas of research you’d like to see tackled in the years ahead?
    BC: I’d like to see more conservation projects that include local communities as active participants rather than as passive observers. Incorporating their voices, perspectives, and needs not only makes initiatives more effective, but also more just. There is also great potential in recognizing and valuing traditional knowledge. Beyond its cultural significance, certain practices—such as the use of natural compounds—could become practical assets for other vulnerable regions. Once properly documented and understood, many of these approaches offer adaptable forms of environmental management and could help inform broader conservation strategies elsewhere.
    F: How has open science benefited the reach and impact of your research?
    BC: Open science is crucial for making research more accessible. By eliminating access barriers, it facilitates a broader exchange of knowledge—especially important for interdisciplinary research like mine, which draws on multiple knowledge systems and gains value when shared widely. For scientific work, it ensures that knowledge reaches a wider audience, including practitioners and policymakers. This openness fosters dialogue across different sectors, making research more inclusive and encouraging greater collaboration among diverse groups.
    The Q&A can also be read here.
    Q&A: How anacondas, chickens, and locals may be able to coexist in the Amazon
    A coiled giant anaconda. They are the largest snake species in Brazil and play a major role in legends including the ‘Boiuna’ and the ‘Cobra Grande.’ CREDIT: Beatriz Cosendey.
  • MillerKnoll opens new design archive showcasing over one million objects from the company’s history

    In a 12,000-square-foot warehouse in Zeeland, Michigan, hundreds of chairs, sofas, and loveseats rest on open storage racks. Their bold colors and elegant forms stand in striking contrast to the industrial setting. A plush recliner, seemingly made for sinking into, sits beside a mesh desk chair like those found in generic office cubicles. Nearby, a rare prototype of the Knoll Womb® Chair, gifted by Eero Saarinen to his mother, blooms open like a flower, inviting someone to sit. There’s also mahogany furniture designed by Gilbert Rohde for Herman Miller, originally unveiled at the 1933 World’s Fair; early office pieces by Florence Knoll; and a sculptural paper lamp by Isamu Noguchi. This is the newly unveiled MillerKnoll Archive, a space that honors the distinct legacies of its formerly rival brands. In collaboration with New York–based design firm Standard Issue, MillerKnoll has created a permanent display of its most iconic designs at the company’s Michigan Design Yard headquarters.

    In the early 1920s, Dutch-born businessman Herman Miller became the majority stakeholder in a Zeeland, Michigan, company where his son-in-law served as president. Following the acquisition, Star Furniture Co. was renamed the Herman Miller Furniture Company. Meanwhile, across the Atlantic in Stuttgart, Germany, Walter Knoll joined his family’s furniture business and formed close ties with modernist pioneers Ludwig Mies van der Rohe and Walter Gropius, immersing himself in the Bauhaus movement as Germany edged toward war. 
    Just before the outbreak of World War II, Walter Knoll relocated to the United States and established his own furniture company in New York City. Around the same time, Michigan native Florence Schust was studying at the Cranbrook Academy of Art under Eliel Saarinen. There, she met Eero Saarinen and Charles Eames. Schust, who later married Walter Knoll, and Saarinen would go on to become key designers for the company, while Eames would play a similarly pivotal role at Herman Miller—setting both firms on parallel paths in the world of modern design.
    The facility was designed in collaboration with New York–based design firm Standard Issue. The archive, located in MillerKnoll’s Design Yard headquarters, is 12,000 square feet and holds over one million objects. (Nicholas Calcott/Courtesy MillerKnoll) Formerly rivals, the two companies were united four years ago when Herman Miller acquired Knoll in a $1.8 billion deal that formed MillerKnoll. The merger brought together two of the most influential names in American furniture, along with the storied design legacies and iconic pieces that helped define modern design. Now, MillerKnoll is honoring the distinct histories of each brand through this new archive, a permanent home for the brands’ archival collections that also exhibits the evolution of modern design. The facility is organized into three distinct areas: an exhibition space, open storage, and a reading room.

    The facility’s first exhibition, Manufacturing Modern, explores the intertwined histories of Knoll and Herman Miller. It showcases designs from the individuals who helped shape each company. The open storage area displays over 300 pieces of modern furniture, featuring both original works from Knoll and Herman Miller as well as contemporary designs. In addition to viewing the furniture pieces, visitors can kick back in the reading room, which offers access to a collection of archival materials, including correspondence, photography, drawings, and textiles.
    The archive will be open for tours this summer. “The debut of the MillerKnoll Archives invites our communities to experience design history – and imagine its future – in one dynamic space,” said MillerKnoll’s chief creative and product officer Ben Watson. “The ability to not only understand how iconic designs came to be, but how design solutions evolved over time, is a never-ending source of inspiration.”
    Exclusive tours of the archive will be available in July and August in partnership with the Cranbrook Art Museum and in October in partnership with Docomomo.
    #millerknoll #opens #new #design #archive
    MillerKnoll opens new design archive showcasing over one million objects from the company’s history
    In a 12,000-square-foot warehouse in Zeeland, Michigan, hundreds of chairs, sofas, and loveseats rest on open storage racks. Their bold colors and elegant forms stand in striking contrast to the industrial setting. A plush recliner, seemingly made for sinking into, sits beside a mesh desk chair like those found in generic office cubicles. Nearby, a rare prototype of the Knoll Womb® Chair, gifted by Eero Saarinen to his mother, blooms open like a flower–inviting someone to sit. There’s also mahogany furniture designed by Gilbert Rohde for Herman Miller, originally unveiled at the 1933 World’s Fair; early office pieces by Florence Knoll; and a sculptural paper lamp by Isamu Noguchi. This is the newly unveiled MillerKnoll Archive, a space that honors the distinct legacies of its formerly rival brands. In collaboration with New York–based design firm Standard Issue, MillerKnoll has created a permanent display of its most iconic designs at the company’s Michigan Design Yard headquarters. In the early 1920s, Dutch-born businessman Herman Miller became the majority stakeholder in a Zeeland, Michigan, company where his son-in-law served as president. Following the acquisition, Star Furniture Co. was renamed the Herman Miller Furniture Company. Meanwhile, across the Atlantic in Stuttgart, Germany, Walter Knoll joined his family’s furniture business and formed close ties with modernist pioneers Ludwig Mies van der Rohe and Walter Gropius, immersing himself in the Bauhaus movement as Germany edged toward war.  Just before the outbreak of World War II, Walter Knoll relocated to the United States and established his own furniture company in New York City. Around the same time, Michigan native Florence Schust was studying at the Cranbrook Academy of Art under Eliel Saarinen. There, she met Eero Saarinen and Charles Eames. 
Schust, who later married Walter Knoll, and Saarinen would go on to become key designers for the company, while Eames would play a similarly pivotal role at Herman Miller—setting both firms on parallel paths in the world of modern design. The facility was designed in collaboration with New York-based design firm Standard Issue. The archive, located in MillerKnoll’s Design Yard Headquarters, is 12,000 square feet and holds over one million objects.Formerly seen as competitors, Herman Miller acquired Knoll four years ago in a billion merger that formed MillerKnoll. The deal united two of the most influential names in American furniture, merging their storied design legacies and the iconic pieces that helped define modern design. Now, MillerKnoll is honoring the distinct histories of each brand through this new archive. The archive is a permanent home for the brands’ archival collections and also exhibits the evolution of modern design. The facility is organized into three distinct areas: an exhibition space, open storage, and a reading room.  The facility’s first exhibition, Manufacturing Modern, explores the intertwined histories of Knoll and Herman Miller. It showcases designs from the individuals who helped shape each company. The open storage area displays over 300 pieces of modern furniture, featuring both original works from Knoll and Herman Miller as well as contemporary designs. In addition to viewing the furniture pieces, visitors can kick back in the reading room, which offers access to a collection of archival materials, including correspondence, photography, drawings, and textiles. 
The facility is organized into three distinct areas: an exhibition space, open storage, and a reading room and will be open for tours in partnership with the Cranbrook Art Academy this summer.“The debut of the MillerKnoll Archives invites our communities to experience design history – and imagine its future– in one dynamic space,” said MillerKnoll’s chief creative and product officer Ben Watson. “The ability to not only understand how iconic designs came to be, but how design solutions evolved over time, is a never-ending source of inspiration.” Exclusive tours of the archive will be available in July and August in partnership with the Cranbrook Art Museum and in October in partnership with Docomomo. #millerknoll #opens #new #design #archive
    WWW.ARCHPAPER.COM
    MillerKnoll opens new design archive showcasing over one million objects from the company’s history
    In a 12,000-square-foot warehouse in Zeeland, Michigan, hundreds of chairs, sofas, and loveseats rest on open storage racks. Their bold colors and elegant forms stand in striking contrast to the industrial setting. A plush recliner, seemingly made for sinking into, sits beside a mesh desk chair like those found in generic office cubicles. Nearby, a rare prototype of the Knoll Womb® Chair, gifted by Eero Saarinen to his mother, blooms open like a flower, inviting someone to sit. There’s also mahogany furniture designed by Gilbert Rohde for Herman Miller, originally unveiled at the 1933 World’s Fair; early office pieces by Florence Knoll; and a sculptural paper lamp by Isamu Noguchi. This is the newly unveiled MillerKnoll Archive, a space that honors the distinct legacies of its formerly rival brands. In collaboration with New York–based design firm Standard Issue, MillerKnoll has created a permanent display of its most iconic designs at the company’s Michigan Design Yard headquarters.
    In the early 1920s, Dutch-born businessman Herman Miller became the majority stakeholder in a Zeeland, Michigan, company where his son-in-law served as president. Following the acquisition, Star Furniture Co. was renamed the Herman Miller Furniture Company. Meanwhile, across the Atlantic in Stuttgart, Germany, Walter Knoll joined his family’s furniture business and formed close ties with modernist pioneers Ludwig Mies van der Rohe and Walter Gropius, immersing himself in the Bauhaus movement as Germany edged toward war. Just before the outbreak of World War II, Walter Knoll’s son Hans relocated to the United States and established his own furniture company in New York City. Around the same time, Michigan native Florence Schust was studying at the Cranbrook Academy of Art under Eliel Saarinen. There, she met Eero Saarinen and Charles Eames.
Schust, who later married Hans Knoll, and Saarinen would go on to become key designers for the company, while Eames would play a similarly pivotal role at Herman Miller, setting both firms on parallel paths in the world of modern design.
Once seen as competitors, the two brands were united four years ago when Herman Miller acquired Knoll in a $1.8 billion merger that formed MillerKnoll. The deal brought together two of the most influential names in American furniture, merging their storied design legacies and the iconic pieces that helped define modern design. Now, MillerKnoll is honoring the distinct histories of each brand through this new archive, a permanent home for the brands’ archival collections that also traces the evolution of modern design.
The facility is organized into three distinct areas: an exhibition space, open storage, and a reading room. Its first exhibition, Manufacturing Modern, explores the intertwined histories of Knoll and Herman Miller and showcases designs from the individuals who helped shape each company. The open storage area displays over 300 pieces of modern furniture, featuring original works from Knoll and Herman Miller as well as contemporary designs. In addition to viewing the furniture, visitors can kick back in the reading room, which offers access to a collection of archival materials, including correspondence, photography, drawings, and textiles.
“The debut of the MillerKnoll Archives invites our communities to experience design history – and imagine its future – in one dynamic space,” said MillerKnoll’s chief creative and product officer Ben Watson. “The ability to not only understand how iconic designs came to be, but how design solutions evolved over time, is a never-ending source of inspiration.” Exclusive tours of the archive will be available in July and August in partnership with the Cranbrook Art Museum and in October in partnership with Docomomo.
  • Stanford Doctors Invent Device That Appears to Be Able to Save Tons of Stroke Patients Before They Die

    Image by Andrew Brodhead
    Researchers have developed a novel device that literally spins away the clots that block blood flow to the brain and cause strokes. As Stanford explains in a blurb, the novel milli-spinner device may be able to save the lives of patients who experience ischemic stroke caused by clotting near the brain stem.
    Traditional clot removal, a process known as thrombectomy, generally uses a catheter that either vacuums up the blockage or ensnares it with a wire mesh — a procedure that's as rough and imprecise as it sounds. Conventional thrombectomy has a low efficacy rate because of this imprecision, and the procedure can result in pieces of the clot breaking off and traveling to more difficult-to-reach regions.
    Thrombectomy via milli-spinner also enters the brain through a catheter, but instead of a simple vacuum, it employs a spinning tube outfitted with fins and slits that removes the clot far more precisely.
    Stanford neuroimaging expert Jeremy Heit, who coauthored a new paper about the device in the journal Nature, explained in the school's press release that the efficacy of the milli-spinner is "unbelievable."
    "For most cases, we’re more than doubling the efficacy of current technology, and for the toughest clots — which we’re only removing about 11 percent of the time with current devices — we’re getting the artery open on the first try 90 percent of the time," Heit said. "This is a sea-change technology that will drastically improve our ability to help people."
    Renee Zhao, the paper's senior author, who teaches mechanical engineering at Stanford and creates what she calls "millirobots," said that conventional thrombectomies just aren't cutting it.
    "With existing technology, there’s no way to reduce the size of the clot," Zhao said. "They rely on deforming and rupturing the clot to remove it."
    "What’s unique about the milli-spinner is that it applies compression and shear forces to shrink the entire clot," she continued, "dramatically reducing the volume without causing rupture."
    Indeed, as the team discovered, the device can compress a clot and vacuum it away at as little as five percent of its original size.
    "It works so well, for a wide range of clot compositions and sizes," Zhao said. "Even for tough... clots, which are impossible to treat with current technologies, our milli-spinner can treat them using this simple yet powerful mechanics concept to densify the fibrin network and shrink the clot."
    Though its main experimental use case is brain clot removal, Zhao is excited about its other potential uses, too.
    "We’re exploring other biomedical applications for the milli-spinner design, and even possibilities beyond medicine," the engineer said. "There are some very exciting opportunities ahead."
    FUTURISM.COM
  • Turning Points: Accept & Proceed

    12 June, 2025

    In our turning points series, design studios share some of the key moments that shaped their business. This week, we meet Accept & Proceed.

    Accept & Proceed is a London-based brand and design studio that works with clients like NASA, Nike and LEGO.
    Founder David Johnston talks us through some of the decisions that defined his business.
    In 2006, Johnston took the leap to start his own business, armed with a good name and a willingness to bend the truth about his team…
    I’d gone through my career learning from big organisations, and one small organisation, and I felt like I wasn’t happy where I was. It was my dad who encouraged me to take a leap of faith and try and go it alone. With nothing more than a month’s wages in the bank and a lot of energy, I decided to go and set up an agency.
    That really just means giving yourself a name and starting to promote yourself in the world.
    Accept & Proceed founder David Johnston
    I think the name itself is a very important thing. I wanted something that was memorable but also layered in meaning. A name that starts with an “a” is very beneficial when you’re being listed in the index of books and things like that.
    But it became a bit of a compass for the way that we wanted to create work, around accepting the status quo for what it is, but with a continual commitment to proceed nonetheless.
    Because I didn’t have anyone to work with, in those early months I just made up email addresses of people that didn’t exist. That allowed me to cost projects up for multiple people. That’s obviously a degree of hustle I wouldn’t encourage in everyone, but it meant I was able to charge multiple day rates for projects where I was playing the role of four or five people.
    Self-initiated projects have long been part of the studio’s DNA and played a central role in building key client relationships.
    A&P by… was a brief to explore these letterforms without any commercial intent apart from the joy of creative expression. I started reaching out to illustrators and artists and photographers and designers that I really rated, and the things that started coming back were incredible.
    I was overwhelmed by the amount of energy and passion that people like Mr Bingo and Jason Evans were bringing to this.
    I think in so many ways, the answer to everything is community. I’ve gone on to work with a lot of the people that created these, and they also became friends. It was an early example of dissolving these illusionary boundaries around what an agency might be, but also expanding and amplifying your potential.
    The first of Accept & Proceed’s Light Calendars
    Then in 2006, I was trying to establish our portfolio and I wanted something to send out into the world that would also be an example of how Accept & Proceed thinks about design. I landed on these data visualisations that show the amount of light and darkness that would happen in London in the year ahead.
    I worked with a freelance designer called Stephen Heath on the first one – he is now our creative director.
    This kickstarted a 10-year exploration, and they became a rite of passage for new designers coming into the studio, who would take that same data and express it in completely new ways. It culminated in an exhibition in London in 2016, showing ten years of prints.
    They were a labour of love, but they also meant that every single year we had a number of prints that we could send out to new potential contacts. Still, when I go to the global headquarters of Nike in Beaverton, near Portland, I’m amazed at how many of these sit in leaders’ offices there.
    When we first got a finance director, they couldn’t believe how much we’d invested as a business in things like this – we even had our own gallery for a while. It doesn’t make sense from a purely numbers mindset, but if you put things out there for authentic reasons, there are ripple effects over time.
    In 2017, the studio became a B Corp, the fourth creative agency in the UK to gain the accreditation.
    Around 2016, I couldn’t help but look around – as we probably all have at varying points over the last 10 years – and wondered, what the fuck is going on?
    All these systems are not fit for purpose for the future – financial systems, food systems, relationship systems, energy systems. They’re not working. And I was like shit, are we part of the problem?
    Accept & Proceed’s work for the NASA Jet Propulsion Laboratory
    I’ve always thought of brand as a piece of technology that can fundamentally change our actions and the world around us. That comes with a huge responsibility.
    We probably paid four months’ wages of two people full-time just to get accredited, so it’s quite a high bar. But I like that the programme shackles you to this idea of improvement. You can’t rest on your laurels if you want to be re-accredited. It’s like the way design works as an iterative process – you have to keep getting better.
    In 2019, Johnston and his team started thinking seriously about the studio’s own brand, and created a punchy, nuanced new positioning.
    We got to a point where we’d proven we could help brands achieve their commercial aims. But we wanted to hold a position ourselves, not just be a conduit between a brand and its audience.
    It still amazes me that so few agencies actually stand for anything. We realised that all the things – vision, mission, principles – that we’ve been creating for brands for years, we hadn’t done for ourselves.
    It’s a bit like when you see a hairdresser with a really dodgy haircut. But it’s hard to cut your own hair.
    So we went through that process, which was really difficult, and we landed on “Design for the future” as our promise to the world.
    And if you’re going to have that as a promise, you better be able to describe the world you’re creating through your work, which we call “the together world.”
    Accept & Proceed’s work for Second Sea
    We stand at this most incredible moment in history where the latest technology and science is catching up with ancient wisdom, to know that we must become more entangled, more together, more whole.
    And we’ve assessed five global shifts that are happening in order to be able to take us towards a more together world through our work – interbeing, reciprocity, healing, resilience and liberation.
    The year before last, we lost three global rebrand projects based on our positioning. Every one of them said to me, “You’re right but we’re not ready.”
    But this year, I think the product-market fit of what we’ve been saying for the last five years is really starting to mesh. We’re working with Arc’teryx on their 2030 landscape, evolving Nike’s Move to Zero, and working with LEGO on what their next 100 years might look like, which is mind-boggling work.
    I don’t think we could have won any of those opportunities had we not been talking for quite a long time about design for the future.
    In 2023, Johnston started a sunrise gathering on Hackney Marshes, which became a very significant part of his life.
    I had the flu, and in my dreamy fluey state I had a vision of a particular spot on Hackney Marshes where people were gathering and watching the sunrise. I happened to tell my friend, the poet Thomas Sharp, about this, and he said, “That’s a premonition. You have to make it happen.”
    The first year there were five of us – this year there were 300 people for the spring equinox in March.
    I don’t fully know what these gatherings will lead to. Will Accept & Proceed start to introduce the seasons to the way we operate as a business? It’s a thought I’ve had percolating, but I don’t know. Will it be something else?
    One of the 2024 sunrise gatherings organised by Accept & Proceed founder David Johnston
    I do know that there’s major learnings around authentic community building for brands. We should do away with these buckets we put people into, of age group and location. They aren’t very true. It’s fascinating to see the breadth of people who come to these gatherings.
    Me and Laura were thinking at some point of moving out of London, but I think these sunrise gatherings are now my reason to stay. It’s the thing I didn’t know I needed until I had it. They have made London complete for me.
    There’s something so ancient about watching our star rise, and the reminder that we are actually just animals crawling upon the surface of a planet of mud. That’s what’s real. But it can be hard to remember that when you’re sitting at your computer in the studio.
    These gatherings help me better understand creativity’s true potential, for brands, for the world, and for us.

    WWW.DESIGNWEEK.CO.UK
    Turning Points: Accept & Proceed
    12 June, 2025 In our turning points series, design studios share some of the key moments that shaped their business. This week, we meet Accept & Proceed. Accept & Proceed is a London based brand and design studio that works with clients like NASA, Nike and LEGO. Founder David Johnston talks us through some of the decisions that defined his business. In 2006, Johnston took the leap to start his own business, armed with a good name and a willingness to bend the truth about his team… I’d gone through my career learning from big organisations, and one small organisation, and I felt like I wasn’t happy where I was. It was my dad who encouraged me to take a leap of faith and try and go it alone. With nothing more than a month’s wages in the bank and a lot of energy, I decided to go and set up an agency. That really just means giving yourself a name and starting to promote yourself in the world. Accept & Proceed founder David Johnston I think the name itself is a very important thing. I wanted something that was memorable but also layered in meaning. A name that starts with an “a” is very beneficial when you’re being listed in the index of books and things like that. But it became a bit of a compass for the way that we wanted to create work, around accepting the status quo for what it is, but with a continual commitment to proceed nonetheless. Because I didn’t have anyone to work with, in those early months I just made up email addresses of people that didn’t exist. That allowed me to cost projects up for multiple people. That’s obviously a degree of hustle I wouldn’t encourage in everyone, but it meant I was able to charge multiple day rates for projects where I was playing the role of four or five people. Self-initiated projects have long been part of the studio’s DNA and played a key role in building key client relationships. A&P by… was a brief to explore these letterforms without any commercial intent apart from the joy of creative expression. 
I started reaching out to illustrators, artists, photographers and designers that I really rated, and the things that started coming back were incredible. I was overwhelmed by the amount of energy and passion that people like Mr Bingo and Jason Evans were bringing to this. I think in so many ways, the answer to everything is community. I’ve gone on to work with a lot of the people that created these, and they also became friends. It was an early example of dissolving these illusionary boundaries around what an agency might be, but also expanding and amplifying your potential.

The first of Accept & Proceed’s Light Calendars

Then in 2006, I was trying to establish our portfolio and I wanted something to send out into the world that would also be an example of how Accept & Proceed thinks about design. I landed on these data visualisations that show the amount of light and darkness that would happen in London in the year ahead. I worked with a freelance designer called Stephen Heath on the first one – he is now our creative director.

This kickstarted a 10-year exploration, and they became a rite of passage for new designers that came into the studio: to take that very similar data and express it in completely new ways. It culminated in an exhibition in London in 2016, showing ten years of prints. They were a labour of love, but they also meant that every single year we had a number of prints that we could send out to new potential contacts. Still, when I go to the global headquarters of Nike in Beaverton, near Portland, I’m amazed at how many of these sit in leaders’ offices there.

When we first got a finance director, they couldn’t believe how much we’d invested as a business in things like this – we even had our own gallery for a while. It doesn’t make sense from a purely numbers mindset, but if you put things out there for authentic reasons, there are ripple effects over time.
In 2017, the studio became a B Corp, the fourth creative agency in the UK to get the accreditation.

Around 2016, I couldn’t help but look around – as we probably all have at varying points over the last 10 years – and wonder, what the fuck is going on? All these systems are not fit for purpose for the future – financial systems, food systems, relationship systems, energy systems. They’re not working. And I was like, shit, are we part of the problem?

Accept & Proceed’s work for the NASA Jet Propulsion Laboratory

I’ve always thought of brand as a piece of technology that can fundamentally change our actions and the world around us. That comes with a huge responsibility. We probably paid four months’ wages of two people full-time just to get accredited, so it’s quite a high bar. But I like that the programme shackles you to this idea of improvement. You can’t rest on your laurels if you want to be re-accredited. It’s like the way design works as an iterative process – you have to keep getting better.

In 2019, Johnston and his team started thinking seriously about the studio’s own brand, and created a punchy, nuanced new positioning.

We got to a point where we’d proven we could help brands achieve their commercial aims. But we wanted to hold a position ourselves, not just be a conduit between a brand and its audience. It still amazes me that so few agencies actually stand for anything. We realised that all the things – vision, mission, principles – that we’d been creating for brands for years, we hadn’t done for ourselves. It’s a bit like when you see a hairdresser with a really dodgy haircut. But it’s hard to cut your own hair. So we went through that process, which was really difficult, and we landed on “Design for the future” as our promise to the world.
And if you’re going to have that as a promise, you’d better be able to describe the world you’re creating through your work, which we call “the together world.”

Accept & Proceed’s work for Second Sea

We stand at this most incredible moment in history where the latest technology and science is catching up with ancient wisdom, to know that we must become more entangled, more together, more whole. And we’ve assessed five global shifts that are happening in order to take us towards a more together world through our work: interbeing, reciprocity, healing, resilience and liberation.

The year before last, we lost three global rebrand projects based on our positioning. Every one of them said to me, “You’re right, but we’re not ready.” But this year, I think the product-market fit of what we’ve been saying for the last five years is really starting to mesh. We’re working with Arc’teryx on their 2030 landscape, evolving Nike’s Move to Zero, and working with LEGO on what their next 100 years might look like, which is mind-boggling work. I don’t think we could have won any of those opportunities had we not been talking for quite a long time about design for the future.

In 2023, Johnston started a sunrise gathering on Hackney Marshes, which became a very significant part of his life.

I had the flu, and in my dreamy, fluey state I had a vision of a particular spot on Hackney Marshes where people were gathering and watching the sunrise. I happened to tell my friend, the poet Thomas Sharp, about this, and he said, “That’s a premonition. You have to make it happen.” The first year there were five of us – this year there were 300 people for the spring equinox in March. I don’t fully know what these gatherings will lead to. Will Accept & Proceed start to introduce the seasons to the way we operate as a business? It’s a thought I’ve had percolating, but I don’t know. Will it be something else?
One of the 2024 sunrise gatherings organised by Accept & Proceed founder David Johnston

I do know that there are major learnings around authentic community building for brands. We should do away with these buckets we put people into, of age group and location. They aren’t very true. It’s fascinating to see the breadth of people who come to these gatherings.

Me and Laura were thinking at some point of moving out of London, but I think these sunrise gatherings are now my reason to stay. It’s the thing I didn’t know I needed until I had it. They have made London complete for me. There’s something so ancient about watching our star rise, and the reminder that we are actually just animals crawling upon the surface of a planet of mud. That’s what’s real. But it can be hard to remember that when you’re sitting at your computer in the studio. These gatherings help me better understand creativity’s true potential: for brands, for the world, and for us.
  • 6 Years to Make a Fan, G370A Budget Case, & Phanteks Technical Fan Discussion, ft. CTO

Cases News | June 9, 2025 | Last Updated: 2025-06-09

We cover Phanteks’ new G370A budget case, the XT M3, and the Evolv X2 Matrix.

The Highlights
- Phanteks’ new X2 Matrix case has 900 LEDs and is aiming to be around $200
- Phanteks’ G370A is a $60 case that includes 3x120mm fans
- The company has a new T30-140 fan that required 6 years of engineering to make

Credits
Host: Steve Burke
Camera, Video Editing: Mike Gaglione, Vitalii Makhnovets
Writing, Web Editing: Jimmy Thang

Intro

We visited Phanteks’ suite at Computex 2025, where the company showed off several cases along with a fan that took roughly 6 years to make.

Editor’s note: This was originally published on May 21, 2025 as a video. This content has been adapted to written format for this article and is unchanged from the original publication.

Phanteks Matrix Cases

We’ve talked about Phanteks’ X2 case in the past, but the company was showing off its new Matrix version, which has matrix LEDs. The X2 Matrix has 900 LEDs in a 10x90 layout. It’s supposed to be about $30 to $40 more expensive than the base X2, which means it should end up around $200. The interesting thing about the case is that the LEDs wrap around the chassis. In terms of communication, the LEDs connect to the motherboard via USB 2.0 and use SATA for power. This allows Phanteks to bypass a WinRing0-type situation. Another Matrix case had 600 LEDs in a 10x60 configuration and is supposed to be about $120. Phanteks also has software that allows you to reconfigure what the LEDs display.
When we got to the company’s suite, it had been programmed to say, “Gamers Nexus here,” which was cool to see. We also saw that the LEDs can be used to highlight CPU temperature.

Phanteks G370A

Phanteks also showed off its G370A, a $60 case that includes 3x120mm fans in the front coupled with a mesh front panel offering 38% hole porosity; the company tells us that manufacturing typically yields around 25%. It has a glass side panel, while the back side panel is plain steel with no ventilation. Looking at the placement of the front fans, we asked Phanteks why they weren’t mounted higher so the bottom fan could get more exposure above the power supply shroud, and the answer was simply clearance for a 360mm radiator at the top. There’s not a lot of room for air coming into the shroud, though some of it will pass through the cable pass-through if it’s empty. The back of the case features a drive mount.

XT M3

The company also showed off a Micro ATX case called the XT M3. It comes with 3 fans and is $70. For its front panel, it has a unique punch-out for its fans. The top panel is mostly standard ventilation, but one side provides less airflow, covering where the PSU would exhaust. The side panel does have punch-outs for the PSU, however. We don’t test power supplies, though that may change in the future; power supplies can take a lot of thermal abuse, so we’re not super concerned here. The case should be shipping in the next month or so and is 39.5 liters, which includes the feet. We appreciate that, as not a lot of companies factor that in. There’s also a lot of cable management depth on the back, and the case supports BTF.
In addition, there’s a panel that clamps down all of the power supply cables.

T30 Fan

Phanteks’ T30-140 is a 140mm fan that took the company 6 years to make. The company is competing with Noctua in the high-end fan space, but is going for a grey theme instead of brown.

Phanteks CTO Tenzin Rongen Interview

Finally, we interviewed Phanteks CTO Tenzin Rongen to discuss the technical details behind the company’s long-developed fans. Make sure to check it out in our video.
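The 900-LED, 10x90 matrix described above amounts to a small frame buffer that software paints text and temperature readouts into. As a purely illustrative sketch (hypothetical row-major layout and names we've chosen for illustration, not Phanteks' actual software or USB protocol), mapping a (row, column) coordinate to a flat LED index might look like:

```python
# Illustrative sketch of addressing a 10x90 LED matrix as a flat frame buffer.
# The layout and function names are hypothetical, not Phanteks' firmware.

ROWS, COLS = 10, 90  # X2 Matrix: 900 LEDs in a 10x90 layout

def led_index(row: int, col: int) -> int:
    """Map a (row, col) coordinate to a flat LED index, row-major."""
    if not (0 <= row < ROWS and 0 <= col < COLS):
        raise ValueError("coordinate out of range")
    return row * COLS + col

def blank_frame() -> list:
    """One frame: an (R, G, B) tuple per LED, initialised to off."""
    return [(0, 0, 0)] * (ROWS * COLS)

frame = blank_frame()
frame[led_index(0, 0)] = (255, 255, 255)   # top-left LED on, white
frame[led_index(9, 89)] = (255, 0, 0)      # bottom-right LED on, red
```

A real implementation would then serialise each frame and push it over the USB 2.0 link the article mentions, but that protocol is not public, so the sketch stops at the frame buffer.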