• Schedule 1 Patch Notes Includes Off-Road Skateboard

    Schedule 1, the silly-looking drug-dealing game that took the gaming community by storm a few months back, got a new patch today, and it's headlined by the addition of an off-road skateboard. It also includes some bug fixes, tweaks, and improvements, such as a change to how stamina is consumed while skateboarding.

    The off-road skateboard is added to the inventory on sale at the Shred Shack, where it'll cost you $1,500. While minor in the grand scheme of things, it lets you live out your mountain-boarding dreams. If you're of a certain age, it might even let you reminisce about the mountain-board levels in Rocket Power: Beach Bandits for the PS2.

    This patch also tweaks a couple of other skateboarding-related things. First, the developer notes that it implemented some minor changes for skateboard animations. Second, stamina consumption while on a skateboard has changed from instantaneous to gradual, which will likely smooth out the skateboarding experience.

    Continue Reading at GameSpot
    WWW.GAMESPOT.COM
  • BOUNCING FROM RUBBER DUCKIES AND FLYING SHEEP TO CLONES FOR THE BOYS SEASON 4

    By TREVOR HOGG
    Images courtesy of Prime Video.

    For those seeking an alternative to the MCU, Prime Video has two offerings of the live-action and animated variety that take the superhero genre into R-rated territory where the hands of the god-like figures get dirty, bloodied and severed. “The Boys is about the intersection of celebrity and politics using superheroes,” states Stephan Fleet, VFX Supervisor on The Boys. “Sometimes I see the news and I don’t even know if we can write to catch up to it! But we try. Invincible is an intense look at an alternate DC Universe that has more grit to the superhero side of it all. On one hand, I was jealous watching Season 1 of Invincible because in animation you can do things that you can’t do in real life on a budget.” Season 4 does not tone down the blood, gore and body count. Fleet notes, “The writers almost have this dialogue with us. Sometimes, they’ll write in the script, ‘And Fleet will come up with a cool visual effect for how to kill this person.’ Or, ‘Chhiu, our fight coordinator, will make an awesome fight.’ It is a frequent topic of conversation. We’re constantly trying to be inventive and create new ways to kill people!”

    When Splinter (Rob Benedict) splits in two, the cloning effect was inspired by cellular mitosis.

    “The writers almost have this dialogue with us. Sometimes, they’ll write in the script, ‘And Fleet will come up with a cool visual effect for how to kill this person.’ Or, ‘Chhiu, our fight coordinator, will make an awesome fight.’ It is a frequent topic of conversation. We’re constantly trying to be inventive and create new ways to kill people!”
    —Stephan Fleet, VFX Supervisor

    A total of 1,600 visual effects shots were created for the eight episodes by ILM, Pixomondo, MPC Toronto, Spin VFX, DNEG, Untold Studios, Luma Pictures and Rocket Science VFX. Previs was a critical part of the process. “We have John Griffith, who owns a small company called CNCPT out of Texas, and he does wonderful Unreal Engine level previs,” Fleet remarks. “On set, we have a cartoon of what is going to be done, and you’ll be amazed, specifically for action and heavy visual effects stuff, how close those shots are to the previs when we finish.” Founding Director of Federal Bureau of Superhuman Affairs, Victoria Neuman, literally gets ripped in half by two tendrils coming out of Compound V-enhanced Billy Butcher, the leader of superhero resistance group The Boys. “The word that we like to use on this show is ‘grounded,’ and I like to say ‘grounded’ with an asterisk in this day and age because we’re grounded until we get to killing people in the craziest ways. In this case, having someone floating in the air and being ripped in half by two tendrils was all CG.”

    Multiple plates were shot to enable Simon Pegg to phase through the actor lying in a hospital bed.

    Testing can get rather elaborate. “For that end scene with Butcher’s tendrils, the room was two stories, and we were able to put the camera up high along with a bunch of blood cannons,” Fleet recalls. “When the body rips in half and explodes, there is a practical component. We rained down a bunch of real blood and guts right in front of Hughie. It’s a known joke that we like to douse Jack Quaid with blood as much as possible! In this case, the special effects team led by Hudson Kenny needed to test it the day before, and I said, ‘I’ll be the guinea pig for the test.’ They covered the whole place with plastic like it was a Dexter kill room because you don’t want to destroy the set. I’m standing there in a white hazmat suit with goggles on, covered from head to toe in plastic and waiting as they’re tweaking all of these things. It sounds like World War II going on. They’re on walkie talkies to each other, and then all of a sudden, it’s ‘Five, four, three, two, one…’ And I get exploded with blood. I wanted to see what it was like, and it’s intense.”

    “On set, we have a cartoon of what is going to be done, and you’ll be amazed, specifically for action and heavy visual effects stuff, how close those shots are to the previs when we finish.”
    —Stephan Fleet, VFX Supervisor

    The Deep has a love affair with an octopus called Ambrosius, voiced by Tilda Swinton. “It’s implied bestiality!” Fleet laughs. “I would call it more of a romance. What was fun from my perspective is that I knew what the look was going to be, so then it’s about putting in the details and the animation. One of the instincts that you always have when you’re making a sea creature that talks to a human is that you tend to want to give it human gestures and eyebrows. Erik Kripke said, ‘No. We have to find things that an octopus could do that conveys the same emotion.’ That’s when ideas came in, such as putting a little The Deep toy inside the water tank. When Ambrosius is trying to have an intimate moment or connect with him, she can wrap a tentacle around that. My favorite experience doing Ambrosius was when The Deep is reading poetry to her on a bed. CG creatures touching humans is one of the more complicated things to do and make look real. Ambrosius’ tentacles reach for his arm, and it becomes an intimate moment. More than touching the skin, displacing the bedsheet as Ambrosius moved ended up becoming a lot of CG, and we had to go back and forth a few times to get that looking right; that turned out to be tricky.”

    A building is replaced by a massive crowd attending a rally being held by Homelander.

    In a twisted form of sexual foreplay, Sister Sage has The Deep perform a transorbital lobotomy on her. “Thank you, Amazon, for selling lobotomy tools as novelty items!” Fleet chuckles. “We filmed it with a lobotomy tool on set. There is a lot of safety involved in doing something like that. Obviously, you don’t want to put any performer in any situation where they come close to putting anything real near their eye. We created this half lobotomy tool and did this complicated split screen with the lobotomy tool on a teeter-totter. The Deep was in one shot and Sister Sage reacted in the other shot. To marry the two ended up being a lot of CG work. Then there are these close-ups which are full CG. I always keep a dummy head that is painted gray that I use all of the time for reference. In macrophotography I filmed this lobotomy tool going right into the eye area. I did that because the tool is chrome, so it’s reflective and has ridges. It has an interesting reflective property. I was able to see how and what part of the human eye reflects onto the tool. A lot of that shot became about realistic reflections and lighting on the tool. Then heavy CG for displacing the eye and pushing the lobotomy tool into it. That was one of the more complicated sequences that we had to achieve.”

    In order to create an intimate moment between Ambrosius and The Deep, a toy version of the superhero was placed inside of the water tank that she could wrap a tentacle around.

    “The word that we like to use on this show is ‘grounded,’ and I like to say ‘grounded’ with an asterisk in this day and age because we’re grounded until we get to killing people in the craziest ways. In this case, having someone floating in the air and being ripped in half by two tendrils was all CG.”
    —Stephan Fleet, VFX Supervisor

    Sheep and chickens embark on a violent rampage courtesy of Compound V, with the latter piercing the chest of a bodyguard belonging to Victoria Neuman. “Weirdly, that was one of our more traditional shots,” Fleet states. “What is fun about that one is I asked for real chickens as reference. The chicken flying through his chest is real. It’s our chicken wrangler in a green suit gently tossing a chicken. We blended two real plates together with some CG in the middle.” A connection was made with a sci-fi classic. “The sheep kill this bull, and we shot it in this narrow corridor of fencing. When they run, I always equated it to the Trench Run in Star Wars and looked at the sheep as TIE fighters or X-wings coming at them.” The scene was one of the scarier moments for the visual effects team. Fleet explains, “When I read the script, I thought this could be the moment where we jump the shark. For the shots where the sheep are still and scream to the camera, Untold Studios did a bunch of R&D and came up with baboon teeth. I tried to keep anything real as much as possible, but, obviously, when sheep are flying, they have to be CG. I call it the Battlestar Galactica theory, where I like to shake the camera, overshoot shots and make it sloppy when they’re in the air so you can add motion blur. Comedy also helps sell visual effects.”

    The sheep injected with Compound V develop the ability to fly and were shot in an imperfect manner to help ground the scenes.

    Once injected with Compound V, Hugh Campbell Sr. develops the ability to phase through objects, including human beings. “We called it the Bro-nut because his name in the script is Wall Street Bro,” Fleet notes. “That was a complicated motion control shot, repeating the move over and over again. We had to shoot multiple plates of Simon Pegg and the guy in the bed. Special effects and prosthetics created a dummy guy with a hole in his chest with practical blood dripping down. It was meshing it together and getting the timing right in post. On top of that, there was the CG blood immediately around Simon Pegg.” The phasing effect had to avoid appearing as a dissolve. “I had this idea of doing high-frequency vibration on the X axis loosely based on how The Flash vibrates through walls. You want everything to have a loose motivation that then helps trigger the visuals. We tried not to overcomplicate that because, ultimately, you want something like that to be quick. If you spend too much time on phasing, it can look cheesy. In our case, it was a lot of false walls. Simon Pegg is running into a greenscreen hole which we plug in with a wall or coming out of one. I went off the actor’s action, and we added a light opacity mix with some X-axis shake.”

    Providing a different twist to the fights was the replacement of spurting blood with photoreal rubber duckies during a drug-induced hallucination.

    Homelander breaks a mirror, which emphasizes his multiple personality disorder. “The original plan was that special effects was going to pre-break a mirror, and we were going to shoot Antony Starr moving his head doing all of the performances in the different parts of the mirror,” Fleet reveals. “This was all based on a photo that my ex-brother-in-law sent me. He was walking down a street in Glendale, California, came across a broken mirror that someone had thrown out, and took a photo of himself where he had five heads in the mirror. We get there on the day, and I’m realizing that this is really complicated. Antony has to do these five different performances, and we have to deal with infinite mirrors. At the last minute, I said, ‘We have to do this on a clean mirror.’ We did it on a clean mirror and gave Antony different eyelines. The mirror break was all done in post, and we were able to cheat his head slightly and art-direct where the break crosses his chin. Editorial was able to do split screens for the timing of the dialogue.”

    “For the shots where the sheep are still and scream to the camera, Untold Studios did a bunch of R&D and came up with baboon teeth. I tried to keep anything real as much as possible, but, obviously, when sheep are flying, they have to be CG. I call it the Battlestar Galactica theory, where I like to shake the camera, overshoot shots and make it sloppy when they’re in the air so you can add motion blur. Comedy also helps sell visual effects.”
    —Stephan Fleet, VFX Supervisor

    Initially, the plan was to use a practical mirror, but creating a digital version proved to be the more effective solution.

    A different spin on the bloodbath occurs during a fight when a drugged Frenchie hallucinates as Kimiko Miyashiro goes on a killing spree. “We went back and forth with a lot of different concepts for what this hallucination would be,” Fleet remarks. “When we filmed it, we landed on Frenchie having a synesthesia moment where he’s seeing a lot of abstract colors flying in the air. We started getting into that in post and it wasn’t working. We went back to the rubber duckies, which goes back to the story of him in the bathtub. What’s in the bathtub? Rubber duckies, bubbles and water. There was a lot of physics and logic required to figure out how these rubber duckies could float out of someone’s neck. We decided on bubbles when Kimiko hits people’s heads. At one point, we had water when she got shot, but it wasn’t working, so we killed it. We probably did about 100 different versions. We got really detailed with our rubber duckie modeling because we didn’t want it to look cartoony. That took a long time.”

    Ambrosius, voiced by Tilda Swinton, gets a lot more screentime in Season 4.

    Splinter (Rob Benedict) splitting in two was achieved heavily in CG. “Erik threw out the words ‘cellular mitosis’ early on as something he wanted to use,” Fleet states. “We shot Rob Benedict on a greenscreen doing all of the different performances for the clones that pop out. It was a crazy amount of CG work with Houdini and particle and skin effects. We previs’d the sequence so we had specific actions. One clone comes out to the right and the other pulls backwards.” What tends to go unnoticed by many is Splinter’s clones setting up for a press conference being held by Firecracker. “It’s funny how no one brings up the 22-hour motion control shot that we had to do with Splinter on the stage, which was the most complicated shot!” Fleet observes. “We have this sweeping long shot that brings you into the room and follows Splinter as he carries a container to the stage and hands it off to a clone, and then you reveal five more of them interweaving each other and interacting with all of these objects. It’s like a minute-long dance. First off, you have to choreograph it. We previs’d it, but then you need to get people to do it. We hired dancers and put different colored armbands on them. The camera is like another performer, and a metronome is going, which enables you to find a pace. That took about eight hours of rehearsal. Then Rob has to watch each one of their performances and mimic it to the beat. When he is handing off a box of cables, it’s to a double who is going to have to be erased and be him on the other side. They have to be almost perfect in their timing and lineup in order to take it over in visual effects and make it work.”
    WWW.VFXVOICE.COM
    BOUNCING FROM RUBBER DUCKIES AND FLYING SHEEP TO CLONES FOR THE BOYS SEASON 4
    By TREVOR HOGG. Images courtesy of Prime Video.

For those seeking an alternative to the MCU, Prime Video has two offerings of the live-action and animated variety that take the superhero genre into R-rated territory, where the hands of the god-like figures get dirty, bloodied and severed. “The Boys is about the intersection of celebrity and politics using superheroes,” states Stephan Fleet, VFX Supervisor on The Boys. “Sometimes I see the news and I don’t even know we can write to catch up to it! But we try. Invincible is an intense look at an alternate DC Universe that has more grit to the superhero side of it all. On one hand, I was jealous watching Season 1 of Invincible because in animation you can do things that you can’t do in real life on a budget.”

Season 4 does not tone down the blood, gore and body count. Fleet notes, “The writers almost have this dialogue with us. Sometimes, they’ll write in the script, ‘And Fleet will come up with a cool visual effect for how to kill this person.’ Or, ‘Chhiu, our fight coordinator, will make an awesome fight.’ It is a frequent topic of conversation. We’re constantly trying to be inventive and create new ways to kill people!”

When Splinter (Rob Benedict) splits in two, the cloning effect was inspired by cellular mitosis.

A total of 1,600 visual effects shots were created for the eight episodes by ILM, Pixomondo, MPC Toronto, Spin VFX, DNEG, Untold Studios, Luma Pictures and Rocket Science VFX. Previs was a critical part of the process. 
“We have John Griffith [Previs Director], who owns a small company called CNCPT out of Texas, and he does wonderful Unreal Engine-level previs,” Fleet remarks. “On set, we have a cartoon of what is going to be done, and you’ll be amazed, specifically for action and heavy visual effects stuff, how close those shots are to the previs when we finish.”

Victoria Neuman, Founding Director of the Federal Bureau of Superhuman Affairs, literally gets ripped in half by two tendrils coming out of Compound V-enhanced Billy Butcher, the leader of the superhero resistance group The Boys. “The word that we like to use on this show is ‘grounded,’ and I like to say ‘grounded’ with an asterisk in this day and age because we’re grounded until we get to killing people in the craziest ways. In this case, having someone floating in the air and being ripped in half by two tendrils was all CG.”

Multiple plates were shot to enable Simon Pegg to phase through the actor lying in a hospital bed.

Testing can get rather elaborate. “For that end scene with Butcher’s tendrils, the room was two stories, and we were able to put the camera up high along with a bunch of blood cannons,” Fleet recalls. “When the body rips in half and explodes, there is a practical component. We rained down a bunch of real blood and guts right in front of Huey. It’s a known joke that we like to douse Jack Quaid with blood as much as possible! In this case, the special effects team led by Hudson Kenny needed to test it the day before, and I said, ‘I’ll be the guinea pig for the test.’ They covered the whole place with plastic like it was a Dexter kill room because you don’t want to destroy the set. I’m standing there in a white hazmat suit with goggles on, covered from head to toe in plastic and waiting as they’re tweaking all of these things. It sounds like World War II going on. They’re on walkie-talkies to each other, and then all of a sudden, it’s ‘Five, four, three, two, one…’ And I get exploded with blood. 
I wanted to see what it was like, and it’s intense.”

The Deep has a love affair with an octopus called Ambrosius, voiced by Tilda Swinton. “It’s implied bestiality!” Fleet laughs. “I would call it more of a romance. What was fun from my perspective is that I knew what the look was going to be [from Season 3], so then it’s about putting in the details and the animation. One of the instincts that you always have when you’re making a sea creature that talks to a human [is] you tend to want to give it human gestures and eyebrows. Erik Kripke [Creator, Executive Producer, Showrunner, Director, Writer] said, ‘No. We have to find things that an octopus could do that convey the same emotion.’ That’s when ideas came in, such as putting a little The Deep toy inside the water tank. When Ambrosius is trying to have an intimate moment or connect with him, she can wrap a tentacle around that. My favorite experience doing Ambrosius was when The Deep is reading poetry to her on a bed. CG creatures touching humans is one of the more complicated things to do and make look real. Ambrosius’ tentacles reach for his arm, and it becomes an intimate moment. More than touching the skin, displacing the bedsheet as Ambrosius moved ended up becoming a lot of CG, and we had to go back and forth a few times to get that looking right; that turned out to be tricky.”

A building is replaced by a massive crowd attending a rally being held by Homelander.

In a twisted form of sexual foreplay, Sister Sage has The Deep perform a transorbital lobotomy on her. “Thank you, Amazon, for selling lobotomy tools as novelty items!” Fleet chuckles. “We filmed it with a lobotomy tool on set. There is a lot of safety involved in doing something like that. 
Obviously, you don’t want to put any performer in any situation where they come close to putting anything real near their eye. We created this half lobotomy tool and did this complicated split screen with the lobotomy tool on a teeter-totter. The Deep was [acting in a certain way] in one shot and Sister Sage reacted in the other shot. To marry the two ended up being a lot of CG work. Then there are these close-ups which are full CG. I always keep a dummy head that is painted gray that I use all of the time for reference. Using macrophotography, I filmed this lobotomy tool going right into the eye area. I did that because the tool is chrome, so it’s reflective and has ridges. It has an interesting reflective property. I was able to see how and what part of the human eye reflects onto the tool. A lot of that shot became about realistic reflections and lighting on the tool. Then heavy CG for displacing the eye and pushing the lobotomy tool into it. That was one of the more complicated sequences that we had to achieve.”

In order to create an intimate moment between Ambrosius and The Deep, a toy version of the superhero was placed inside the water tank that she could wrap a tentacle around.

Sheep and chickens embark on a violent rampage courtesy of Compound V, with the latter piercing the chest of a bodyguard belonging to Victoria Neuman. “Weirdly, that was one of our more traditional shots,” Fleet states. “What is fun about that one is I asked for real chickens as reference. The chicken flying through his chest is real. It’s our chicken wrangler in a green suit gently tossing a chicken. 
We blended two real plates together with some CG in the middle.”

A connection was made with a sci-fi classic. “The sheep kill this bull, and we shot it in this narrow corridor of fencing. When they run, I always equated it to the Trench Run in Star Wars and looked at the sheep as TIE fighters or X-wings coming at them.”

The scene was one of the scarier moments for the visual effects team. Fleet explains, “When I read the script, I thought this could be the moment where we jump the shark. For the shots where the sheep are still and scream to the camera, Untold Studios did a bunch of R&D and came up with baboon teeth. I tried to keep anything real as much as possible, but, obviously, when sheep are flying, they have to be CG. I call it the Battlestar Galactica theory, where I like to shake the camera, overshoot shots and make it sloppy when they’re in the air so you can add motion blur. Comedy also helps sell visual effects.”

The sheep injected with Compound V develop the ability to fly and were shot in an imperfect manner to help ground the scenes.

Once injected with Compound V, Hugh Campbell Sr. (Simon Pegg) develops the ability to phase through objects, including human beings. “We called it the Bro-nut because his name in the script is Wall Street Bro,” Fleet notes. “That was a complicated motion control shot, repeating the move over and over again. We had to shoot multiple plates of Simon Pegg and the guy in the bed. Special effects and prosthetics created a dummy guy with a hole in his chest with practical blood dripping down. It was meshing it together and getting the timing right in post. On top of that, there was the CG blood immediately around Simon Pegg.”

The phasing effect had to avoid appearing as a dissolve. “I had this idea of doing high-frequency vibration on the X axis, loosely based on how The Flash vibrates through walls. You want everything to have a loose motivation that then helps trigger the visuals. 
We tried not to overcomplicate that because, ultimately, you want something like that to be quick. If you spend too much time on phasing, it can look cheesy. In our case, it was a lot of false walls. Simon Pegg is running into a greenscreen hole which we plug in with a wall or coming out of one. I went off the actor’s action, and we added a light opacity mix with some X-axis shake.”

Providing a different twist to the fights was the replacement of spurting blood with photoreal rubber duckies during a drug-induced hallucination.

Homelander (Antony Starr) breaks a mirror, which emphasizes his multiple personality disorder. “The original plan was that special effects was going to pre-break a mirror, and we were going to shoot Antony Starr moving his head doing all of the performances in the different parts of the mirror,” Fleet reveals. “This was all based on a photo that my ex-brother-in-law sent me. He was walking down a street in Glendale, California, came across a broken mirror that someone had thrown out, and took a photo of himself where he had five heads in the mirror. We get there on the day, and I’m realizing that this is really complicated. Antony has to do these five different performances, and we have to deal with infinite mirrors. At the last minute, I said, ‘We have to do this on a clean mirror.’ We did it on a clean mirror and gave Antony different eyelines. The mirror break was all done in post, and we were able to cheat his head slightly and art-direct where the break crosses his chin. Editorial was able to do split screens for the timing of the dialogue.”

Initially, the plan was to use a practical mirror, but creating a digital version proved to be the more effective solution.

A different spin on the bloodbath occurs during a fight when a drugged Frenchie (Tomer Capone) hallucinates as Kimiko Miyashiro (Karen Fukuhara) goes on a killing spree. “We went back and forth with a lot of different concepts for what this hallucination would be,” Fleet remarks. “When we filmed it, we landed on Frenchie having a synesthesia moment where he’s seeing a lot of abstract colors flying in the air. We started getting into that in post and it wasn’t working. We went back to the rubber duckies, which goes back to the story of him in the bathtub. What’s in the bathtub? Rubber duckies, bubbles and water. There was a lot of physics and logic required to figure out how these rubber duckies could float out of someone’s neck. We decided on bubbles when Kimiko hits people’s heads. At one point, we had water when she got shot, but it wasn’t working, so we killed it. We probably did about 100 different versions. We got really detailed with our rubber duckie modeling because we didn’t want it to look cartoony. That took a long time.”

Ambrosius, voiced by Tilda Swinton, gets a lot more screentime in Season 4.

Splinter (Rob Benedict) splitting in two was achieved heavily in CG. “Erik threw out the words ‘cellular mitosis’ early on as something he wanted to use,” Fleet states. “We shot Rob Benedict on a greenscreen doing all of the different performances for the clones that pop out. It was a crazy amount of CG work with Houdini and particle and skin effects. We previs’d the sequence so we had specific actions. 
One clone comes out to the right and the other pulls backwards.”

What tends to go unnoticed by many is Splinter’s clones setting up for a press conference being held by Firecracker (Valorie Curry). “It’s funny how no one brings up the 22-hour motion control shot that we had to do with Splinter on the stage, which was the most complicated shot!” Fleet observes. “We have this sweeping long shot that brings you into the room and follows Splinter as he carries a container to the stage and hands it off to a clone, and then you reveal five more of them interweaving with each other and interacting with all of these objects. It’s like a minute-long dance. First off, you have to choreograph it. We previs’d it, but then you need to get people to do it. We hired dancers and put different colored armbands on them. The camera is like another performer, and a metronome is going, which enables you to find a pace. That took about eight hours of rehearsal. Then Rob has to watch each one of their performances and mimic it to the beat. When he is handing off a box of cables, it’s to a double who is going to have to be erased and be him on the other side. They have to be almost perfect in their timing and lineup in order to take it over in visual effects and make it work.”
  • Formentera20 is back, and this time it promises to be even more enlightening than the previous eleven editions combined. Can you feel the excitement in the air? From October 2 to 4, 2025, the idyllic shores of Formentera will serve as the perfect backdrop for our favorite gathering of digital wizards, creativity gurus, and communication mavens. Because nothing says "cutting-edge innovation" quite like a Mediterranean island where you can sip on your coconut water while discussing the latest trends in the digital universe.

    This year’s theme? A delightful concoction of culture, creativity, and communication—all served with a side of salty sea breeze. Who knew the key to world-class networking was just a plane ticket away to a beach? Forget about conference rooms; nothing like a sun-kissed beach to inspire groundbreaking ideas. Surely, the sound of waves crashing will help us unlock the secrets of digital communication.

    And let’s not overlook the stellar lineup of speakers they've assembled. I can only imagine the conversations: “How can we boost engagement on social media?” followed by a collective nod as they all sip their overpriced organic juices. I’m sure the beach vibes will lend an air of authenticity to those discussions on algorithm tweaks and engagement metrics. Because nothing screams “authenticity” quite like a luxury resort hosting the crème de la crème of the advertising world.

    Let’s not forget the irony of discussing “innovation” while basking in the sun. Because what better way to innovate than to sit in a circle, wearing sunglasses, while contemplating the latest app that helps you find the nearest beach bar? It’s the dream, isn’t it? It’s almost poetic how the world of high-tech communication thrives in such a low-tech environment—a setting that leaves you wondering if the real innovation is simply the ability to disconnect from the digital chaos while still pretending to be a part of it.

    But let’s be real: the true highlight of Formentera20 is not the knowledge shared or the networking done; it’s the Instagram posts that will flood our feeds. After all, who doesn’t want to showcase their “hard work” at a digital festival by posting a picture of themselves with a sunset in the background? It’s all about branding, darling.

    So, mark your calendars! Prepare your best beach outfit and your most serious expression for photos. Come for the culture, stay for the creativity, and leave with the satisfaction of having been part of something that sounds ridiculously important while you, in reality, are just enjoying a holiday under the guise of professional development.

    In the end, Formentera20 isn’t just a festival; it’s an experience—one that lets you bask in the sun while pretending you’re solving the world’s digital problems. Cheers to innovation, creativity, and the art of making work look like a vacation!

    #Formentera20 #digitalculture #creativity #communication #innovation
    Formentera20 announces the speakers for its 12th edition: digital culture, creativity and communication by the sea
    From October 2 to 4, 2025, the island of Formentera will once again become a meeting point for professionals from the digital, creative and strategic fields. The Formentera20 festival will celebrate its twelfth edition with a lineup that, one year…
  • Lately, I've been seeing a lot of authors on TikTok, posting videos under the hashtag #WritersTok. Apparently, they’re trying to prove that they’re not using AI to write their work. It’s kind of funny, I guess. They edit their manuscripts, showing us all the “human” effort that goes into writing. But honestly, it feels a bit pointless.

    I mean, do we really need to see authors editing? Isn’t that something we just assume they do? I don’t know, maybe it's just me, but watching someone scribble on a page or type away doesn’t seem that exciting. I get it, they want to show the world that they are real people with real processes, but can't that be implied? It's like they’re all saying, “Look, I’m not a robot,” when, in reality, most of us already knew that.

    The whole protest against AI in writing feels a bit overblown. Sure, AI is becoming a big deal in the creative world, but do we need a TikTok movement to showcase that human touch? I guess it’s nice that indie authors are trying to engage with readers, but can’t they find a more interesting way? Maybe just write more, I don’t know.

    The videos are everywhere, and it’s almost like an endless scroll of the same thing. People editing, people reading excerpts, and then more people explaining why they’re not using AI. It’s all a bit much. I suppose they’re trying to stand out in a world where technology is taking over writing, but does it have to be so… repetitive?

    Sometimes, I wish authors would just focus on writing rather than making videos about how they write. We all know writing is hard work, and they don’t need to prove it to anyone. Maybe I’m just feeling a bit lazy about it all. Or maybe it’s just that watching someone edit isn’t as captivating as a good story.

    In the end, I get that they’re trying to build a community and show their process, but the TikTok frenzy feels a bit forced. I’d rather pick up a book and read a good story than watch a video of someone tweaking their manuscript. But hey, that’s just me.

    #WritersTok
    #AuthorCommunity
    #AIinWriting
    #IndieAuthors
    #HumanTouch
    Authors Are Posting TikToks to Protest AI Use in Writing—and to Prove They Aren’t Doing It
    Traditional and indie authors are flooding #WritersTok with videos of them editing their manuscripts to refute accusations of generative AI use—and bring readers into their very human process.
  • Honestly, I’ve been thinking about those notification settings on the Switch 2 lately. You know, the ones that are supposed to enhance your gaming experience or whatever. Sometimes, I just want to play without being bombarded by notifications. It’s like, do I really need to know what everyone is doing all the time?

    Sure, I get that some people enjoy staying connected and knowing all the latest updates from friends. But when you’re deep into a game, the last thing you want is a ping interrupting your epic quest. I guess that’s where tweaking those notification settings comes in.

    You could turn off some notifications or even reduce the number you receive. That might help keep the distractions to a minimum. But honestly, it’s a bit of a hassle to go through all those settings. I mean, who has the energy for that? Just thinking about it makes me want to take a nap instead of adjusting my Switch 2.

    Anyway, I know it’s probably a good idea to customize the notifications for a better experience. But sometimes, I feel like I’d rather just let things be. A few fewer notifications might make it easier to dive into a game without losing focus. But hey, if you’re like me and can’t be bothered, that’s fine too.

    At the end of the day, it’s all about finding that balance. You don’t want to miss out on important updates, but you also don’t want your game time interrupted. So, maybe just do whatever feels right for you. Or don’t. It’s all the same, really.

    #NintendoSwitch2 #NotificationSettings #GamingExperience #Distractions #GameTime
    Your Switch 2 Has Notification Settings You Should Tweak For A Better Experience
    Personally, I usually like receiving notifications about things so I know what’s up with the people in my life. But if you’re playing an immersive game on your fancy new Nintendo Switch 2, you may want to ensure there are no distractions. In that cas
  • Enhancing Particle Trails in UE5 #shorts

    In this quick clip, we explore how to adjust the ribbon width and scale for stunning particle trails in Unreal Engine 5. Watch as we transform our visuals with simple tweaks! #UnrealEngine #NiagaraVFX #ParticleTrails #GameDevelopment #VFXTips
    WWW.YOUTUBE.COM
    Enhancing Particle Trails in UE5 #shorts
    In this quick clip, we explore how to adjust the ribbon width and scale for stunning particle trails in Unreal Engine 5. Watch as we transform our visuals with simple tweaks! #UnrealEngine #NiagaraVFX #ParticleTrails #GameDevelopment #VFXTips
  • THIS Unexpected Rug Trend Is Taking Over—Here's How to Style It

    Pictured above: A dining room in Dallas, Texas, designed by Studio Thomas James.

As you design a room at home, you may have specific ideas about the paint color, furniture placement, and even the lighting scheme your space requires to truly sing. But if you're not also considering what type of rug will ground the entire look, this essential room-finishing touch may end up feeling like an afterthought. After all, one of the best ways to ensure your space looks expertly planned from top to bottom is to opt for a rug that can anchor the whole space—and, in many cases, that means a maximalist rug.

A maximalist-style rug, or one that has a bold color, an abstract or asymmetrical pattern, an organic shape, distinctive pile texture, or unconventional application, offers a fresh answer to the perpetual design question, "What is this room missing?" Instead of defaulting to a neutral-colored, low-pile rug that goes largely unnoticed, a compelling case can be made for choosing a design that functions more as a tactile piece of art.

Asha Chaudhary, the CEO of Jaipur, India-based rug brand Jaipur Living, has noticed many consumers moving away from "safe" interiors and embracing designs that pop with personality. "There’s a growing desire to design with individuality and soul. A vibrant or highly detailed rug can instantly transform a space by adding movement, contrast, and character, all in one single piece," she says.

Ahead, we spoke to Chaudhary to get her essential tips for choosing the right maximalist rug for your design style, how to evaluate the construction of a piece, and even why you should think outside the box when it comes to the standard area rug shape. 
Turns out, this foundational mainstay can be a deeply personal expression of identity.Related StoriesWhen a Maximalist Rug Makes SenseJohn MerklAn outdoor lounge in Healdsburg, California, designed by Sheldon Harte.As you might imagine, integrating a maximalist rug into an existing aesthetic isn't about making a one-to-one swap. You'll want to refine your overall approach and potentially tweak elements of the room already in place, too."I like to think about rugs this way: Sometimes they play a supporting role, and other times, they’re the hero of the room," Chaudhary says. "Statement rugs are designed to stand out. They tell stories, stir emotion, and ground a space the way a bold piece of art would."In Chaudhary's work with interior designers who are selecting rugs for clients' high-end homes, she's noticed that tastes have recently swung toward a more maximalist ethos."Designers are leaning into expression and individuality," she says. "There’s growing interest in bold patterns, asymmetry, and designs that reflect the hand of the maker. Color-wise, we’re seeing more adventurous palettes: think jades, bordeauxes, and terracottas. And there’s a strong desire for rugs that feel personal, like they carry a story or a memory." Jaipur LivingJaipur Living’s Manchaha rugs are one-of-a-kind, hand-knotted pieces woven from upcycled hand-spun yarn that follow a freeform design of the artisan’s choosing.Jaipur LivingJaipur Living is uniquely positioned to fulfill the need for one-of-a-kind rugs that are not just visually striking within a space, but deeply meaningful as well. The brand's Manchaha collectioncomprises rugs made of upcycled yarn, each hand-knotted by rural Indian artisans in freeform shapes that capture the imagination."Each piece is designed from the heart of the artisan, with no predetermined pattern, just emotion, inspiration, and memory woven together by hand. 
What excites me most is this shift away from perfection and toward beauty that feels lived-in, layered, and real," she adds.There’s a strong desire for rugs that feel personal, like they carry a story or a memory.Related StoryHow to Choose the Right Maximalist RugBrittany AmbridgeDesign firm Drake/Anderson reimagined this Greenwich, Connecticut, living room. Good news for those who are taking a slow-decorating approach with their home: Finding the right maximalist rug for your space means looking at the big picture first."Most shoppers start with size and color, but the first question should really be, 'How will this space be used?' That answer guides everything—material, construction, and investment," says Chaudhary.Are you styling an off-limits living room or a lively family den where guests may occasionally wander in with shoes on? In considering your materials, you may want to opt for a performance-fabric rug for areas subject to frequent wear and tear, but Chaudhary has a clear favorite for nearly all other spaces. "Wool is the gold standard. It’s naturally resilient, stain-resistant, and has excellent bounce-back, meaning it recovers well from foot traffic and furniture impressions," she says. "It’s also moisture-wicking and insulating, making it an ideal choice for both comfort and durability."As far as construction goes, Chaudhary breaks down the most widely available options on the market: A hand-knotted rug, crafted by tying individual knots, is the most durable construction and can last decades, even with daily use.Hand-tufted rugs offer a beautiful look at a more accessible price point, but typically won’t have the same lifespan. Power-loomed rugs can be a great solution for high-traffic areas when made with quality materials. Though they fall at the higher end of the price spectrum, hand-knotted rugs aren't meant to be untouchable—after all, their quality construction helps ensure that they can stand up to minor mishaps in day-to-day living. 
This can shift your appreciation of a rug from a humble underfoot accent to a long-lasting art piece worthy of care and intentional restoration when the time comes. "Understanding these distinctions helps consumers make smarter, more lasting investments for their homes," Chaudhary says. Related StoryOpting for Unconventional Applications Lesley UnruhSarah Vaile designed this vibrant vestibule in Chicago, Illinois.Maximalist rugs encompass an impressively broad category, and even if you already have an area rug rolled out that you're happy with, there are alternative shapes you can choose, or ways in which they can imbue creative expression far beyond the floor."I’ve seen some incredibly beautiful applications of rugs as wall art. Especially when it comes to smaller or one-of-a-kind pieces, hanging them allows people to appreciate the detail, texture, and artistry at eye level," says Chaudhary. "Some designers have also used narrow runners as table coverings or layered over larger textiles for added dimension."Another interesting facet of maximalist rugs is that you can think outside the rectangle in terms of silhouette."We’re seeing more interest in irregular rug shapes, think soft ovals, curves, even asymmetrical outlines," says Chaudhary. "Clients are designing with more fluidity and movement in mind, especially in open-plan spaces. Extra-long runners, oversized circles, and multi-shape layouts are also trending."Ultimately, the best maximalist rug for you is one that meets your home's needs while highlighting your personal style. In spaces where dramatic light fixtures or punchy paint colors aren't practical or allowed, a statement-making rug is the ideal solution. While trends will continue to evolve, honing in on a unique—even tailor-made—design will help ensure aesthetic longevity. Follow House Beautiful on Instagram and TikTok.
    WWW.HOUSEBEAUTIFUL.COM
  • Apple WWDC 2025: News and analysis

    Apple’s Worldwide Developers Conference 2025 saw a range of announcements that offered a glimpse into the future of Apple’s software design and artificial intelligence (AI) strategy, highlighted by a new design language called Liquid Glass and by Apple Intelligence news.

    Liquid Glass is designed to add translucency and dynamic movement to Apple’s user interface across iPhones, iPads, Macs, Apple Watches, and Apple TVs. The overhaul aims to make interactions with elements like buttons and sidebars adapt contextually.

    However, the real news of WWDC could be what we didn’t see. Analysts had high expectations for Apple’s AI strategy, and while Apple Intelligence was talked about, many market watchers reported that it lacked the innovation that has come from Google’s and Microsoft’s generative AI (genAI) rollouts.

    The question of whether Apple is playing catch-up lingered at WWDC 2025, and comments from Apple execs about delays to a significant AI overhaul for Siri were apparently interpreted as a setback by investors, leading to a negative reaction and drop in stock price.

    Follow this page for Computerworld’s coverage of WWDC25.

    WWDC25 news and analysis

    Apple’s AI Revolution: Insights from WWDC

    June 13, 2025: At Apple’s big developer event, developers were served a feast of AI-related updates, including APIs that let them use Apple Intelligence in their apps and ChatGPT-augmentation from within Xcode. As a development environment, Apple has secured its future, with Macs forming the most computationally performant systems you can affordably purchase for the job.

    For developers, Apple’s tools get a lot better for AI

    June 12, 2025: Apple announced one important AI update at WWDC this week: the introduction of support for third-party large language models (LLMs) such as ChatGPT from within Xcode. It’s a big step that should benefit developers, accelerating app development.

    WWDC 25: What’s new for Apple and the enterprise?

    June 11, 2025: Beyond its new Liquid Glass UI and other major improvements across its operating systems, Apple introduced a horde of changes, tweaks, and enhancements for IT admins at WWDC 2025.

    What we know so far about Apple’s Liquid Glass UI

    June 10, 2025: What Apple has tried to achieve with Liquid Glass is to bring together the optical quality of glass and the fluidity of liquid to emphasize transparency and lighting when using your devices.

    WWDC first look: How Apple is improving its ecosystem

    June 9, 2025: While the new user interface design Apple execs highlighted at this year’s Worldwide Developers Conference (WWDC) might have been a bit of an eye-candy distraction, Apple’s enterprise users were not forgotten.

    Apple infuses AI into the Vision Pro

    June 8, 2025: Sluggish sales of Apple’s Vision Pro mixed reality headset haven’t dampened the company’s enthusiasm for advancing the device’s 3D computing experience, which now incorporates AI to deliver richer context and experiences.

    WWDC: Apple is about to unlock international business

    June 4, 2025: One of the more exciting pre-WWDC rumors is that Apple is preparing to make language problems go away by implementing focused artificial intelligence in Messages, which will apparently be able to translate incoming and outgoing messages on the fly. 
    WWW.COMPUTERWORLD.COM
  • Why Designers Get Stuck In The Details And How To Stop

    You’ve drawn fifty versions of the same screen — and you still hate every one of them. Begrudgingly, you pick three, show them to your product manager, and hear: “Looks cool, but the idea doesn’t work.” Sound familiar?
    In this article, I’ll unpack why designers fall into detail work at the wrong moment, examining both process pitfalls and the underlying psychological reasons, as understanding these traps is the first step to overcoming them. I’ll also share tactics I use to climb out of that trap.
    Reason #1: You’re Afraid To Show Rough Work
    We designers worship detail. We’re taught that true craft equals razor‑sharp typography, perfect grids, and pixel precision. So the minute a task arrives, we pop open Figma and start polishing long before polish is needed.
    I’ve skipped the sketch phase more times than I care to admit. I told myself it would be faster, yet I always ended up spending hours producing a tidy mock‑up when a scribbled thumbnail would have sparked a five‑minute chat with my product manager. Rough sketches felt “unprofessional,” so I hid them.
    The cost? Lost time, wasted energy — and, by the third redo, teammates were quietly wondering if I even understood the brief.
    The real problem here is the habit: we open Figma and start perfecting the UI before we’ve even solved the problem.
So why do we hide these rough sketches? It’s not just a bad habit or plain silly; there are solid psychological reasons behind it. We often just call it perfectionism, but it’s deeper than wanting things neat. Digging into the psychology (like the research by Hewitt and Flett) shows there are a couple of flavors driving this:

Socially prescribed perfectionism: the nagging feeling that everyone else expects perfect work from you, which makes showing anything rough feel like walking into the lion’s den.
Self-oriented perfectionism: you’re the one setting impossibly high standards for yourself, leading to brutal self-criticism if anything looks slightly off.

    Either way, the result’s the same: showing unfinished work feels wrong, and you miss out on that vital early feedback.
    Back to the design side, remember that clients rarely see architects’ first pencil sketches, but these sketches still exist; they guide structural choices before the 3D render. Treat your thumbnails the same way — artifacts meant to collapse uncertainty, not portfolio pieces. Once stakeholders see the upside, roughness becomes a badge of speed, not sloppiness. So, the key is to consciously make that shift:
    Treat early sketches as disposable tools for thinking and actively share them to get feedback faster.

    Reason #2: You Fix The Symptom, Not The Cause
    Before tackling any task, we need to understand what business outcome we’re aiming for. Product managers might come to us asking to enlarge the payment button in the shopping cart because users aren’t noticing it. The suggested solution itself isn’t necessarily bad, but before redesigning the button, we should ask, “What data suggests they aren’t noticing it?” Don’t get me wrong, I’m not saying you shouldn’t trust your product manager. On the contrary, these questions help ensure you’re on the same page and working with the same data.
    From my experience, here are several reasons why users might not be clicking that coveted button:

    Users don’t understand that this step is for payment.
    They understand it’s about payment but expect order confirmation first.
    Due to incorrect translation, users don’t understand what the button means.
Lack of trust signals (no security icons, unclear seller information).
Unexpected additional costs (hidden fees, shipping) that appear at this stage.
Technical issues (inactive button, page freezing).

    Now, imagine you simply did what the manager suggested. Would you have solved the problem? Hardly.
Moreover, the responsibility for the unresolved issue would fall on you, as the interface solution lies within the design domain. The product manager actually did their job correctly by identifying a problem: suspiciously few users are clicking the button.
Psychologically, taking on this bigger role isn’t easy. It means overcoming the fear of making mistakes and the discomfort of exploring unclear problems rather than just doing tasks. This shift means seeing ourselves as partners who create value — even if it means fighting a hesitation to question product managers (which might come from a fear of speaking up or a desire to avoid challenging authority) — and understanding that using our product logic expertise proactively is crucial for modern designers.
    There’s another critical reason why we, designers, need to be a bit like product managers: the rise of AI. I deliberately used a simple example about enlarging a button, but I’m confident that in the near future, AI will easily handle routine design tasks. This worries me, but at the same time, I’m already gladly stepping into the product manager’s territory: understanding product and business metrics, formulating hypotheses, conducting research, and so on. It might sound like I’m taking work away from PMs, but believe me, they undoubtedly have enough on their plates and are usually more than happy to delegate some responsibilities to designers.
    Reason #3: You’re Solving The Wrong Problem
    Before solving anything, ask whether the problem even deserves your attention.
During a major home‑screen redesign, our goal was to drive more users into paid services. The initial hypothesis — making service buttons bigger and brighter might help returning users — seemed reasonable enough to test. However, even when A/B tests (a method of comparing two versions of a design to determine which performs better) showed minimal impact, we continued to tweak those buttons.
    Only later did it click: the home screen isn’t the place to sell; visitors open the app to start, not to buy. We removed that promo block, and nothing broke. Contextual entry points deeper into the journey performed brilliantly. Lesson learned:
    Without the right context, any visual tweak is lipstick on a pig.

    Why did we get stuck polishing buttons instead of stopping sooner? It’s easy to get tunnel vision. Psychologically, it’s likely the good old sunk cost fallacy kicking in: we’d already invested time in the buttons, so stopping felt like wasting that effort, even though the data wasn’t promising.
    It’s just easier to keep fiddling with something familiar than to admit we need a new plan. Perhaps the simple question I should have asked myself when results stalled was: “Are we optimizing the right thing or just polishing something that fundamentally doesn’t fit the user’s primary goal here?” That alone might have saved hours.
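As an aside, the "minimal impact" verdict in an A/B test like the one above is the kind of thing worth checking with numbers rather than gut feel. Here is a minimal sketch of a two-proportion z-test; the conversion counts are invented for illustration and are not from the redesign described above:

```python
import math

def two_proportion_ztest(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test: did variant B convert differently from variant A?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference)
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the standard normal CDF (expressed with erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical numbers: bigger buttons moved 1000 visitors from
# 200 conversions to 230 — looks promising, but p > 0.05, so the
# "improvement" is not statistically distinguishable from noise.
z, p = two_proportion_ztest(200, 1000, 230, 1000)
```

A result like this is exactly the signal to step back and question the hypothesis itself, instead of shipping another round of button tweaks.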
    Reason #4: You’re Drowning In Unactionable Feedback
    We all discuss our work with colleagues. But here’s a crucial point: what kind of question do you pose to kick off that discussion? If your go-to is “What do you think?” well, that question might lead you down a rabbit hole of personal opinions rather than actionable insights. While experienced colleagues will cut through the noise, others, unsure what to evaluate, might comment on anything and everything — fonts, button colors, even when you desperately need to discuss a user flow.
    What matters here are two things:

    The question you ask,
    The context you give.

    That means clearly stating the problem, what you’ve learned, and how your idea aims to fix it.
    For instance:
    “The problem is our payment conversion rate has dropped by X%. I’ve interviewed users and found they abandon payment because they don’t understand how the total amount is calculated. My solution is to show a detailed cost breakdown. Do you think this actually solves the problem for them?”

Here, you’ve stated the problem (conversion drop), shared your insight (user confusion), explained your solution (cost breakdown), and asked a direct question. It’s even better if you prepare a list of specific sub-questions. For instance: “Are all items in the cost breakdown clear?” or “Does the placement of this breakdown feel intuitive within the payment flow?”
    Another good habit is to keep your rough sketches and previous iterations handy. Some of your colleagues’ suggestions might be things you’ve already tried. It’s great if you can discuss them immediately to either revisit those ideas or definitively set them aside.
    I’m not a psychologist, but experience tells me that, psychologically, the reluctance to be this specific often stems from a fear of our solution being rejected. We tend to internalize feedback: a seemingly innocent comment like, “Have you considered other ways to organize this section?” or “Perhaps explore a different structure for this part?” can instantly morph in our minds into “You completely messed up the structure. You’re a bad designer.” Imposter syndrome, in all its glory.
    So, to wrap up this point, here are two recommendations:

Prepare for every design discussion. A couple of focused questions will yield far more valuable input than a vague “So, what do you think?”
Actively work on separating feedback on your design from your self-worth. If a mistake is pointed out, acknowledge it, learn from it, and you’ll be less likely to repeat it. This is often easier said than done. For me, it took years of working with a psychotherapist. If you struggle with this, I sincerely wish you strength in overcoming it.

Reason #5: You’re Just Tired
    Sometimes, the issue isn’t strategic at all — it’s fatigue. Fussing over icon corners can feel like a cozy bunker when your brain is fried. There’s a name for this: decision fatigue. Basically, your brain’s battery for hard thinking is low, so it hides out in the easy, comfy zone of pixel-pushing.
A striking example comes from a New York Times article titled “Do You Suffer From Decision Fatigue?” It described how judges deciding on release requests were far more likely to grant release early in the day (about 70% of cases) than late in the day (less than 10%), simply because their decision-making energy was depleted. Luckily, designers rarely hold someone’s freedom in their hands, but the example dramatically shows how fatigue can impact our judgment and productivity.
    What helps here:

Swap tasks. Trade tickets with another designer; novelty resets your focus.
Talk to another designer. If an NDA permits, ask peers outside the team for a sanity check.
Step away. Even a ten‑minute walk can do more than a double‑shot espresso.

    By the way, I came up with these ideas while walking around my office. I was lucky to work near a river, and those short walks quickly turned into a helpful habit.

    And one more trick that helps me snap out of detail mode early: if I catch myself making around 20 little tweaks — changing font weight, color, border radius — I just stop. Over time, it turned into a habit. I have a similar one with Instagram: by the third reel, my brain quietly asks, “Wait, weren’t we working?” Funny how that kind of nudge saves a ton of time.
    Four Steps I Use to Avoid Drowning In Detail
    Knowing these potential traps, here’s the practical process I use to stay on track:
    1. Define the Core Problem & Business Goal
    Before anything, dig deep: what’s the actual problem we’re solving, not just the requested task or a surface-level symptom? Ask ‘why’ repeatedly. What user pain or business need are we addressing? Then, state the clear business goal: “What metric am I moving, and do we have data to prove this is the right lever?” If retention is the goal, decide whether push reminders, gamification, or personalised content is the best route. The wrong lever, or tackling a symptom instead of the cause, dooms everything downstream.
2. Choose the Mechanic
Once the core problem and goal are clear, lock the solution principle, or “mechanic,” first. Going with a game layer? Decide if it’s leaderboards, streaks, or badges. Write it down. Then move on. No UI yet. This keeps the focus high-level before diving into pixels.
    3. Wireframe the Flow & Get Focused Feedback
Now open Figma. Map screens, layout, and transitions. Boxes and arrows are enough. Keep the fidelity low so the discussion stays on the flow, not colour. Crucially, when you share these early wires, ask specific questions and provide clear context to get actionable feedback, not just vague opinions.
4. Polish the Visuals
I only let myself tweak grids, type scales, and shadows after the flow is validated. If progress stalls, or before a major polish effort, I surface the work in a design critique — again using targeted questions and clear context — instead of hiding in version 47. This ensures detailing serves the now-validated solution.
    Even for something as small as a single button, running these four checkpoints takes about ten minutes and saves hours of decorative dithering.
    Wrapping Up
Next time you feel the pull to vanish into mock‑ups before the problem is nailed down, pause and ask what you might be avoiding — maybe the fuzzy core problem, or simply asking for tough feedback. Yes, that can expose an uncomfortable truth, but it gives you the power to face the real issue head-on, and it keeps the project focused on solving the right problem, not just perfecting a flawed solution.
    Attention to detail is a superpower when used at the right moment. Obsessing over pixels too soon, though, is a bad habit and a warning light telling us the process needs a rethink.
    SMASHINGMAGAZINE.COM
    Why Designers Get Stuck In The Details And How To Stop
    You’ve drawn fifty versions of the same screen — and you still hate every one of them. Begrudgingly, you pick three, show them to your product manager, and hear: “Looks cool, but the idea doesn’t work.” Sound familiar? In this article, I’ll unpack why designers fall into detail work at the wrong moment, examining both process pitfalls and the underlying psychological reasons, as understanding these traps is the first step to overcoming them. I’ll also share tactics I use to climb out of that trap. Reason #1 You’re Afraid To Show Rough Work We designers worship detail. We’re taught that true craft equals razor‑sharp typography, perfect grids, and pixel precision. So the minute a task arrives, we pop open Figma and start polishing long before polish is needed. I’ve skipped the sketch phase more times than I care to admit. I told myself it would be faster, yet I always ended up spending hours producing a tidy mock‑up when a scribbled thumbnail would have sparked a five‑minute chat with my product manager. Rough sketches felt “unprofessional,” so I hid them. The cost? Lost time, wasted energy — and, by the third redo, teammates were quietly wondering if I even understood the brief. The real problem here is the habit: we open Figma and start perfecting the UI before we’ve even solved the problem. So why do we hide these rough sketches? It’s not just a bad habit or plain silly. There are solid psychological reasons behind it. We often just call it perfectionism, but it’s deeper than wanting things neat. Digging into the psychology (like the research by Hewitt and Flett) shows there are a couple of flavors driving this: Socially prescribed perfectionismIt’s that nagging feeling that everyone else expects perfect work from you, which makes showing anything rough feel like walking into the lion’s den. Self-oriented perfectionismWhere you’re the one setting impossibly high standards for yourself, leading to brutal self-criticism if anything looks slightly off. 
Either way, the result’s the same: showing unfinished work feels wrong, and you miss out on that vital early feedback. Back to the design side, remember that clients rarely see architects’ first pencil sketches, but these sketches still exist; they guide structural choices before the 3D render. Treat your thumbnails the same way — artifacts meant to collapse uncertainty, not portfolio pieces. Once stakeholders see the upside, roughness becomes a badge of speed, not sloppiness. So, the key is to consciously make that shift: Treat early sketches as disposable tools for thinking and actively share them to get feedback faster. Reason #2: You Fix The Symptom, Not The Cause Before tackling any task, we need to understand what business outcome we’re aiming for. Product managers might come to us asking to enlarge the payment button in the shopping cart because users aren’t noticing it. The suggested solution itself isn’t necessarily bad, but before redesigning the button, we should ask, “What data suggests they aren’t noticing it?” Don’t get me wrong, I’m not saying you shouldn’t trust your product manager. On the contrary, these questions help ensure you’re on the same page and working with the same data. From my experience, here are several reasons why users might not be clicking that coveted button: Users don’t understand that this step is for payment. They understand it’s about payment but expect order confirmation first. Due to incorrect translation, users don’t understand what the button means. Lack of trust signals (no security icons, unclear seller information). Unexpected additional costs (hidden fees, shipping) that appear at this stage. Technical issues (inactive button, page freezing). Now, imagine you simply did what the manager suggested. Would you have solved the problem? Hardly. Moreover, the responsibility for the unresolved issue would fall on you, as the interface solution lies within the design domain. 
The product manager actually did their job correctly by identifying a problem: suspiciously, few users are clicking the button. Psychologically, taking on this bigger role isn’t easy. It means overcoming the fear of making mistakes and the discomfort of exploring unclear problems rather than just doing tasks. This shift means seeing ourselves as partners who create value — even if it means fighting a hesitation to question product managers (which might come from a fear of speaking up or a desire to avoid challenging authority) — and understanding that using our product logic expertise proactively is crucial for modern designers. There’s another critical reason why we, designers, need to be a bit like product managers: the rise of AI. I deliberately used a simple example about enlarging a button, but I’m confident that in the near future, AI will easily handle routine design tasks. This worries me, but at the same time, I’m already gladly stepping into the product manager’s territory: understanding product and business metrics, formulating hypotheses, conducting research, and so on. It might sound like I’m taking work away from PMs, but believe me, they undoubtedly have enough on their plates and are usually more than happy to delegate some responsibilities to designers. Reason #3: You’re Solving The Wrong Problem Before solving anything, ask whether the problem even deserves your attention. During a major home‑screen redesign, our goal was to drive more users into paid services. The initial hypothesis — making service buttons bigger and brighter might help returning users — seemed reasonable enough to test. However, even when A/B tests (a method of comparing two versions of a design to determine which performs better) showed minimal impact, we continued to tweak those buttons. Only later did it click: the home screen isn’t the place to sell; visitors open the app to start, not to buy. We removed that promo block, and nothing broke. 
Contextual entry points deeper into the journey performed brilliantly. Lesson learned: Without the right context, any visual tweak is lipstick on a pig. Why did we get stuck polishing buttons instead of stopping sooner? It’s easy to get tunnel vision. Psychologically, it’s likely the good old sunk cost fallacy kicking in: we’d already invested time in the buttons, so stopping felt like wasting that effort, even though the data wasn’t promising. It’s just easier to keep fiddling with something familiar than to admit we need a new plan. Perhaps the simple question I should have asked myself when results stalled was: “Are we optimizing the right thing or just polishing something that fundamentally doesn’t fit the user’s primary goal here?” That alone might have saved hours. Reason #4: You’re Drowning In Unactionable Feedback We all discuss our work with colleagues. But here’s a crucial point: what kind of question do you pose to kick off that discussion? If your go-to is “What do you think?” well, that question might lead you down a rabbit hole of personal opinions rather than actionable insights. While experienced colleagues will cut through the noise, others, unsure what to evaluate, might comment on anything and everything — fonts, button colors, even when you desperately need to discuss a user flow. What matters here are two things: The question you ask, The context you give. That means clearly stating the problem, what you’ve learned, and how your idea aims to fix it. For instance: “The problem is our payment conversion rate has dropped by X%. I’ve interviewed users and found they abandon payment because they don’t understand how the total amount is calculated. My solution is to show a detailed cost breakdown. Do you think this actually solves the problem for them?” Here, you’ve stated the problem (conversion drop), shared your insight (user confusion), explained your solution (cost breakdown), and asked a direct question. 
It’s even better if you prepare a list of specific sub-questions. For instance: “Are all items in the cost breakdown clear?” or “Does the placement of this breakdown feel intuitive within the payment flow?”

Another good habit is to keep your rough sketches and previous iterations handy. Some of your colleagues’ suggestions might be things you’ve already tried. It’s great if you can discuss them immediately, to either revisit those ideas or definitively set them aside.

I’m not a psychologist, but experience tells me that, psychologically, the reluctance to be this specific often stems from a fear of our solution being rejected. We tend to internalize feedback: a seemingly innocent comment like “Have you considered other ways to organize this section?” or “Perhaps explore a different structure for this part?” can instantly morph in our minds into “You completely messed up the structure. You’re a bad designer.” Imposter syndrome, in all its glory.

So, to wrap up this point, here are two recommendations:

Prepare for every design discussion. A couple of focused questions will yield far more valuable input than a vague “So, what do you think?”.

Actively work on separating feedback on your design from your self-worth. If a mistake is pointed out, acknowledge it, learn from it, and you’ll be less likely to repeat it. This is often easier said than done. For me, it took years of working with a psychotherapist. If you struggle with this, I sincerely wish you strength in overcoming it.

Reason #5: You’re Just Tired

Sometimes, the issue isn’t strategic at all — it’s fatigue. Fussing over icon corners can feel like a cozy bunker when your brain is fried. There’s a name for this: decision fatigue. Basically, your brain’s battery for hard thinking is low, so it hides out in the easy, comfy zone of pixel-pushing.
A striking example comes from a New York Times article titled “Do You Suffer From Decision Fatigue?” It described how judges deciding on release requests were far more likely to grant release early in the day (about 70% of cases) than late in the day (less than 10%), simply because their decision-making energy was depleted. Luckily, designers rarely hold someone’s freedom in their hands, but the example dramatically shows how fatigue can impact our judgment and productivity.

What helps here:

Swap tasks. Trade tickets with another designer; novelty resets your focus.
Talk to another designer. If the NDA permits, ask peers outside the team for a sanity check.
Step away. Even a ten-minute walk can do more than a double-shot espresso.

By the way, I came up with these ideas while walking around my office. I was lucky to work near a river, and those short walks quickly turned into a helpful habit. And one more trick that helps me snap out of detail mode early: if I catch myself making around 20 little tweaks — changing font weight, color, border radius — I just stop. Over time, it turned into a habit. I have a similar one with Instagram: by the third reel, my brain quietly asks, “Wait, weren’t we working?” Funny how that kind of nudge saves a ton of time.

Four Steps I Use To Avoid Drowning In Detail

Knowing these potential traps, here’s the practical process I use to stay on track:

1. Define the Core Problem & Business Goal

Before anything, dig deep: what’s the actual problem we’re solving, not just the requested task or a surface-level symptom? Ask “why” repeatedly. What user pain or business need are we addressing? Then, state the clear business goal: “What metric am I moving, and do we have data to prove this is the right lever?” If retention is the goal, decide whether push reminders, gamification, or personalised content is the best route. The wrong lever, or tackling a symptom instead of the cause, dooms everything downstream.

2. Choose the Mechanic (Solution Principle)

Once the core problem and goal are clear, lock the solution principle, or “mechanic”, first. Going with a game layer? Decide if it’s leaderboards, streaks, or badges. Write it down. Then move on. No UI yet. This keeps the focus high-level before diving into pixels.

3. Wireframe the Flow & Get Focused Feedback

Now open Figma. Map screens, layout, and transitions. Boxes and arrows are enough. Keep the fidelity low so the discussion stays on the flow, not colour. Crucially, when you share these early wires, ask specific questions and provide clear context (as discussed in Reason #4) to get actionable feedback, not just vague opinions.

4. Polish the Visuals (Mindfully)

I only let myself tweak grids, type scales, and shadows after the flow is validated. If progress stalls, or before a major polish effort, I surface the work in a design critique — again using targeted questions and clear context — instead of hiding in version 47. This ensures detailing serves the now-validated solution.

Even for something as small as a single button, running these four checkpoints takes about ten minutes and saves hours of decorative dithering.

Wrapping Up

Next time you feel the pull to vanish into mock-ups before the problem is nailed down, pause and ask what you might be avoiding. Yes, that can expose an uncomfortable truth. But naming what you’re avoiding — maybe the fuzzy core problem, or asking for tough feedback — gives you the power to face the real issue head-on. It keeps the project focused on solving the right problem, not just perfecting a flawed solution. Attention to detail is a superpower when used at the right moment. Obsessing over pixels too soon, though, is a bad habit and a warning light telling us the process needs a rethink.
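As a closing aside on the A/B tests mentioned in Reason #3: deciding whether a variant "showed minimal impact" comes down to comparing two conversion rates and asking if the difference exceeds noise. Here is a minimal sketch of a two-proportion z-test in plain Python; the numbers are made up for illustration and are not from the article.

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Compare conversion counts from two variants of an A/B test.

    Returns (z, p_value) for the two-sided test of equal conversion rates.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under the null hypothesis
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))          # two-sided normal tail probability
    return z, p_value

# Hypothetical numbers: bigger buttons (B) vs. the control (A).
z, p = two_proportion_z_test(conv_a=120, n_a=4000, conv_b=131, n_b=4000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With numbers like these, the p-value is large, which is exactly the "minimal impact" situation the article describes: continuing to polish the buttons cannot be justified by the data.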
  • Creating a Highly Detailed Tech-Inspired Scene with Blender

    Introduction

    Hello! My name is Denys. I was born and raised in Nigeria, where I'm currently based. I began my journey into 3D art in March 2022, teaching myself through online resources, starting, of course, with the iconic donut tutorial on YouTube. Since then, I've continued to grow my skills independently, and now I'm working toward a career in 3D generalism, with a particular interest in environment art.

    I originally got into Blender because SketchUp wasn't free, and I could not keep up with the subscriptions. While searching for alternatives, I came across Blender. That's when I realized I had installed it once years ago, but back then, the interface completely intimidated me, and I gave up on it. This time, though, I decided to stick with it – and I'm glad I did.

    I started out creating simple models. One of my first big projects was modeling the entire SpongeBob crew. That led to my first animation, and eventually, the first four episodes of a short animated series (though it's still incomplete). As I grew more confident, I began participating in online 3D competitions, like cgandwe, where I focused on designing realistic environments. Those experiences have played a huge role in getting me to where I am today.

    Getting Started

    Before starting any scene, I always look for references. It might not be the most original approach, but it's what works best for me. One piece that inspired me was a beautiful artwork by Calder Moore. I bookmarked it as soon as I saw it back in 2023, and luckily, I finally found the time to bring it to life last month.

    Blockout

    The goal was to match the original camera angle and roughly model the main frame of the structures. It wasn't perfect, but modeling and placing the lower docks helped me get the perspective right. Then I moved on to modeling and positioning the major structures in the scene.

    I gave myself two weeks to complete this project. And as much as I enjoy modeling, I also enjoy not modeling, so I turned to asset kits and free models to help speed things up. I came across an awesome paid kit by Bigmediumsmall and instantly knew it would fit perfectly into my scene. I also downloaded a few models from Sketchfab, including a lamp, desk console, freighter controls, and a robotic arm, which I later took apart to add extra detail. Another incredibly helpful tool was the Random Flow add-on by BlenderGuppy, which made adding sci-fi elements much easier. Lastly, I pulled in some models from my older sci-fi and cyberpunk projects to round things out.

    Kitbashing

    Once I had the overall shape I was aiming for, I moved on to kitbashing to pack in as much detail as possible. There wasn't any strict method to the madness; I simply picked assets I liked, whether it was a set of pipes, vents, or even a random shape that just worked in the sci-fi context. I focused first on kitbashing the front structure, and used the Random Flow add-on to fill in areas where I didn't kitbash manually. Then I moved on to the other collections, following the same process.

    The freighter was the final piece of the puzzle, and I knew it was going to be a challenge. Part of me wanted to model it entirely from scratch, but the more practical side knew I could save a lot of time by sticking with my usual method. So I modeled the main shapes myself, then kitbashed the details to bring it to life. I also grabbed some crates from Sketchfab to fill out the scene.

    Texturing

    This part was easily my favorite, and there was no shortcut here. I had to meticulously create each material myself. Well, I did use PBR materials downloaded from CGAmbient as a base, but I spent a lot of time tweaking and editing them to get everything just right. Texturing has always been my favorite stage when building scenes like this. Many artists prefer external tools like Substance 3D Painter (which I did use for some of the models), but I've learned so much about procedural texturing, especially from RyanKingArt, that I couldn't let it go. It's such a flexible and rewarding approach, and I love pushing it as far as I can.

    I wanted most of the colors in the scene to be dark, but I did keep the original color of the pipes and the pillars, just to add a little bit of vibrance. I also wanted the overall texture to be very rough and grungy. One of the biggest helps in achieving this was using the grunge maps from Substance 3D Painter. I found a way to extract them into Blender, and it helped.

    A major tool during the texturing phase was Jsplacement, which I used to procedurally generate sci-fi grids and plates. This was the icing on the cake for adding intricate details. Whenever an area felt too flat, I applied bump maps with these grids and panels to bring the materials to life. For example, both the lamp pole and the entire black metal material feature these Jsplacement maps.

    Lighting

    For this, I didn't do anything fancy. I knew the scene was set at a high altitude, so I looked for an HDRI with a cloudless sky, and I boosted the saturation a little to give it that high-altitude look.

    Post-Production

    The rendering phase was challenging since I was working on a low-end laptop. I couldn't render the entire scene all at once, so I broke it down by collections and rendered them as separate layers. Then, I composited the layers together in post-production. I'm not big on heavy post-work, so I kept it simple, mostly tweaking brightness and saturation on my phone. That's about it for the post-production process.

    Conclusion

    The entire project took me 10 days to complete, working at least four hours each day. Although I've expressed my love for texturing, my favorite part of this project was the detailing and kitbashing. I really enjoyed piecing all the small details together. The most challenging part was deciding which assets to use and where to place them. I had a lot of greebles to choose from, but I'm happy with the ones I selected; they felt like a perfect fit for the scene.

    I know kitbashing sometimes gets a negative reputation in the 3D community, but I found it incredibly freeing. Honestly, this project wouldn't have come together without it, so I fully embraced the process. I'm excited to keep making projects like this. The world of 3D art is truly an endless and vast realm, and I encourage every artist like me to keep exploring it, one project at a time.

    Denys Molokwu, 3D Artist
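    The per-collection rendering workflow described above depends on standard alpha-over compositing to reassemble the layers. Blender's compositor does this internally; purely as an illustration of the underlying math (plain Python on hypothetical RGBA pixels, not the Blender API):

    ```python
    def alpha_over(fg, bg):
        """Composite a premultiplied-alpha foreground pixel over a background pixel.

        Pixels are (r, g, b, a) tuples with premultiplied color channels, the
        convention used by EXR render layers.
        """
        fa = fg[3]
        return tuple(f + b * (1 - fa) for f, b in zip(fg, bg))

    def composite_layers(layers):
        """Stack render layers back-to-front, e.g. [background, midground, foreground]."""
        result = layers[0]
        for layer in layers[1:]:
            result = alpha_over(layer, result)
        return result

    # Hypothetical per-layer values for one pixel: an opaque sky, a
    # half-transparent haze pass, and an empty foreground pixel.
    sky       = (0.2, 0.4, 0.8, 1.0)
    haze      = (0.25, 0.25, 0.25, 0.5)   # premultiplied 50% grey haze
    structure = (0.0, 0.0, 0.0, 0.0)      # nothing on the foreground layer here
    print(composite_layers([sky, haze, structure]))
    ```

    Splitting a heavy scene into such layers trades one large render for several small ones, which is exactly what makes the approach viable on a low-end laptop.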
    80.LV